From patchwork Wed Sep 21 14:46:35 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12983800
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney", Thomas Gleixner, John Ogness, Petr Mladek
Subject: [PATCH RFC rcu 1/4] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic
Date: Wed, 21 Sep 2022 07:46:35 -0700
Message-Id: <20220921144638.1201361-1-paulmck@kernel.org>
In-Reply-To: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
References: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

NMI-safe variants of srcu_read_lock() and srcu_read_unlock() are needed
by printk(), and on many architectures providing NMI safety entails
read-modify-write atomic operations.  This commit prepares Tree SRCU for
this change by making both ->srcu_lock_count and ->srcu_unlock_count be
arrays of atomic_long_t.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Signed-off-by: Paul E. McKenney
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 include/linux/srcutree.h |  4 ++--
 kernel/rcu/srcutree.c    | 24 ++++++++++++------------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index e3014319d1ad..0c4eca07d78d 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -23,8 +23,8 @@ struct srcu_struct;
  */
 struct srcu_data {
 	/* Read-side state. */
-	unsigned long srcu_lock_count[2];	/* Locks per CPU. */
-	unsigned long srcu_unlock_count[2];	/* Unlocks per CPU. */
+	atomic_long_t srcu_lock_count[2];	/* Locks per CPU. */
+	atomic_long_t srcu_unlock_count[2];	/* Unlocks per CPU. */
 
 	/* Update-side state. */
 	spinlock_t __private lock ____cacheline_internodealigned_in_smp;
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 1c304fec89c0..6fd0665f4d1f 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -417,7 +417,7 @@ static unsigned long srcu_readers_lock_idx(struct srcu_struct *ssp, int idx)
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-		sum += READ_ONCE(cpuc->srcu_lock_count[idx]);
+		sum += atomic_long_read(&cpuc->srcu_lock_count[idx]);
 	}
 	return sum;
 }
@@ -434,7 +434,7 @@ static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx)
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-		sum += READ_ONCE(cpuc->srcu_unlock_count[idx]);
+		sum += atomic_long_read(&cpuc->srcu_unlock_count[idx]);
 	}
 	return sum;
 }
@@ -503,10 +503,10 @@ static bool srcu_readers_active(struct srcu_struct *ssp)
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-		sum += READ_ONCE(cpuc->srcu_lock_count[0]);
-		sum += READ_ONCE(cpuc->srcu_lock_count[1]);
-		sum -= READ_ONCE(cpuc->srcu_unlock_count[0]);
-		sum -= READ_ONCE(cpuc->srcu_unlock_count[1]);
+		sum += atomic_long_read(&cpuc->srcu_lock_count[0]);
+		sum += atomic_long_read(&cpuc->srcu_lock_count[1]);
+		sum -= atomic_long_read(&cpuc->srcu_unlock_count[0]);
+		sum -= atomic_long_read(&cpuc->srcu_unlock_count[1]);
 	}
 	return sum;
 }
@@ -636,7 +636,7 @@ int __srcu_read_lock(struct srcu_struct *ssp)
 	int idx;
 
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
-	this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
+	this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
 	return idx;
 }
@@ -650,7 +650,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
 void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
-	this_cpu_inc(ssp->sda->srcu_unlock_count[idx]);
+	this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -1687,8 +1687,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
 			struct srcu_data *sdp;
 
 			sdp = per_cpu_ptr(ssp->sda, cpu);
-			u0 = data_race(sdp->srcu_unlock_count[!idx]);
-			u1 = data_race(sdp->srcu_unlock_count[idx]);
+			u0 = data_race(sdp->srcu_unlock_count[!idx].counter);
+			u1 = data_race(sdp->srcu_unlock_count[idx].counter);
 
 			/*
 			 * Make sure that a lock is always counted if the corresponding
@@ -1696,8 +1696,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
 			 */
 			smp_rmb();
 
-			l0 = data_race(sdp->srcu_lock_count[!idx]);
-			l1 = data_race(sdp->srcu_lock_count[idx]);
+			l0 = data_race(sdp->srcu_lock_count[!idx].counter);
+			l1 = data_race(sdp->srcu_lock_count[idx].counter);
 
 			c0 = l0 - u0;
 			c1 = l1 - u1;
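
A note on why the conversion above is safe: atomic_long_t wraps a single
long named .counter, so the pre-existing fast path can keep doing plain
per-CPU increments on that field while the NMI-safe path added later in
this series does atomic read-modify-write operations on the same memory.
A minimal sketch of the two access styles in ordinary C with GCC builtins
rather than kernel primitives (my_atomic_long_t, fastpath_inc(), and
nmisafe_inc() are illustrative stand-ins, not kernel symbols; the kernel
relies on its own memory model for this mixing):

	/* Models atomic_long_t: one long behind a .counter field. */
	typedef struct { long counter; } my_atomic_long_t;

	/* Models this_cpu_inc(...counter): plain increment, no RMW atomic. */
	static inline void fastpath_inc(my_atomic_long_t *c)
	{
		c->counter++;
	}

	/* Models atomic_long_inc(): atomic RMW on the same .counter field. */
	static inline void nmisafe_inc(my_atomic_long_t *c)
	{
		__atomic_fetch_add(&c->counter, 1, __ATOMIC_RELAXED);
	}

Both functions update the same location, which is what lets the summation
code read every counter uniformly with atomic_long_read().
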
From patchwork Wed Sep 21 14:46:36 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12983802
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney", Thomas Gleixner, John Ogness, Petr Mladek
Subject: [PATCH RFC rcu 2/4] srcu: Create srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe()
Date: Wed, 21 Sep 2022 07:46:36 -0700
Message-Id: <20220921144638.1201361-2-paulmck@kernel.org>
In-Reply-To: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
References: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

This commit creates a pair of new srcu_read_lock_nmisafe() and
srcu_read_unlock_nmisafe() functions, which allow SRCU readers in NMI
handlers as well as in process and IRQ context.  It is bad practice to
mix the existing and the new _nmisafe() primitives on the same
srcu_struct structure:  Use one set or the other, not both.

[ paulmck: Apply kernel test robot feedback. ]
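
As a usage sketch only (not part of this patch): an NMI handler
protecting a pointer lookup with the new primitives.  The names my_srcu,
my_data, global_ptr, my_nmi_handler(), and do_something_with() are made
up for illustration; srcu_dereference() and NMI_HANDLED are existing
kernel symbols.  On x86 such a handler would typically be registered
with register_nmi_handler().

	#include <linux/srcu.h>

	/* Illustrative names only, not kernel symbols. */
	DEFINE_STATIC_SRCU(my_srcu);
	static struct my_data __rcu *global_ptr;

	static int my_nmi_handler(unsigned int cmd, struct pt_regs *regs)
	{
		struct my_data *p;
		int idx;

		/* NMI-safe read-side critical section. */
		idx = srcu_read_lock_nmisafe(&my_srcu);
		p = srcu_dereference(global_ptr, &my_srcu);
		if (p)
			do_something_with(p);
		srcu_read_unlock_nmisafe(&my_srcu, idx);
		return NMI_HANDLED;
	}
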
McKenney" X-Patchwork-Id: 12983802 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0843AC6FA8E for ; Wed, 21 Sep 2022 14:46:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229876AbiIUOqv (ORCPT ); Wed, 21 Sep 2022 10:46:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34624 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229973AbiIUOqu (ORCPT ); Wed, 21 Sep 2022 10:46:50 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B458997509; Wed, 21 Sep 2022 07:46:43 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 3423DB82022; Wed, 21 Sep 2022 14:46:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C8FD2C433D7; Wed, 21 Sep 2022 14:46:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1663771600; bh=1co+xN/qQaemr8j3WZCMmCO62oK3HgxTXvvOO8SJiVk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=U5d4Lfquhaj0veHZmWzuKSxD5FfEEmKjCLOxdg7o32IezS3e4zWEA4wx0Oqvr9TMw oI1bSXoBScKTwJKbeo1Mq9VLMNWP+d0yzU+9/PMHAH26cWh4VWBhGGNVFRmqsqsZvI KCnE1dGWXSvDn4PPl7g+/9LxEJcXLpb7o/0KXf87z1KfSgRL+E9i1NXPpYP67Ui2ln M8kKEKeh9sOTKCah1W3tbjEkVzOFE0hv109PZzdFbOiycRcudj1XWCBINm6+MdZc1W WWtDk0of/1jXhQMH1h2iBNFlZbLmmGi/xjcsjNSuErpl7lUo4OomI6JtWN5Eb32Xwv VvDezy3AP4+vw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 79A3C5C092A; Wed, 21 Sep 2022 07:46:40 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" , Thomas Gleixner , John Ogness , Petr Mladek Subject: [PATCH RFC rcu 2/4] srcu: Create and srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() Date: Wed, 21 Sep 2022 07:46:36 -0700 Message-Id: <20220921144638.1201361-2-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1> References: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit creates a pair of new srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() functions, which allow SRCU readers in both NMI handlers and in process and IRQ context. It is bad practice to mix the existing and the new _nmisafe() primitives on the same srcu_struct structure. Use one set or the other, not both. [ paulmck: Apply kernel test robot feedback. ] Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/ Signed-off-by: Paul E. 
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 0c4eca07d78d..d45dd507f4a5 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -154,4 +154,7 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+
 #endif
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index d471d22a5e21..f53ad63b2bc6 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -72,6 +72,9 @@ config TREE_SRCU
 	help
 	  This option selects the full-fledged version of SRCU.
 
+config NEED_SRCU_NMI_SAFE
+	def_bool HAVE_NMI && !ARCH_HAS_NMI_SAFE_THIS_CPU_OPS && !TINY_SRCU
+
 config TASKS_RCU_GENERIC
 	def_bool TASKS_RCU || TASKS_RUDE_RCU || TASKS_TRACE_RCU
 	select SRCU
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 9ad5301385a4..684e24f12a79 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -623,10 +623,14 @@ static struct rcu_torture_ops rcu_busted_ops = {
 DEFINE_STATIC_SRCU(srcu_ctl);
 static struct srcu_struct srcu_ctld;
 static struct srcu_struct *srcu_ctlp = &srcu_ctl;
+static struct rcu_torture_ops srcud_ops;
 
 static int srcu_torture_read_lock(void) __acquires(srcu_ctlp)
 {
-	return srcu_read_lock(srcu_ctlp);
+	if (cur_ops == &srcud_ops)
+		return srcu_read_lock_nmisafe(srcu_ctlp);
+	else
+		return srcu_read_lock(srcu_ctlp);
 }
 
 static void
@@ -650,7 +654,10 @@ srcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
 
 static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp)
 {
-	srcu_read_unlock(srcu_ctlp, idx);
+	if (cur_ops == &srcud_ops)
+		srcu_read_unlock_nmisafe(srcu_ctlp, idx);
+	else
+		srcu_read_unlock(srcu_ctlp, idx);
 }
 
 static int torture_srcu_read_lock_held(void)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 6fd0665f4d1f..f5b1485e79aa 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -654,6 +654,37 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
+/*
+ * Counts the new reader in the appropriate per-CPU element of the
+ * srcu_struct, but in an NMI-safe manner using RMW atomics.
+ * Returns an index that must be passed to the matching srcu_read_unlock().
+ */
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+	int idx;
+	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
+
+	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
+	atomic_long_inc(&sdp->srcu_lock_count[idx]);
+	smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
+	return idx;
+}
+EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
+
+/*
+ * Removes the count for the old reader from the appropriate per-CPU
+ * element of the srcu_struct. Note that this may well be a different
+ * CPU than that which was incremented by the corresponding srcu_read_lock().
+ */
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
+
+	smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
+	atomic_long_inc(&sdp->srcu_unlock_count[idx]);
+}
+EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
+
 /*
  * Start an SRCU grace period.
  */
@@ -1090,7 +1121,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	int ss_state;
 
 	check_init_srcu_struct(ssp);
-	idx = srcu_read_lock(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1123,7 +1154,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp_mynode, s);
-	srcu_read_unlock(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 	return s;
 }
 
@@ -1427,13 +1458,13 @@ void srcu_barrier(struct srcu_struct *ssp)
 
 	/* Initial count prevents reaching zero until all CBs are posted. */
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
-	idx = srcu_read_lock(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
 	else
 		for_each_possible_cpu(cpu)
 			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-	srcu_read_unlock(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 
 	/* Remove the initial count, at which point reaching zero can happen. */
 	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
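
The comment above __srcu_read_unlock_nmisafe() notes that the unlock may
run on a different CPU than the matching lock.  That is harmless because
the grace-period logic compares only per-index sums across all CPUs.
A standalone sketch of that bookkeeping in plain C (NR_CPUS, lock_count,
unlock_count, and readers_active_idx() are illustrative stand-ins, not
the kernel's data structures):

	#include <stdio.h>

	#define NR_CPUS 4

	/* Models the per-CPU srcu_lock_count[]/srcu_unlock_count[] pairs. */
	static long lock_count[NR_CPUS][2];
	static long unlock_count[NR_CPUS][2];

	/* Models the sum computed by the grace-period machinery. */
	static long readers_active_idx(int idx)
	{
		long sum = 0;

		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			sum += lock_count[cpu][idx] - unlock_count[cpu][idx];
		return sum;
	}

	int main(void)
	{
		lock_count[0][0]++;   /* Reader enters on CPU 0... */
		unlock_count[3][0]++; /* ...and exits on CPU 3 after migrating. */
		printf("active[0] = %ld\n", readers_active_idx(0)); /* Prints 0. */
		return 0;
	}
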
From patchwork Wed Sep 21 14:46:37 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12983801
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney", Thomas Gleixner, John Ogness, Petr Mladek
Subject: [PATCH RFC rcu 3/4] srcu: Check for consistent per-CPU per-srcu_struct NMI safety
Date: Wed, 21 Sep 2022 07:46:37 -0700
Message-Id: <20220921144638.1201361-3-paulmck@kernel.org>
In-Reply-To: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
References: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

This commit adds runtime checks to verify that a given srcu_struct uses
consistent NMI-safe (or not) read-side primitives on a per-CPU basis.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Signed-off-by: Paul E. McKenney
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 include/linux/srcu.h     |  4 ++--
 include/linux/srcutiny.h |  4 ++--
 include/linux/srcutree.h |  9 +++++++--
 kernel/rcu/srcutree.c    | 38 ++++++++++++++++++++++++++++++------
 4 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index aa9406e88fb8..274d7200ce4e 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -178,7 +178,7 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
 	int retval;
 
 	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		retval = __srcu_read_lock_nmisafe(ssp);
+		retval = __srcu_read_lock_nmisafe(ssp, true);
 	else
 		retval = __srcu_read_lock(ssp);
 	rcu_lock_acquire(&(ssp)->dep_map);
@@ -223,7 +223,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 	WARN_ON_ONCE(idx & ~0x1);
 	rcu_lock_release(&(ssp)->dep_map);
 	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		__srcu_read_unlock_nmisafe(ssp, idx);
+		__srcu_read_unlock_nmisafe(ssp, idx, true);
 	else
 		__srcu_read_unlock(ssp, idx);
 }
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 278331bd7766..f890301f123d 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -90,13 +90,13 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
 		 data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
 
-static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 {
 	BUG();
 	return 0;
 }
 
-static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
 {
 	BUG();
 }
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index d45dd507f4a5..35ffdedf86cc 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -25,6 +25,7 @@ struct srcu_data {
 	/* Read-side state. */
 	atomic_long_t srcu_lock_count[2];	/* Locks per CPU. */
 	atomic_long_t srcu_unlock_count[2];	/* Unlocks per CPU. */
+	int srcu_nmi_safety;			/* NMI-safe srcu_struct structure? */
 
 	/* Update-side state. */
 	spinlock_t __private lock ____cacheline_internodealigned_in_smp;
@@ -42,6 +43,10 @@ struct srcu_data {
 	struct srcu_struct *ssp;
 };
 
+#define SRCU_NMI_UNKNOWN	0x0
+#define SRCU_NMI_NMI_UNSAFE	0x1
+#define SRCU_NMI_NMI_SAFE	0x2
+
 /*
  * Node in SRCU combining tree, similar in function to rcu_data.
  */
@@ -154,7 +159,7 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 
 #endif
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index f5b1485e79aa..09a11f2c2042 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -626,6 +626,26 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
+/*
+ * Check for consistent NMI safety.
+ */
+static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
+{
+	int nmi_safe_mask = 1 << nmi_safe;
+	int old_nmi_safe_mask;
+	struct srcu_data *sdp;
+
+	if (!IS_ENABLED(CONFIG_PROVE_RCU))
+		return;
+	sdp = raw_cpu_ptr(ssp->sda);
+	old_nmi_safe_mask = READ_ONCE(sdp->srcu_nmi_safety);
+	if (!old_nmi_safe_mask) {
+		WRITE_ONCE(sdp->srcu_nmi_safety, nmi_safe_mask);
+		return;
+	}
+	WARN_ONCE(old_nmi_safe_mask != nmi_safe_mask, "CPU %d old state %d new state %d\n", sdp->cpu, old_nmi_safe_mask, nmi_safe_mask);
+}
+
 /*
  * Counts the new reader in the appropriate per-CPU element of the
  * srcu_struct.
@@ -638,6 +658,7 @@ int __srcu_read_lock(struct srcu_struct *ssp)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
+	srcu_check_nmi_safety(ssp, false);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock);
@@ -651,6 +672,7 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
 	this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
+	srcu_check_nmi_safety(ssp, false);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -659,7 +681,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
  * srcu_struct, but in an NMI-safe manner using RMW atomics.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 {
 	int idx;
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
@@ -667,6 +689,8 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	atomic_long_inc(&sdp->srcu_lock_count[idx]);
 	smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
+	if (chknmisafe)
+		srcu_check_nmi_safety(ssp, true);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
@@ -676,12 +700,14 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
 * element of the srcu_struct. Note that this may well be a different
 * CPU than that which was incremented by the corresponding srcu_read_lock().
 */
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
 {
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
 
 	smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
 	atomic_long_inc(&sdp->srcu_unlock_count[idx]);
+	if (chknmisafe)
+		srcu_check_nmi_safety(ssp, true);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
 
@@ -1121,7 +1147,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	int ss_state;
 
 	check_init_srcu_struct(ssp);
-	idx = __srcu_read_lock_nmisafe(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp, false);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1154,7 +1180,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp_mynode, s);
-	__srcu_read_unlock_nmisafe(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx, false);
 	return s;
 }
 
@@ -1458,13 +1484,13 @@ void srcu_barrier(struct srcu_struct *ssp)
 
 	/* Initial count prevents reaching zero until all CBs are posted. */
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
-	idx = __srcu_read_lock_nmisafe(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp, false);
 	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
 	else
 		for_each_possible_cpu(cpu)
 			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-	__srcu_read_unlock_nmisafe(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx, false);
 
 	/* Remove the initial count, at which point reaching zero can happen. */
 	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
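
For reference, the consistency check added above behaves as a tiny
three-state machine per CPU: unknown until first use, then latched to
NMI-safe or NMI-unsafe, warning on any later mismatch.  A plain-C model
of that behavior (check_nmi_safety() here is an illustrative stand-in,
not the kernel function):

	#include <stdio.h>

	#define SRCU_NMI_UNKNOWN	0x0
	#define SRCU_NMI_NMI_UNSAFE	0x1
	#define SRCU_NMI_NMI_SAFE	0x2

	/* Models one CPU's sdp->srcu_nmi_safety field. */
	static int srcu_nmi_safety = SRCU_NMI_UNKNOWN;

	static void check_nmi_safety(int nmi_safe)
	{
		int nmi_safe_mask = 1 << nmi_safe; /* 0 -> 0x1, 1 -> 0x2 */

		if (srcu_nmi_safety == SRCU_NMI_UNKNOWN) {
			srcu_nmi_safety = nmi_safe_mask; /* Latch first use. */
			return;
		}
		if (srcu_nmi_safety != nmi_safe_mask)
			printf("WARN: mixed NMI-safe and NMI-unsafe readers\n");
	}

	int main(void)
	{
		check_nmi_safety(0);	/* First reader: NMI-unsafe, recorded. */
		check_nmi_safety(0);	/* Consistent, stays quiet. */
		check_nmi_safety(1);	/* Inconsistent, warns. */
		return 0;
	}
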
From patchwork Wed Sep 21 14:46:38 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12983799
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney", Thomas Gleixner, John Ogness, Petr Mladek
Subject: [PATCH RFC rcu 4/4] srcu: Check for consistent global per-srcu_struct NMI safety
Date: Wed, 21 Sep 2022 07:46:38 -0700
Message-Id: <20220921144638.1201361-4-paulmck@kernel.org>
In-Reply-To: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
References: <20220921144620.GA1200846@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

This commit adds runtime checks to verify that a given srcu_struct uses
consistent NMI-safe (or not) read-side primitives globally, but based on
the per-CPU data.
These global checks are made by the grace-period code that must scan the
srcu_data structures anyway, and are done only in kernels built with
CONFIG_PROVE_RCU=y.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Signed-off-by: Paul E. McKenney
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 kernel/rcu/srcutree.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 09a11f2c2042..5772b8659c89 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -429,13 +429,18 @@ static unsigned long srcu_readers_lock_idx(struct srcu_struct *ssp, int idx)
 static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx)
 {
 	int cpu;
+	unsigned long mask = 0;
 	unsigned long sum = 0;
 
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
 		sum += atomic_long_read(&cpuc->srcu_unlock_count[idx]);
+		if (IS_ENABLED(CONFIG_PROVE_RCU))
+			mask = mask | READ_ONCE(cpuc->srcu_nmi_safety);
 	}
+	WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && (mask & (mask >> 1)),
+		  "Mixed NMI-safe readers for srcu_struct at %ps.\n", ssp);
 	return sum;
 }
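
The mask test above rewards a moment of arithmetic: each CPU contributes
SRCU_NMI_NMI_UNSAFE (0x1), SRCU_NMI_NMI_SAFE (0x2), or 0x0 (unused), so
the OR across all CPUs equals 0x3 exactly when both reader flavors have
been seen, and 0x3 is the only value for which mask & (mask >> 1) is
nonzero.  A self-contained check in plain C (not kernel code):

	#include <assert.h>

	int main(void)
	{
		unsigned long masks[] = { 0x0, 0x1, 0x2, 0x3 };

		for (int i = 0; i < 4; i++) {
			unsigned long mask = masks[i];
			int mixed = (mask & (mask >> 1)) != 0;

			/* Fires only for mask == 0x3, i.e. mixed use. */
			assert(mixed == (mask == 0x3));
		}
		return 0;
	}
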