Message ID | 20220929180731.2875722-6-paulmck@kernel.org (mailing list archive)
---|---
State | Accepted |
Commit | 94b562fd505c28f5fedcfc6459f57925659295c0 |
Series | NMI-safe SRCU reader API
Hi Paul,

On Thu, Sep 29, 2022 at 11:07:29AM -0700, Paul E. McKenney wrote:
> The arm64 architecture uses either an LL/SC loop (old systems) or an LSE
> stadd instruction (new systems) to implement this_cpu_add(), both of which
> are NMI safe.

IIUC "NMI safe" here just means atomic w.r.t. an NMI being taken and
modifying the same location the atomic was targeting (i.e. just like
ARCH_HAVE_NMI_SAFE_CMPXCHG, which arm64 selects today).

Assuming so:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Only this patch went to LAKML, so maybe an earlier patch made that clear,
but I didn't spot it.

As one minor nit, it would be nice to align the naming with
ARCH_HAVE_NMI_SAFE_CMPXCHG and to select them next to each other in the
Kconfig file if possible, but the Ack stands regardless.

Thanks,
Mark.

> This means that the old and more-efficient srcu_read_lock()
> may be used in NMI context, without the need for srcu_read_lock_nmisafe().
> Therefore, add the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to
> arch/arm64/Kconfig, which will cause NEED_SRCU_NMI_SAFE to be deselected,
> thus preserving the current srcu_read_lock() behavior.
>
> Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
>
> Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Suggested-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: John Ogness <john.ogness@linutronix.de>
> Cc: Petr Mladek <pmladek@suse.com>
> Cc: <linux-arm-kernel@lists.infradead.org>
> ---
>  arch/arm64/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 571cc234d0b3..664725a0b5dd 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -31,6 +31,7 @@ config ARM64
>  	select ARCH_HAS_KCOV
>  	select ARCH_HAS_KEEPINITRD
>  	select ARCH_HAS_MEMBARRIER_SYNC_CORE
> +	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
>  	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
>  	select ARCH_HAS_PTE_DEVMAP
>  	select ARCH_HAS_PTE_SPECIAL
> --
> 2.31.1.189.g2e36527f23
>
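The distinction Mark draws above, atomic with respect to an NMI taken on the same CPU and touching the same location, can be illustrated with a small userspace sketch. This is not kernel code and all names in it are made up: a plain load/add/store increment can lose an update to an NMI that fires between the load and the store, whereas a single atomic read-modify-write, analogous to arm64's LL/SC retry loop or LSE stadd, cannot.

```c
/* Illustrative userspace sketch, not kernel code: all names are made up. */
#include <stdatomic.h>
#include <stdio.h>

static unsigned long counter_plain;            /* stand-in for a per-CPU counter */
static _Atomic unsigned long counter_atomic;   /* stand-in for the same counter  */

/*
 * Not NMI safe: the load and the store can be separated by an NMI on the
 * same CPU.  If that NMI also increments the counter, its update is
 * overwritten when the interrupted store finally executes.
 */
static void naive_inc(void)
{
	unsigned long tmp = counter_plain;     /* load  */
	/* <-- an NMI incrementing counter_plain here gets lost */
	counter_plain = tmp + 1;               /* store */
}

/*
 * NMI safe in the sense above: one atomic read-modify-write, analogous to
 * arm64's LL/SC loop or LSE stadd, so a concurrent NMI-side update on the
 * same CPU cannot be lost.
 */
static void atomic_inc_rmw(void)
{
	atomic_fetch_add_explicit(&counter_atomic, 1, memory_order_relaxed);
}

int main(void)
{
	naive_inc();
	atomic_inc_rmw();
	printf("plain=%lu atomic=%lu\n", counter_plain,
	       (unsigned long)counter_atomic);
	return 0;
}
```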
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 571cc234d0b3..664725a0b5dd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -31,6 +31,7 @@ config ARM64
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_KEEPINITRD
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
+	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
The arm64 architecture uses either an LL/SC loop (old systems) or an LSE
stadd instruction (new systems) to implement this_cpu_add(), both of which
are NMI safe.  This means that the old and more-efficient srcu_read_lock()
may be used in NMI context, without the need for srcu_read_lock_nmisafe().
Therefore, add the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to
arch/arm64/Kconfig, which will cause NEED_SRCU_NMI_SAFE to be deselected,
thus preserving the current srcu_read_lock() behavior.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
Cc: <linux-arm-kernel@lists.infradead.org>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)
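A minimal reader-side sketch of what the commit message describes, with hypothetical names (my_srcu, my_nmi_reader) rather than code from the series: with ARCH_HAS_NMI_SAFE_THIS_CPU_OPS selected, NEED_SRCU_NMI_SAFE stays off on arm64, so a reader that may run in NMI context can keep using the plain srcu_read_lock()/srcu_read_unlock() pair, whose fast path bumps a per-CPU counter with this_cpu ops. On an architecture whose this_cpu ops are not NMI safe, the same reader would instead need the srcu_read_lock_nmisafe() variant added earlier in this series.

```c
#include <linux/srcu.h>

DEFINE_SRCU(my_srcu);	/* hypothetical SRCU domain, not from the patch */

/* A reader that may be entered from NMI context on arm64. */
static void my_nmi_reader(void)
{
	int idx;

	/*
	 * srcu_read_lock() bumps a per-CPU counter with this_cpu ops;
	 * because this patch marks arm64's this_cpu ops NMI safe, no
	 * switch to srcu_read_lock_nmisafe() is required here.
	 */
	idx = srcu_read_lock(&my_srcu);
	/* ... access data protected by my_srcu ... */
	srcu_read_unlock(&my_srcu, idx);
}
```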