From patchwork Thu Sep 12 22:44:08 2024
X-Patchwork-Submitter: Christoph Lameter via B4 Relay
X-Patchwork-Id: 13802752
From: Christoph Lameter via B4 Relay
Date: Thu, 12 Sep 2024 15:44:08 -0700
Subject: [PATCH v3] Avoid memory barrier in read_seqcount() through load acquire
Message-Id: <20240912-seq_optimize-v3-1-8ee25e04dffa@gentwo.org>
To: Thomas Gleixner, Catalin Marinas, Will Deacon, Peter Zijlstra,
	Ingo Molnar, Waiman Long, Boqun Feng
Cc: Linus Torvalds, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
	"Christoph Lameter (Ampere)"
Reply-To: cl@gentwo.org
X-Original-From: "Christoph Lameter (Ampere)"

From: "Christoph Lameter (Ampere)"

Some architectures support load acquire which can save us a memory
barrier and save some cycles.

A typical sequence

	do {
		seq = read_seqcount_begin(&s);
	} while (read_seqcount_retry(&s, seq));

requires 13 cycles on ARM64 for an empty loop. Two read memory barriers
are needed, one for each of the seqcount_* functions.

We can replace the first read barrier with a load acquire of the
seqcount, which saves us one barrier.

On ARM64 doing so reduces the cycle count from 13 to 8. This is a
general improvement for the ARM64 architecture and not specific to a
certain processor. The cycle count here was obtained on a Neoverse N1
(Ampere Altra).

We can further optimize handling by using the cond_load_acquire logic,
which gives an ARM CPU a chance to enter a power saving mode while
waiting for changes to a cacheline, thereby avoiding busy loops and
saving power.

The ARM documentation states that load acquire is more effective than a
load plus barrier. In general that tends to be true on all compute
platforms that support both.

See (as quoted by Linus Torvalds):

https://developer.arm.com/documentation/102336/0100/Load-Acquire-and-Store-Release-instructions

  "Weaker ordering requirements that are imposed by Load-Acquire and
   Store-Release instructions allow for micro-architectural
   optimizations, which could reduce some of the performance impacts
   that are otherwise imposed by an explicit memory barrier.

   If the ordering requirement is satisfied using either a Load-Acquire
   or Store-Release, then it would be preferable to use these
   instructions instead of a DMB"

The patch benefited significantly from Thomas Gleixner's knowledge of
the innards of the seqlock code.

Signed-off-by: Christoph Lameter (Ampere)
Signed-off-by: Christoph Lameter
---
V1->V2:
- Describe the benefit of load acquire vs barriers
- Explain the CONFIG_ARCH_HAS_ACQUIRE_RELEASE option better

---
Changes in v3:
- Support cond_load_acquire to give the processor a chance to do some
  sort of power down until the cacheline changes.
- Better code by Thomas Gleixner
- Link to v2: https://lore.kernel.org/r/20240819-seq_optimize-v2-1-9d0da82b022f@gentwo.org
---
 arch/Kconfig            |  8 +++++
 arch/arm64/Kconfig      |  1 +
 include/linux/seqlock.h | 85 ++++++++++++++++++++++++++++++++++++-------------
 3 files changed, 71 insertions(+), 23 deletions(-)
---
base-commit: 77f587896757708780a7e8792efe62939f25a5ab
change-id: 20240813-seq_optimize-68c48696c798

Best regards,

diff --git a/arch/Kconfig b/arch/Kconfig
index 975dd22a2dbd..3c270f496231 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1600,6 +1600,14 @@ config ARCH_HAS_KERNEL_FPU_SUPPORT
 	  Architectures that select this option can run floating-point code in
 	  the kernel, as described in Documentation/core-api/floating-point.rst.
 
+config ARCH_HAS_ACQUIRE_RELEASE
+	bool
+	help
+	  Setting ARCH_HAS_ACQUIRE_RELEASE indicates that the architecture
+	  supports load acquire and release. Typically these are more effective
+	  than memory barriers. Code will prefer the use of load acquire and
+	  store release over memory barriers if this option is enabled.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a2f8ff354ca6..19e34fff145f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -39,6 +39,7 @@ config ARM64
 	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_HW_PTE_YOUNG
+	select ARCH_HAS_ACQUIRE_RELEASE
 	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index d90d8ee29d81..a3fe9ee8edef 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -23,6 +23,13 @@
 
 #include 
 
+#ifdef CONFIG_ARCH_HAS_ACQUIRE_RELEASE
+# define USE_LOAD_ACQUIRE true
+# define USE_COND_LOAD_ACQUIRE !IS_ENABLED(CONFIG_PREEMPT_RT)
+#else
+# define USE_LOAD_ACQUIRE false
+# define USE_COND_LOAD_ACQUIRE false
+#endif
 /*
  * The seqlock seqcount_t interface does not prescribe a precise sequence of
  * read begin/retry/end. For readers, typically there is a call to
@@ -132,6 +139,17 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
 #define seqcount_rwlock_init(s, lock)	seqcount_LOCKNAME_init(s, lock, rwlock)
 #define seqcount_mutex_init(s, lock)	seqcount_LOCKNAME_init(s, lock, mutex)
 
+static __always_inline unsigned __seqprop_load_sequence(const seqcount_t *s, bool acquire)
+{
+	if (!acquire || !USE_LOAD_ACQUIRE)
+		return READ_ONCE(s->sequence);
+
+	if (USE_COND_LOAD_ACQUIRE)
+		return smp_cond_load_acquire((unsigned int *)&s->sequence, (s->sequence & 1) == 0);
+
+	return smp_load_acquire(&s->sequence);
+}
+
 /*
  * SEQCOUNT_LOCKNAME()	- Instantiate seqcount_LOCKNAME_t and helpers
  * seqprop_LOCKNAME_*()	- Property accessors for seqcount_LOCKNAME_t
@@ -155,9 +173,10 @@ __seqprop_##lockname##_const_ptr(const seqcount_##lockname##_t *s)	\
 }									\
 									\
 static __always_inline unsigned						\
-__seqprop_##lockname##_sequence(const seqcount_##lockname##_t *s)	\
+__seqprop_##lockname##_sequence(const seqcount_##lockname##_t *s,	\
+				bool acquire)				\
 {									\
-	unsigned seq = READ_ONCE(s->seqcount.sequence);			\
+	unsigned seq = __seqprop_load_sequence(&s->seqcount, acquire);	\
 									\
 	if (!IS_ENABLED(CONFIG_PREEMPT_RT))				\
 		return seq;						\
@@ -170,7 +189,7 @@ __seqprop_##lockname##_sequence(const seqcount_##lockname##_t *s)	\
 	 * Re-read the sequence counter since the (possibly		\
 	 * preempted) writer made progress.				\
 	 */								\
-	seq = READ_ONCE(s->seqcount.sequence);				\
+	seq = __seqprop_load_sequence(&s->seqcount, acquire);		\
 	}								\
 									\
 	return seq;							\
@@ -206,9 +225,9 @@ static inline const seqcount_t *__seqprop_const_ptr(const seqcount_t *s)
 	return s;
 }
 
-static inline unsigned __seqprop_sequence(const seqcount_t *s)
+static inline unsigned __seqprop_sequence(const seqcount_t *s, bool acquire)
 {
-	return READ_ONCE(s->sequence);
+	return __seqprop_load_sequence(s, acquire);
 }
 
 static inline bool __seqprop_preemptible(const seqcount_t *s)
@@ -258,35 +277,53 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
 
 #define seqprop_ptr(s)			__seqprop(s, ptr)(s)
 #define seqprop_const_ptr(s)		__seqprop(s, const_ptr)(s)
-#define seqprop_sequence(s)		__seqprop(s, sequence)(s)
+#define seqprop_sequence(s, a)		__seqprop(s, sequence)(s, a)
 #define seqprop_preemptible(s)		__seqprop(s, preemptible)(s)
 #define seqprop_assert(s)		__seqprop(s, assert)(s)
 
 /**
- * __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
- * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
- *
- * __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb()
- * barrier. Callers should ensure that smp_rmb() or equivalent ordering is
- * provided before actually loading any of the variables that are to be
- * protected in this critical section.
- *
- * Use carefully, only in critical code, and comment how the barrier is
- * provided.
+ * read_seqcount_begin_cond_acquire() - begin a seqcount_t read section
+ * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ * @acquire: If true, the read of the sequence count uses smp_load_acquire()
+ *           if the architecture provides it and it is enabled.
 *
 * Return: count to be passed to read_seqcount_retry()
 */
-#define __read_seqcount_begin(s)					\
+#define read_seqcount_begin_cond_acquire(s, acquire)			\
 ({									\
 	unsigned __seq;							\
 									\
-	while ((__seq = seqprop_sequence(s)) & 1)			\
-		cpu_relax();						\
+	if (acquire && USE_COND_LOAD_ACQUIRE) {				\
+		__seq = seqprop_sequence(s, acquire);			\
+	} else {							\
+		while ((__seq = seqprop_sequence(s, acquire)) & 1)	\
+			cpu_relax();					\
+	}								\
 									\
 	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);			\
 	__seq;								\
 })
 
+/**
+ * __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
+ * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * __read_seqcount_begin is like read_seqcount_begin, but it neither
+ * provides a smp_rmb() barrier nor does it use smp_load_acquire() on
+ * architectures which provide it.
+ *
+ * Callers should ensure that smp_rmb() or equivalent ordering is provided
+ * before actually loading any of the variables that are to be protected in
+ * this critical section.
+ *
+ * Use carefully, only in critical code, and comment how the barrier is
+ * provided.
+ *
+ * Return: count to be passed to read_seqcount_retry()
+ */
+#define __read_seqcount_begin(s)					\
+	read_seqcount_begin_cond_acquire(s, false)
+
 /**
  * raw_read_seqcount_begin() - begin a seqcount_t read section w/o lockdep
  * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
@@ -295,9 +332,10 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  */
 #define raw_read_seqcount_begin(s)					\
 ({									\
-	unsigned _seq = __read_seqcount_begin(s);			\
-									\
-	smp_rmb();							\
+	unsigned _seq = read_seqcount_begin_cond_acquire(s, true);	\
+									\
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_ACQUIRE_RELEASE))		\
+		smp_rmb();						\
 	_seq;								\
 })
 
@@ -326,9 +364,10 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  */
 #define raw_read_seqcount(s)						\
 ({									\
-	unsigned __seq = seqprop_sequence(s);				\
-									\
-	smp_rmb();							\
+	unsigned __seq = seqprop_sequence(s, true);			\
+									\
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_ACQUIRE_RELEASE))		\
+		smp_rmb();						\
 	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);			\
 	__seq;								\
 })
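
For readers following the ordering argument above, here is a minimal,
illustrative userspace sketch (not kernel code; the struct and function
names are invented for this example) of the two reader-side "begin"
orderings the patch trades off, written with C11 atomics: a plain load
followed by an acquire fence, versus a single load-acquire. On arm64 the
former typically compiles to an LDR plus a DMB ISHLD barrier, the latter
to a single LDAR, which is where the cycle savings come from.

	/*
	 * Illustrative only: models READ_ONCE() + smp_rmb() versus
	 * smp_load_acquire() using C11 atomics in userspace.  The names
	 * (seq_demo, begin_with_fence, begin_with_acquire) are made up
	 * for this sketch and are not part of the patch above.
	 */
	#include <stdatomic.h>

	struct seq_demo {
		atomic_uint sequence;	/* even = stable, odd = write in progress */
		unsigned int payload;	/* data nominally protected by the count */
	};

	/* Reader begin, modelled on READ_ONCE() followed by smp_rmb() */
	static unsigned int begin_with_fence(struct seq_demo *s)
	{
		unsigned int seq = atomic_load_explicit(&s->sequence,
							memory_order_relaxed);

		/* standalone barrier instruction (DMB ISHLD on arm64) */
		atomic_thread_fence(memory_order_acquire);
		return seq;
	}

	/* Reader begin, modelled on smp_load_acquire(): one acquiring load */
	static unsigned int begin_with_acquire(struct seq_demo *s)
	{
		return atomic_load_explicit(&s->sequence, memory_order_acquire);
	}

Either variant orders the subsequent reads of the protected data after
the read of the sequence count; the acquire form simply achieves that
without a separate barrier instruction, which is the effect described in
the ARM documentation quoted in the commit message.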