From patchwork Mon Aug 8 07:13:04 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938513
From: guoren@kernel.org
To: palmer@rivosinc.com, heiko@sntech.de, hch@infradead.org, arnd@arndb.de,
    peterz@infradead.org, will@kernel.org, boqun.feng@gmail.com,
    longman@redhat.com, shorne@gmail.com, conor.dooley@microchip.com
Cc: linux-csky@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
    Guo Ren, Guo Ren
Subject: [PATCH V9 01/15] asm-generic: ticket-lock: Remove unnecessary atomic_read
Date: Mon, 8 Aug 2022 03:13:04 -0400
Message-Id: <20220808071318.3335746-2-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>
References: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Remove the unnecessary atomic_read() in arch_spin_value_unlocked(lock):
the value has already been loaded into `lock`, so the function can test
that snapshot directly. This prevents arch_spin_value_unlocked() from
contending on the spin_lock cacheline again.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 include/asm-generic/spinlock.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index fdfebcb050f4..90803a826ba0 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -68,11 +68,18 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 	smp_store_release(ptr, (u16)val + 1);
 }
 
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	u32 val = lock.counter;
+
+	return ((val >> 16) == (val & 0xffff));
+}
+
 static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	u32 val = atomic_read(lock);
+	arch_spinlock_t val = READ_ONCE(*lock);
 
-	return ((val >> 16) != (val & 0xffff));
+	return !arch_spin_value_unlocked(val);
 }
 
 static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
@@ -82,11 +89,6 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
 	return (s16)((val >> 16) - (val & 0xffff)) > 1;
 }
 
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	return !arch_spin_is_locked(&lock);
-}
-
 #include <asm/qrwlock.h>
 
 #endif /* __ASM_GENERIC_SPINLOCK_H */
From patchwork Mon Aug 8 07:13:05 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938514
From: guoren@kernel.org
Subject: [PATCH V9 02/15] asm-generic: ticket-lock: Use the same struct definitions with qspinlock
Date: Mon, 8 Aug 2022 03:13:05 -0400
Message-Id: <20220808071318.3335746-3-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Let the ticket lock use the same struct definitions as qspinlock, so
that we can later move to a combo spinlock (combining ticket & queued
locks).
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 include/asm-generic/spinlock.h       | 14 +++++++-------
 include/asm-generic/spinlock_types.h | 12 ++----------
 2 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 90803a826ba0..4773334ee638 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -32,7 +32,7 @@
 static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 {
-	u32 val = atomic_fetch_add(1<<16, lock);
+	u32 val = atomic_fetch_add(1<<16, &lock->val);
 	u16 ticket = val >> 16;
 
 	if (ticket == (u16)val)
@@ -46,31 +46,31 @@ static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 	 * have no outstanding writes due to the atomic_fetch_add() the extra
 	 * orderings are free.
 	 */
-	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
+	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
 	smp_mb();
 }
 
 static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
 {
-	u32 old = atomic_read(lock);
+	u32 old = atomic_read(&lock->val);
 
 	if ((old >> 16) != (old & 0xffff))
 		return false;
 
-	return atomic_try_cmpxchg(lock, &old, old + (1<<16)); /* SC, for RCsc */
+	return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
 }
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
-	u32 val = atomic_read(lock);
+	u32 val = atomic_read(&lock->val);
 
 	smp_store_release(ptr, (u16)val + 1);
 }
 
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
-	u32 val = lock.counter;
+	u32 val = lock.val.counter;
 
 	return ((val >> 16) == (val & 0xffff));
 }
@@ -84,7 +84,7 @@ static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
 
 static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
-	u32 val = atomic_read(lock);
+	u32 val = atomic_read(&lock->val);
 
 	return (s16)((val >> 16) - (val & 0xffff)) > 1;
 }
diff --git a/include/asm-generic/spinlock_types.h b/include/asm-generic/spinlock_types.h
index 8962bb730945..f534aa5de394 100644
--- a/include/asm-generic/spinlock_types.h
+++ b/include/asm-generic/spinlock_types.h
@@ -3,15 +3,7 @@
 #ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
 #define __ASM_GENERIC_SPINLOCK_TYPES_H
 
-#include <linux/types.h>
-typedef atomic_t arch_spinlock_t;
-
-/*
- * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
- * include.
- */
-#include <asm/qrwlock_types.h>
-
-#define __ARCH_SPIN_LOCK_UNLOCKED	ATOMIC_INIT(0)
+#include <asm-generic/qspinlock_types.h>
+#include <asm-generic/qrwlock_types.h>
 
 #endif /* __ASM_GENERIC_SPINLOCK_TYPES_H */
From patchwork Mon Aug 8 07:13:06 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938515
From: guoren@kernel.org
Subject: [PATCH V9 03/15] asm-generic: ticket-lock: Move into ticket_spinlock.h
Date: Mon, 8 Aug 2022 03:13:06 -0400
Message-Id: <20220808071318.3335746-4-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Move the ticket-lock definitions into an independent file. This is a
preparation patch for merging qspinlock into the asm-generic spinlock.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 include/asm-generic/spinlock.h        |  87 +---------------------
 include/asm-generic/ticket_spinlock.h | 103 ++++++++++++++++++++++++++
 2 files changed, 104 insertions(+), 86 deletions(-)
 create mode 100644 include/asm-generic/ticket_spinlock.h

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 4773334ee638..970590baf61b 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -1,94 +1,9 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/*
- * 'Generic' ticket-lock implementation.
- *
- * It relies on atomic_fetch_add() having well defined forward progress
- * guarantees under contention. If your architecture cannot provide this, stick
- * to a test-and-set lock.
- *
- * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
- * sub-word of the value. This is generally true for anything LL/SC although
- * you'd be hard pressed to find anything useful in architecture specifications
- * about this. If your architecture cannot do this you might be better off with
- * a test-and-set.
- *
- * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
- * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
- * a full fence after the spin to upgrade the otherwise-RCpc
- * atomic_cond_read_acquire().
- *
- * The implementation uses smp_cond_load_acquire() to spin, so if the
- * architecture has WFE like instructions to sleep instead of poll for word
- * modifications be sure to implement that (see ARM64 for example).
- *
- */
-
 #ifndef __ASM_GENERIC_SPINLOCK_H
 #define __ASM_GENERIC_SPINLOCK_H
 
-#include <linux/atomic.h>
-#include <asm-generic/spinlock_types.h>
-
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	u32 val = atomic_fetch_add(1<<16, &lock->val);
-	u16 ticket = val >> 16;
-
-	if (ticket == (u16)val)
-		return;
-
-	/*
-	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
-	 * custom cond_read_rcsc() here we just emit a full fence. We only
-	 * need the prior reads before subsequent writes ordering from
-	 * smb_mb(), but as atomic_cond_read_acquire() just emits reads and we
-	 * have no outstanding writes due to the atomic_fetch_add() the extra
-	 * orderings are free.
-	 */
-	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
-	smp_mb();
-}
-
-static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
-{
-	u32 old = atomic_read(&lock->val);
-
-	if ((old >> 16) != (old & 0xffff))
-		return false;
-
-	return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
-	u32 val = atomic_read(&lock->val);
-
-	smp_store_release(ptr, (u16)val + 1);
-}
-
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	u32 val = lock.val.counter;
-
-	return ((val >> 16) == (val & 0xffff));
-}
-
-static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	arch_spinlock_t val = READ_ONCE(*lock);
-
-	return !arch_spin_value_unlocked(val);
-}
-
-static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	u32 val = atomic_read(&lock->val);
-
-	return (s16)((val >> 16) - (val & 0xffff)) > 1;
-}
-
+#include <asm-generic/ticket_spinlock.h>
 #include <asm/qrwlock.h>
 
 #endif /* __ASM_GENERIC_SPINLOCK_H */
diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
new file mode 100644
index 000000000000..cfcff22b37b3
--- /dev/null
+++ b/include/asm-generic/ticket_spinlock.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * 'Generic' ticket-lock implementation.
+ *
+ * It relies on atomic_fetch_add() having well defined forward progress
+ * guarantees under contention. If your architecture cannot provide this, stick
+ * to a test-and-set lock.
+ *
+ * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
+ * sub-word of the value. This is generally true for anything LL/SC although
+ * you'd be hard pressed to find anything useful in architecture specifications
+ * about this. If your architecture cannot do this you might be better off with
+ * a test-and-set.
+ *
+ * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
+ * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
+ * a full fence after the spin to upgrade the otherwise-RCpc
+ * atomic_cond_read_acquire().
+ *
+ * The implementation uses smp_cond_load_acquire() to spin, so if the
+ * architecture has WFE like instructions to sleep instead of poll for word
+ * modifications be sure to implement that (see ARM64 for example).
+ *
+ */
+
+#ifndef __ASM_GENERIC_TICKET_SPINLOCK_H
+#define __ASM_GENERIC_TICKET_SPINLOCK_H
+
+#include <linux/atomic.h>
+#include <asm-generic/spinlock_types.h>
+
+static __always_inline void ticket_spin_lock(arch_spinlock_t *lock)
+{
+	u32 val = atomic_fetch_add(1<<16, &lock->val);
+	u16 ticket = val >> 16;
+
+	if (ticket == (u16)val)
+		return;
+
+	/*
+	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
+	 * custom cond_read_rcsc() here we just emit a full fence. We only
+	 * need the prior reads before subsequent writes ordering from
+	 * smb_mb(), but as atomic_cond_read_acquire() just emits reads and we
+	 * have no outstanding writes due to the atomic_fetch_add() the extra
+	 * orderings are free.
+	 */
+	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
+	smp_mb();
+}
+
+static __always_inline bool ticket_spin_trylock(arch_spinlock_t *lock)
+{
+	u32 old = atomic_read(&lock->val);
+
+	if ((old >> 16) != (old & 0xffff))
+		return false;
+
+	return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
+}
+
+static __always_inline void ticket_spin_unlock(arch_spinlock_t *lock)
+{
+	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
+	u32 val = atomic_read(&lock->val);
+
+	smp_store_release(ptr, (u16)val + 1);
+}
+
+static __always_inline int ticket_spin_value_unlocked(arch_spinlock_t lock)
+{
+	u32 val = lock.val.counter;
+
+	return ((val >> 16) == (val & 0xffff));
+}
+
+static __always_inline int ticket_spin_is_locked(arch_spinlock_t *lock)
+{
+	arch_spinlock_t val = READ_ONCE(*lock);
+
+	return !ticket_spin_value_unlocked(val);
+}
+
+static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
+{
+	u32 val = atomic_read(&lock->val);
+
+	return (s16)((val >> 16) - (val & 0xffff)) > 1;
+}
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * ticket spinlock functions.
+ */
+#define arch_spin_is_locked(l)		ticket_spin_is_locked(l)
+#define arch_spin_is_contended(l)	ticket_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	ticket_spin_value_unlocked(l)
+#define arch_spin_lock(l)		ticket_spin_lock(l)
+#define arch_spin_trylock(l)		ticket_spin_trylock(l)
+#define arch_spin_unlock(l)		ticket_spin_unlock(l)
+
+#endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
From patchwork Mon Aug 8 07:13:07 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938516
From: guoren@kernel.org
Subject: [PATCH V9 04/15] asm-generic: ticket-lock: Keep ticket-lock the same semantic with qspinlock
Date: Mon, 8 Aug 2022 03:13:07 -0400
Message-Id: <20220808071318.3335746-5-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Define smp_mb__after_spinlock as smp_mb() by default, giving an RCsc
synchronization point on all architectures. This keeps the same
semantics as qspinlock, whose lock operation is only an acquire (RCpc)
synchronization point; for more detail, see include/linux/spinlock.h.
Some architectures could give more robust semantics than smp_mb(),
e.g. riscv, and some architectures don't need smp_mb__after_spinlock
at all because their spinlocks already contain an RCsc synchronization
point.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
Cc: Peter Zijlstra
---
 include/asm-generic/spinlock.h        |  5 +++++
 include/asm-generic/ticket_spinlock.h | 18 ++++--------------
 2 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 970590baf61b..6f5a1b838ca2 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -6,4 +6,9 @@
 #include <asm-generic/ticket_spinlock.h>
 #include <asm/qrwlock.h>
 
+/* See include/linux/spinlock.h */
+#ifndef smp_mb__after_spinlock
+#define smp_mb__after_spinlock()	smp_mb()
+#endif
+
 #endif /* __ASM_GENERIC_SPINLOCK_H */
diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
index cfcff22b37b3..d8e6ec82f096 100644
--- a/include/asm-generic/ticket_spinlock.h
+++ b/include/asm-generic/ticket_spinlock.h
@@ -14,9 +14,8 @@
  * a test-and-set.
  *
  * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
- * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
- * a full fence after the spin to upgrade the otherwise-RCpc
- * atomic_cond_read_acquire().
+ * uses smp_mb__after_spinlock which is RCsc to create an RCsc hot path, See
+ * include/linux/spinlock.h
  *
  * The implementation uses smp_cond_load_acquire() to spin, so if the
  * architecture has WFE like instructions to sleep instead of poll for word
@@ -32,22 +31,13 @@
 static __always_inline void ticket_spin_lock(arch_spinlock_t *lock)
 {
-	u32 val = atomic_fetch_add(1<<16, &lock->val);
+	u32 val = atomic_fetch_add_acquire(1<<16, &lock->val);
 	u16 ticket = val >> 16;
 
 	if (ticket == (u16)val)
 		return;
 
-	/*
-	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
-	 * custom cond_read_rcsc() here we just emit a full fence. We only
-	 * need the prior reads before subsequent writes ordering from
-	 * smb_mb(), but as atomic_cond_read_acquire() just emits reads and we
-	 * have no outstanding writes due to the atomic_fetch_add() the extra
-	 * orderings are free.
-	 */
 	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
-	smp_mb();
 }
 
@@ -57,7 +47,7 @@ static __always_inline bool ticket_spin_trylock(arch_spinlock_t *lock)
 	if ((old >> 16) != (old & 0xffff))
 		return false;
 
-	return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
+	return atomic_try_cmpxchg_acquire(&lock->val, &old, old + (1<<16));
 }
 
 static __always_inline void ticket_spin_unlock(arch_spinlock_t *lock)
a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=82Z3TnUhXL0ukgntI1o5miFIQiddYja3owAOZehczt4=; b=KfHqxbBidzuSR8 jEGey85xztuRhg1J5XmoJ9sxai6fKRyUJrBEghCIXL/1MTsGaDcwgqXv5S86UCTBzLE4T9An7Wjlv UNAlLdg4lO2YXiaceJkwt4KmwT8Gk6zV2JPeB3ayS2umz1haRuRCsk/4Wx15YnqNWtziSOc5pcBDr 8c1ygcPONT0kHNGYdmyAwHJUpTswUim4BoQ17T3IFGs8PzTBhsNT29VshtAo2dqHjZjM8zBj2NF2C 6QbBjOMqxmRfa3TIbZzlIw/76ZEOPyU1mtBAWrGS23JSzMQeWuwSOlbJIIl+LuV4p+RzVC8xsykRP TjQZ79QeeOcb+MZ5fZtw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oKwxo-00C1sf-HR; Mon, 08 Aug 2022 07:14:12 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oKwxl-00C1oQ-BN for linux-riscv@lists.infradead.org; Mon, 08 Aug 2022 07:14:10 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 213ACB80E13; Mon, 8 Aug 2022 07:14:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C1E9AC43144; Mon, 8 Aug 2022 07:14:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1659942846; bh=WoY94/J8cfzum9irmyRxRUySe4AhzCRGKz56Za4j0Us=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Dmp61UFU8SA/lRXSMwulcK8hsmZHXKxWNoltZU6J1+C0Tl1VlTX6NbEQpqhjmwwC0 aC2JsVf6ET8hY6KdAULsOP5se+CPera0Vk+WakOUOMR2TfRuAJrA/5K2ZuXBQFJXhN b1vZCmK7gyPtayyCTr39QKIDO4xmLbUcZf2eYISecYjR0isYe5RDarzFsF3t8RE3rC 
From: guoren@kernel.org
Subject: [PATCH V9 05/15] asm-generic: spinlock: Add queued spinlock support in common header
Date: Mon, 8 Aug 2022 03:13:08 -0400
Message-Id: <20220808071318.3335746-6-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Select the queued spinlock or the ticket lock via CONFIG_QUEUED_SPINLOCKS
in the common header file. Define smp_mb__after_spinlock with smp_mb() as
the default.
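As background for this ticket-vs-queued selection, the acquire-based ticket lock shown in the first diff above can be modeled in userspace. The sketch below is our own C11 rendition, not the kernel code; the `tlock_*` names and the fetch_add-based unlock are ours (the kernel instead does a 16-bit store-release of owner + 1 to avoid carry into the ticket half):

```c
/* Userspace model of the acquire-based generic ticket lock:
 * next ticket in the high 16 bits, current owner in the low 16 bits
 * of a single 32-bit word. Illustrative only, not kernel code. */
#include <stdatomic.h>
#include <stdint.h>

typedef struct { _Atomic uint32_t val; } tlock_t;

static void tlock_lock(tlock_t *l)
{
	/* Take a ticket; acquire ordering replaces the old full fence. */
	uint32_t v = atomic_fetch_add_explicit(&l->val, 1u << 16,
					       memory_order_acquire);
	uint16_t ticket = (uint16_t)(v >> 16);

	if (ticket == (uint16_t)v)
		return;	/* uncontended: our ticket is already the owner */

	/* Spin until owner == our ticket (smp_cond_load_acquire analogue). */
	while ((uint16_t)atomic_load_explicit(&l->val,
					      memory_order_acquire) != ticket)
		;
}

static int tlock_trylock(tlock_t *l)
{
	uint32_t old = atomic_load_explicit(&l->val, memory_order_relaxed);

	if ((old >> 16) != (old & 0xffff))
		return 0;	/* held: next ticket != owner */

	return atomic_compare_exchange_strong_explicit(&l->val, &old,
			old + (1u << 16),
			memory_order_acquire, memory_order_relaxed);
}

static void tlock_unlock(tlock_t *l)
{
	/* Simplification: bump the low (owner) half with an add; valid
	 * here only while total unlocks stay below 65536. */
	atomic_fetch_add_explicit(&l->val, 1, memory_order_release);
}
```

Single-threaded, the word evolves as expected: locking moves `next` ahead of `owner`, unlocking lets `owner` catch up.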
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 include/asm-generic/spinlock.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 6f5a1b838ca2..349cdb46a99c 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -3,7 +3,11 @@
 #ifndef __ASM_GENERIC_SPINLOCK_H
 #define __ASM_GENERIC_SPINLOCK_H

+#ifdef CONFIG_QUEUED_SPINLOCKS
+#include
+#else
 #include
+#endif
 #include

 /* See include/linux/spinlock.h */

From patchwork Mon Aug 8 07:13:09 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938518
From: guoren@kernel.org
Subject: [PATCH V9 06/15] riscv: atomic: Clean up unnecessary acquire and release definitions
Date: Mon, 8 Aug 2022 03:13:09 -0400
Message-Id: <20220808071318.3335746-7-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>
From: Guo Ren

Clean up the unnecessary xchg_acquire, xchg_release, and cmpxchg_release
custom definitions, because the generic implementations generate the same
code as the riscv custom implementations.

Before the patch:

 000000000000024e <.LBB238>:
 	ops = xchg_acquire(pending_ipis, 0);
 	24e: 089937af	amoswap.d	a5,s1,(s2)
 	252: 0230000f	fence	r,rw

 0000000000000256 <.LBB243>:
 	ops = xchg_release(pending_ipis, 0);
 	256: 0310000f	fence	rw,w
 	25a: 089934af	amoswap.d	s1,s1,(s2)

After the patch:

 000000000000026e <.LBB245>:
 	ops = xchg_acquire(pending_ipis, 0);
 	26e: 089937af	amoswap.d	a5,s1,(s2)

 0000000000000272 <.LBE247>:
 	272: 0230000f	fence	r,rw

 0000000000000276 <.LBB249>:
 	ops = xchg_release(pending_ipis, 0);
 	276: 0310000f	fence	rw,w

 000000000000027a <.LBB251>:
 	27a: 089934af	amoswap.d	s1,s1,(s2)

Only cmpxchg_acquire is necessary (it avoids unnecessary acquire ordering
when the value loaded by lr differs from the old value).
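The disassembly above shows the construction both variants share: a relaxed AMO plus a one-sided fence (fence r,rw after for acquire, fence rw,w before for release). A hedged userspace illustration of that pattern using C11 atomics; the `*_model` functions are ours, not the kernel macros:

```c
/* Model of building acquire/release xchg from a relaxed exchange plus a
 * one-sided fence, mirroring the codegen shown in the commit message. */
#include <stdatomic.h>

static long xchg_acquire_model(_Atomic long *p, long v)
{
	long old = atomic_exchange_explicit(p, v, memory_order_relaxed);

	atomic_thread_fence(memory_order_acquire);	/* fence r,rw */
	return old;
}

static long xchg_release_model(_Atomic long *p, long v)
{
	atomic_thread_fence(memory_order_release);	/* fence rw,w */
	return atomic_exchange_explicit(p, v, memory_order_relaxed);
}
```

Since the generic atomics expand to exactly this shape on RISC-V, the custom definitions removed by this patch were redundant.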
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/include/asm/atomic.h  |  19 -----
 arch/riscv/include/asm/cmpxchg.h | 116 -------------------------------
 2 files changed, 135 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 0dfe9d857a76..83636320ba95 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -249,16 +249,6 @@ c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n) \
 return __xchg_relaxed(&(v->counter), n, size); \
 } \
 static __always_inline \
-c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_acquire(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_release(&(v->counter), n, size); \
-} \
-static __always_inline \
 c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n) \
 { \
 return __xchg(&(v->counter), n, size); \
@@ -276,12 +266,6 @@ c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v, \
 return __cmpxchg_acquire(&(v->counter), o, n, size); \
 } \
 static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_release(&(v->counter), o, n, size); \
-} \
-static __always_inline \
 c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
 { \
 return __cmpxchg(&(v->counter), o, n, size); \
@@ -299,12 +283,9 @@ c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
 ATOMIC_OPS()

 #define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic_xchg_release arch_atomic_xchg_release
 #define arch_atomic_xchg arch_atomic_xchg
 #define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
 #define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg

 #undef ATOMIC_OPS

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 12debce235e5..67ab6375b650 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -44,76 +44,6 @@
 _x_, sizeof(*(ptr))); \
 })

-#define __xchg_acquire(ptr, new, size) \
-({ \
- __typeof__(ptr) __ptr = (ptr); \
- __typeof__(new) __new = (new); \
- __typeof__(*(ptr)) __ret; \
- switch (size) { \
- case 4: \
-  __asm__ __volatile__ ( \
-   " amoswap.w %0, %2, %1\n" \
-   RISCV_ACQUIRE_BARRIER \
-   : "=r" (__ret), "+A" (*__ptr) \
-   : "r" (__new) \
-   : "memory"); \
-  break; \
- case 8: \
-  __asm__ __volatile__ ( \
-   " amoswap.d %0, %2, %1\n" \
-   RISCV_ACQUIRE_BARRIER \
-   : "=r" (__ret), "+A" (*__ptr) \
-   : "r" (__new) \
-   : "memory"); \
-  break; \
- default: \
-  BUILD_BUG(); \
- } \
- __ret; \
-})
-
-#define arch_xchg_acquire(ptr, x) \
-({ \
- __typeof__(*(ptr)) _x_ = (x); \
- (__typeof__(*(ptr))) __xchg_acquire((ptr), \
-         _x_, sizeof(*(ptr))); \
-})
-
-#define __xchg_release(ptr, new, size) \
-({ \
- __typeof__(ptr) __ptr = (ptr); \
- __typeof__(new) __new = (new); \
- __typeof__(*(ptr)) __ret; \
- switch (size) { \
- case 4: \
-  __asm__ __volatile__ ( \
-   RISCV_RELEASE_BARRIER \
-   " amoswap.w %0, %2, %1\n" \
-   : "=r" (__ret), "+A" (*__ptr) \
-   : "r" (__new) \
-   : "memory"); \
-  break; \
- case 8: \
-  __asm__ __volatile__ ( \
-   RISCV_RELEASE_BARRIER \
-   " amoswap.d %0, %2, %1\n" \
-   : "=r" (__ret), "+A" (*__ptr) \
-   : "r" (__new) \
-   : "memory"); \
-  break; \
- default: \
-  BUILD_BUG(); \
- } \
- __ret; \
-})
-
-#define arch_xchg_release(ptr, x) \
-({ \
- __typeof__(*(ptr)) _x_ = (x); \
- (__typeof__(*(ptr))) __xchg_release((ptr), \
-         _x_, sizeof(*(ptr))); \
-})
-
 #define __xchg(ptr, new, size) \
 ({ \
 __typeof__(ptr) __ptr = (ptr); \
@@ -253,52 +183,6 @@
 _o_, _n_, sizeof(*(ptr))); \
 })

-#define __cmpxchg_release(ptr, old, new, size) \
-({ \
- __typeof__(ptr) __ptr = (ptr); \
- __typeof__(*(ptr)) __old = (old); \
- __typeof__(*(ptr)) __new = (new); \
- __typeof__(*(ptr)) __ret; \
- register unsigned int __rc; \
- switch (size) { \
- case 4: \
-  __asm__ __volatile__ ( \
-   RISCV_RELEASE_BARRIER \
-   "0: lr.w %0, %2\n" \
-   " bne %0, %z3, 1f\n" \
-   " sc.w %1, %z4, %2\n" \
-   " bnez %1, 0b\n" \
-   "1:\n" \
-   : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
-   : "rJ" ((long)__old), "rJ" (__new) \
-   : "memory"); \
-  break; \
- case 8: \
-  __asm__ __volatile__ ( \
-   RISCV_RELEASE_BARRIER \
-   "0: lr.d %0, %2\n" \
-   " bne %0, %z3, 1f\n" \
-   " sc.d %1, %z4, %2\n" \
-   " bnez %1, 0b\n" \
-   "1:\n" \
-   : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
-   : "rJ" (__old), "rJ" (__new) \
-   : "memory"); \
-  break; \
- default: \
-  BUILD_BUG(); \
- } \
- __ret; \
-})
-
-#define arch_cmpxchg_release(ptr, o, n) \
-({ \
- __typeof__(*(ptr)) _o_ = (o); \
- __typeof__(*(ptr)) _n_ = (n); \
- (__typeof__(*(ptr))) __cmpxchg_release((ptr), \
-     _o_, _n_, sizeof(*(ptr))); \
-})
-
 #define __cmpxchg(ptr, old, new, size) \
 ({ \
 __typeof__(ptr) __ptr = (ptr); \

From patchwork Mon Aug 8 07:13:10 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938519
From: guoren@kernel.org
Subject: [PATCH V9 07/15] riscv: cmpxchg: Remove xchg32 and xchg64
Date: Mon, 8 Aug 2022 03:13:10 -0400
Message-Id: <20220808071318.3335746-8-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

The xchg32 and xchg64 macros are unused, so remove them.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/include/asm/cmpxchg.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 67ab6375b650..567ed2e274c4 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -76,18 +76,6 @@
 (__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr))); \
 })

-#define xchg32(ptr, x) \
-({ \
- BUILD_BUG_ON(sizeof(*(ptr)) != 4); \
- arch_xchg((ptr), (x)); \
-})
-
-#define xchg64(ptr, x) \
-({ \
- BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
- arch_xchg((ptr), (x)); \
-})
-
 /*
  * Atomic compare and exchange. Compare OLD with MEM, if identical,
  * store NEW in MEM. Return the initial value in MEM. Success is

From patchwork Mon Aug 8 07:13:11 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938520
From: guoren@kernel.org
Subject: [PATCH V9 08/15] riscv: cmpxchg: Forbid arch_cmpxchg64 for 32-bit
Date: Mon, 8 Aug 2022 03:13:11 -0400
Message-Id: <20220808071318.3335746-9-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>
From: Guo Ren

RISC-V 32-bit does not support the lr.d/sc.d instructions, so using
arch_cmpxchg64 there would fail. Add compile-time checks to forbid it.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/include/asm/cmpxchg.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 567ed2e274c4..14c9280c7f7f 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -25,6 +25,7 @@
   : "memory"); \
  break; \
 case 8: \
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT)); \
  __asm__ __volatile__ ( \
   " amoswap.d %0, %2, %1\n" \
   : "=r" (__ret), "+A" (*__ptr) \
@@ -58,6 +59,7 @@
   : "memory"); \
  break; \
 case 8: \
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT)); \
  __asm__ __volatile__ ( \
   " amoswap.d.aqrl %0, %2, %1\n" \
   : "=r" (__ret), "+A" (*__ptr) \
@@ -101,6 +103,7 @@
   : "memory"); \
  break; \
 case 8: \
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT)); \
  __asm__ __volatile__ ( \
   "0: lr.d %0, %2\n" \
   " bne %0, %z3, 1f\n" \
@@ -146,6 +149,7 @@
   : "memory"); \
  break; \
 case 8: \
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT)); \
  __asm__ __volatile__ ( \
   "0: lr.d %0, %2\n" \
   " bne %0, %z3, 1f\n" \
@@ -192,6 +196,7 @@
   : "memory"); \
  break; \
 case 8: \
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT)); \
  __asm__ __volatile__ ( \
   "0: lr.d %0, %2\n" \
   " bne %0, %z3, 1f\n" \
@@ -220,6 +225,7 @@
 #define arch_cmpxchg_local(ptr, o, n) \
 (__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))

+#ifdef CONFIG_64BIT
 #define arch_cmpxchg64(ptr, o, n) \
 ({ \
 BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
@@ -231,5 +237,6 @@
 BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
 arch_cmpxchg_relaxed((ptr), (o), (n)); \
 })
+#endif /* CONFIG_64BIT */

 #endif /* _ASM_RISCV_CMPXCHG_H */
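The BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT)) pattern used here can be sketched in userspace with the classic negative-array-size trick. Everything below (`MY_BUILD_BUG_ON`, `my_cmpxchg`, the `sizeof(long) == 4` stand-in for CONFIG_32BIT) is illustrative and relies on the GCC/Clang `__atomic` builtins, not the kernel's macros:

```c
/* Sketch: make an unsupported 8-byte cmpxchg fail at compile time
 * instead of miscompiling, as the patch does for 32-bit RISC-V. */
#include <stdint.h>

/* Negative-size-array trick, like the kernel's BUILD_BUG_ON(). */
#define MY_BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

#define my_cmpxchg(ptr, o, n)						\
({									\
	/* 8-byte cmpxchg needs lr.d/sc.d, absent on 32-bit targets;	\
	 * sizeof(long) == 4 is a rough userspace proxy for that. */	\
	MY_BUILD_BUG_ON(sizeof(*(ptr)) == 8 && sizeof(long) == 4);	\
	__typeof__(*(ptr)) __old = (o);					\
	__atomic_compare_exchange_n((ptr), &__old, (n), 0,		\
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \
	__old; /* previous value, as cmpxchg returns */			\
})
```

On a 64-bit host the guard compiles away; on a 32-bit target an 8-byte use would be rejected by the compiler rather than silently generating wrong code.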
From patchwork Mon Aug 8 07:13:12 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938521
From: guoren@kernel.org
Subject: [PATCH V9 09/15] riscv: cmpxchg: Optimize cmpxchg64
Date: Mon, 8 Aug 2022 03:13:12 -0400
Message-Id: <20220808071318.3335746-10-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>
From: Guo Ren

Optimize cmpxchg64 by adding relaxed, acquire, and release implementations.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/include/asm/cmpxchg.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 14c9280c7f7f..4b5fa25f4336 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -226,6 +226,24 @@
 (__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))

 #ifdef CONFIG_64BIT
+#define arch_cmpxchg64_relaxed(ptr, o, n) \
+({ \
+ BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
+ arch_cmpxchg_relaxed((ptr), (o), (n)); \
+})
+
+#define arch_cmpxchg64_acquire(ptr, o, n) \
+({ \
+ BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
+ arch_cmpxchg_acquire((ptr), (o), (n)); \
+})
+
+#define arch_cmpxchg64_release(ptr, o, n) \
+({ \
+ BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
+ arch_cmpxchg_release((ptr), (o), (n)); \
+})
+
 #define arch_cmpxchg64(ptr, o, n) \
 ({ \
 BUILD_BUG_ON(sizeof(*(ptr)) != 8); \

From patchwork Mon Aug 8 07:13:13 2022
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12938522
From: guoren@kernel.org
Subject: [PATCH V9 10/15] riscv: Enable ARCH_INLINE_READ*/WRITE*/SPIN*
Date: Mon, 8 Aug 2022 03:13:13 -0400
Message-Id: <20220808071318.3335746-11-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Enable ARCH_INLINE_READ*/WRITE*/SPIN* when !PREEMPTION, copied from
arch/arm64. It reduces procedure-call overhead and improves performance.
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/Kconfig | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 51713e03c934..c3ca23bc6352 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -32,6 +32,32 @@ config RISCV
 select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
 select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 select ARCH_HAS_UBSAN_SANITIZE_ALL
+select ARCH_INLINE_READ_LOCK if !PREEMPTION
+select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
+select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
+select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
+select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
+select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
+select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
+select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
+select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
+select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
+select ARCH_INLINE_WRITE_LOCK_IRQ if !PREEMPTION
+select ARCH_INLINE_WRITE_LOCK_IRQSAVE if !PREEMPTION
+select ARCH_INLINE_WRITE_UNLOCK if !PREEMPTION
+select ARCH_INLINE_WRITE_UNLOCK_BH if !PREEMPTION
+select ARCH_INLINE_WRITE_UNLOCK_IRQ if !PREEMPTION
+select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE if !PREEMPTION
+select ARCH_INLINE_SPIN_TRYLOCK if !PREEMPTION
+select ARCH_INLINE_SPIN_TRYLOCK_BH if !PREEMPTION
+select ARCH_INLINE_SPIN_LOCK if !PREEMPTION
+select ARCH_INLINE_SPIN_LOCK_BH if !PREEMPTION
+select ARCH_INLINE_SPIN_LOCK_IRQ if !PREEMPTION
+select ARCH_INLINE_SPIN_LOCK_IRQSAVE if !PREEMPTION
+select ARCH_INLINE_SPIN_UNLOCK if !PREEMPTION
+select ARCH_INLINE_SPIN_UNLOCK_BH if !PREEMPTION
+select ARCH_INLINE_SPIN_UNLOCK_IRQ if !PREEMPTION
+select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE if !PREEMPTION
 select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
 select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
 select ARCH_STACKWALK

From patchwork Mon Aug 8 07:13:14 2022
From: guoren@kernel.org
To: palmer@rivosinc.com, heiko@sntech.de, hch@infradead.org, arnd@arndb.de, peterz@infradead.org, will@kernel.org, boqun.feng@gmail.com, longman@redhat.com, shorne@gmail.com, conor.dooley@microchip.com
Cc: linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Guo Ren, Guo Ren
Subject: [PATCH V9 11/15] riscv: Add qspinlock support
Date: Mon, 8 Aug 2022 03:13:14 -0400
Message-Id: <20220808071318.3335746-12-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Enable qspinlock, following the requirements set out in commit
a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of mixed-size
atomics"):
 - RISC-V atomic_*_release()/atomic_*_acquire() are implemented with
   their own relaxed versions plus an acquire/release fence for RCsc
   synchronization.
 - RISC-V LR/SC pairs provide a strong or weak forward-progress
   guarantee depending on the micro-architecture, and the RISC-V ISA
   spec sets out several constraints that let hardware support a strict
   forward-progress guarantee (RISC-V User ISA - 8.3 "Eventual Success
   of Store-Conditional Instructions"). Some RISC-V cores such as BOOMv3
   and XiangShan provide a strict & strong forward-progress guarantee
   (the cache line is kept in an exclusive state for a number of backoff
   cycles, and only this core's interrupt can break the LR/SC pair).
 - RISC-V provides a cheap atomic_fetch_or_acquire() with RCsc.
 - RISC-V only provides a relaxed xchg16 to support qspinlock.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/Kconfig               | 16 ++++++++++++++++
 arch/riscv/include/asm/Kbuild    |  2 ++
 arch/riscv/include/asm/cmpxchg.h | 24 ++++++++++++++++++++++++
 3 files changed, 42 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index c3ca23bc6352..8b36a4307d03 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -359,6 +359,22 @@ config NODES_SHIFT
 	  Specify the maximum number of NUMA Nodes available on the target
 	  system.  Increases memory reserved to accommodate various tables.

+choice
+	prompt "RISC-V spinlock type"
+	default RISCV_TICKET_SPINLOCKS
+
+config RISCV_TICKET_SPINLOCKS
+	bool "Using ticket spinlock"
+
+config RISCV_QUEUED_SPINLOCKS
+	bool "Using queued spinlock"
+	depends on SMP && MMU
+	select ARCH_USE_QUEUED_SPINLOCKS
+	help
+	  Make sure your micro arch LL/SC has a strong forward progress guarantee.
+	  Otherwise, stay at ticket-lock.
+endchoice
+
 config RISCV_ALTERNATIVE
 	bool
 	depends on !XIP_KERNEL
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 504f8b7e72d4..2cce98c7b653 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -2,7 +2,9 @@
 generic-y += early_ioremap.h
 generic-y += flat.h
 generic-y += kvm_para.h
+generic-y += mcs_spinlock.h
 generic-y += parport.h
+generic-y += qspinlock.h
 generic-y += spinlock.h
 generic-y += spinlock_types.h
 generic-y += qrwlock.h
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 4b5fa25f4336..2ba88057db52 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -11,12 +11,36 @@
 #include
 #include

+static inline ulong __xchg16_relaxed(ulong new, void *ptr)
+{
+	ulong ret, tmp;
+	ulong shif = ((ulong)ptr & 2) ? 16 : 0;
+	ulong mask = 0xffff << shif;
+	ulong *__ptr = (ulong *)((ulong)ptr & ~2);
+
+	__asm__ __volatile__ (
+		"0:	lr.w %0, %2\n"
+		"	and  %1, %0, %z3\n"
+		"	or   %1, %1, %z4\n"
+		"	sc.w %1, %1, %2\n"
+		"	bnez %1, 0b\n"
+		: "=&r" (ret), "=&r" (tmp), "+A" (*__ptr)
+		: "rJ" (~mask), "rJ" (new << shif)
+		: "memory");
+
+	return (ulong)((ret & mask) >> shif);
+}
+
 #define __xchg_relaxed(ptr, new, size)					\
 ({									\
 	__typeof__(ptr) __ptr = (ptr);					\
 	__typeof__(new) __new = (new);					\
 	__typeof__(*(ptr)) __ret;					\
 	switch (size) {							\
+	case 2: {							\
+		__ret = (__typeof__(*(ptr)))				\
+			__xchg16_relaxed((ulong)__new, __ptr);		\
+		break;}							\
 	case 4:								\
 		__asm__ __volatile__ (					\
 			"	amoswap.w %0, %2, %1\n"			\
From: guoren@kernel.org
To: palmer@rivosinc.com, heiko@sntech.de, hch@infradead.org, arnd@arndb.de, peterz@infradead.org, will@kernel.org, boqun.feng@gmail.com, longman@redhat.com, shorne@gmail.com, conor.dooley@microchip.com
Cc: linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Guo Ren, Guo Ren
Subject: [PATCH V9 12/15] riscv: Add combo spinlock support
Date: Mon, 8 Aug 2022 03:13:15 -0400
Message-Id: <20220808071318.3335746-13-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

The combo spinlock supports both the queued and the ticket spinlock in
one Linux image; the implementation is selected at boot time via a
command line option.
Here is the function-size (bytes) comparison table:

 TYPE                    : COMBO | TICKET | QUEUED
 arch_spin_lock          :   106 |     60 |     50
 arch_spin_unlock        :    54 |     36 |     26
 arch_spin_trylock       :   110 |     72 |     54
 arch_spin_is_locked     :    48 |     34 |     20
 arch_spin_is_contended  :    56 |     40 |     24
 arch_spin_value_unlocked:    48 |     34 |     24

One example of the disassembled combo arch_spin_unlock:

   0xffffffff8000409c <+14>: nop                # jump label slot
   0xffffffff800040a0 <+18>: fence rw,w         # queued spinlock start
   0xffffffff800040a4 <+22>: sb zero,0(a4)      # queued spinlock end
   0xffffffff800040a8 <+26>: ld s0,8(sp)
   0xffffffff800040aa <+28>: addi sp,sp,16
   0xffffffff800040ac <+30>: ret
   0xffffffff800040ae <+32>: lw a5,0(a4)        # ticket spinlock start
   0xffffffff800040b0 <+34>: sext.w a5,a5
   0xffffffff800040b2 <+36>: fence rw,w
   0xffffffff800040b6 <+40>: addiw a5,a5,1
   0xffffffff800040b8 <+42>: slli a5,a5,0x30
   0xffffffff800040ba <+44>: srli a5,a5,0x30
   0xffffffff800040bc <+46>: sh a5,0(a4)        # ticket spinlock end
   0xffffffff800040c0 <+50>: ld s0,8(sp)
   0xffffffff800040c2 <+52>: addi sp,sp,16
   0xffffffff800040c4 <+54>: ret

The qspinlock is smaller and faster than the ticket-lock when everything
stays on the fast path, and the combo spinlock provides a single
compatible Linux image for processors with different micro-architectural
designs (weak vs. strict forward-progress guarantees).
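The dispatch shape behind that disassembly can be sketched in plain C: a boolean stands in for the jump label (static key) that the kernel patches once at boot, after which only one of the two paths is ever taken. All names here are illustrative stubs, not the kernel's.

```c
#include <stdbool.h>
#include <assert.h>

static bool use_queued = true;   /* static_branch_disable() would clear it */

static int queued_calls, ticket_calls;

/* stand-ins for queued_spin_lock() / ticket_spin_lock() */
static void queued_spin_lock_stub(void) { queued_calls++; }
static void ticket_spin_lock_stub(void) { ticket_calls++; }

static void combo_spin_lock(void)
{
    if (use_queued)              /* static_branch_likely(&qspinlock_key) */
        queued_spin_lock_stub();
    else
        ticket_spin_lock_stub();
}
```

The real jump label removes even the conditional branch: the `nop` at `<+14>` above is rewritten to a jump (or left as a fall-through) at boot, so the chosen lock runs straight-line.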
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/Kconfig                |  9 +++-
 arch/riscv/include/asm/Kbuild     |  1 -
 arch/riscv/include/asm/spinlock.h | 77 +++++++++++++++++++++++++++++++
 arch/riscv/kernel/setup.c         | 22 +++++++++
 4 files changed, 107 insertions(+), 2 deletions(-)
 create mode 100644 arch/riscv/include/asm/spinlock.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 8b36a4307d03..6645f04c7da4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -361,7 +361,7 @@ config NODES_SHIFT

 choice
 	prompt "RISC-V spinlock type"
-	default RISCV_TICKET_SPINLOCKS
+	default RISCV_COMBO_SPINLOCKS

 config RISCV_TICKET_SPINLOCKS
 	bool "Using ticket spinlock"
@@ -373,6 +373,13 @@ config RISCV_QUEUED_SPINLOCKS
 	help
 	  Make sure your micro arch LL/SC has a strong forward progress guarantee.
 	  Otherwise, stay at ticket-lock.
+
+config RISCV_COMBO_SPINLOCKS
+	bool "Using combo spinlock"
+	depends on SMP && MMU
+	select ARCH_USE_QUEUED_SPINLOCKS
+	help
+	  Select queued spinlock or ticket-lock with jump_label.
 endchoice

 config RISCV_ALTERNATIVE
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 2cce98c7b653..59d5ea7390ea 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -5,7 +5,6 @@
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += parport.h
 generic-y += qspinlock.h
-generic-y += spinlock.h
 generic-y += spinlock_types.h
 generic-y += qrwlock.h
 generic-y += qrwlock_types.h
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..b079462d818b
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_RISCV_SPINLOCK_H
+#define __ASM_RISCV_SPINLOCK_H
+
+#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
+#include
+
+#undef arch_spin_is_locked
+#undef arch_spin_is_contended
+#undef arch_spin_value_unlocked
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_unlock
+
+#include
+#include
+
+#undef arch_spin_is_locked
+#undef arch_spin_is_contended
+#undef arch_spin_value_unlocked
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_unlock
+
+DECLARE_STATIC_KEY_TRUE(qspinlock_key);
+
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	if (static_branch_likely(&qspinlock_key))
+		queued_spin_lock(lock);
+	else
+		ticket_spin_lock(lock);
+}
+
+static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
+{
+	if (static_branch_likely(&qspinlock_key))
+		return queued_spin_trylock(lock);
+	return ticket_spin_trylock(lock);
+}
+
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	if (static_branch_likely(&qspinlock_key))
+		queued_spin_unlock(lock);
+	else
+		ticket_spin_unlock(lock);
+}
+
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	if (static_branch_likely(&qspinlock_key))
+		return queued_spin_value_unlocked(lock);
+	else
+		return ticket_spin_value_unlocked(lock);
+}
+
+static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
+{
+	if (static_branch_likely(&qspinlock_key))
+		return queued_spin_is_locked(lock);
+	return ticket_spin_is_locked(lock);
+}
+
+static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
+{
+	if (static_branch_likely(&qspinlock_key))
+		return queued_spin_is_contended(lock);
+	return ticket_spin_is_contended(lock);
+}
+#include
+#else
+#include
+#endif /* CONFIG_RISCV_COMBO_SPINLOCKS */
+
+#endif /* __ASM_RISCV_SPINLOCK_H */
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index f0f36a4a0e9b..b763039bf49b 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -261,6 +261,13 @@ static void __init parse_dtb(void)
 #endif
 }

+#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
+DEFINE_STATIC_KEY_TRUE_RO(qspinlock_key);
+EXPORT_SYMBOL(qspinlock_key);
+
+static bool qspinlock_flag __initdata = false;
+#endif
+
 void __init setup_arch(char **cmdline_p)
 {
 	parse_dtb();
@@ -295,10 +302,25 @@ void __init setup_arch(char **cmdline_p)
 	setup_smp();
 #endif

+#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
+	if (!qspinlock_flag)
+		static_branch_disable(&qspinlock_key);
+#endif
+
 	riscv_fill_hwcap();
 	apply_boot_alternatives();
 }

+#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
+static int __init enable_qspinlock(char *p)
+{
+	qspinlock_flag = true;
+
+	return 0;
+}
+early_param("qspinlock", enable_qspinlock);
+#endif
+
 static int __init topology_init(void)
 {
 	int i, ret;
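The setup.c hunk registers `enable_qspinlock` for a bare `qspinlock` token on the kernel command line via `early_param()`. Functionally, recognizing that token amounts to something like the simplified scanner below; the kernel's real parser also handles `param=value` forms, quoting, and per-parameter callbacks, so this is only a sketch of the matching rule.

```c
#include <string.h>
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/* true if `param` appears in `cmdline` as a whole word (optionally
 * followed by '='), e.g. "console=ttyS0 qspinlock" matches "qspinlock" */
static bool cmdline_has_param(const char *cmdline, const char *param)
{
    size_t plen = strlen(param);
    const char *p = cmdline;

    while ((p = strstr(p, param)) != NULL) {
        bool starts_token = (p == cmdline) || (p[-1] == ' ');
        char end = p[plen];

        if (starts_token && (end == '\0' || end == ' ' || end == '='))
            return true;
        p += plen;   /* substring hit inside another token; keep looking */
    }
    return false;
}
```

With this in place, booting with `qspinlock` on the command line leaves the static key enabled, while omitting it makes `setup_arch()` call `static_branch_disable()` and fall back to the ticket lock.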
From: guoren@kernel.org
To: palmer@rivosinc.com, heiko@sntech.de, hch@infradead.org, arnd@arndb.de, peterz@infradead.org, will@kernel.org, boqun.feng@gmail.com, longman@redhat.com, shorne@gmail.com, conor.dooley@microchip.com
Cc: linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Guo Ren, Guo Ren, Jonas Bonn, Stefan Kristiansson
Subject: [PATCH V9 13/15] openrisc: cmpxchg: Cleanup unnecessary codes
Date: Mon, 8 Aug 2022 03:13:16 -0400
Message-Id: <20220808071318.3335746-14-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Remove cmpxchg_small and xchg_small; they are unnecessary now, and they
break the forward-progress guarantee for atomic operations. Also remove
the unnecessary __HAVE_ARCH_CMPXCHG.
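After this cleanup, `arch_cmpxchg` keeps only the 32-bit path and rejects other sizes at compile time with `BUILD_BUG()`. The contract of the remaining `cmpxchg32()` — always return the previous value, storing the new one only if the previous value equalled the expected one — can be sketched with C11 atomics (the kernel version is l.lwa/l.swa assembly; this portable stand-in is ours):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <assert.h>

/* return the value that was in *ptr; store `new` only if it was `old` */
static uint32_t cmpxchg32_sketch(_Atomic uint32_t *ptr, uint32_t old,
                                 uint32_t new)
{
    /* on failure, C11 writes the observed value back into `old` */
    atomic_compare_exchange_strong_explicit(ptr, &old, new,
                                            memory_order_seq_cst,
                                            memory_order_seq_cst);
    return old;
}

/* self-check: one successful swap, one refused swap */
static int cmpxchg_demo(void)
{
    _Atomic uint32_t v = 5;
    int ok = cmpxchg32_sketch(&v, 5, 7) == 5;      /* swap happens */
    ok = ok && cmpxchg32_sketch(&v, 5, 9) == 7;    /* swap refused */
    return ok && atomic_load(&v) == 7;
}
```

Callers detect success by comparing the return value with the expected old value, which is exactly how the removed byte/halfword emulation looped — and why that emulation could retry indefinitely without a hardware forward-progress guarantee.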
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
Cc: Stafford Horne
Cc: Jonas Bonn
Cc: Stefan Kristiansson
---
 arch/openrisc/include/asm/cmpxchg.h | 167 +++++++++-------------------
 1 file changed, 50 insertions(+), 117 deletions(-)

diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
index 79fd16162ccb..df83b33b5882 100644
--- a/arch/openrisc/include/asm/cmpxchg.h
+++ b/arch/openrisc/include/asm/cmpxchg.h
@@ -20,10 +20,8 @@
 #include
 #include

-#define __HAVE_ARCH_CMPXCHG 1
-
-static inline unsigned long cmpxchg_u32(volatile void *ptr,
-		unsigned long old, unsigned long new)
+/* cmpxchg */
+static inline u32 cmpxchg32(volatile void *ptr, u32 old, u32 new)
 {
 	__asm__ __volatile__(
 		"1:	l.lwa %0, 0(%1)		\n"
@@ -41,8 +39,33 @@ static inline unsigned long cmpxchg_u32(volatile void *ptr,
 	return old;
 }

-static inline unsigned long xchg_u32(volatile void *ptr,
-		unsigned long val)
+#define __cmpxchg(ptr, old, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret;					\
+	switch (size) {							\
+	case 4:								\
+		__ret = (__typeof__(*(ptr)))				\
+			cmpxchg32(__ptr, (u32)__old, (u32)__new);	\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define arch_cmpxchg(ptr, o, n)						\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg((ptr),				\
+				       _o_, _n_, sizeof(*(ptr)));	\
+})
+
+/* xchg */
+static inline u32 xchg32(volatile void *ptr, u32 val)
 {
 	__asm__ __volatile__(
 		"1:	l.lwa %0, 0(%1)		\n"
@@ -56,116 +79,26 @@ static inline unsigned long xchg_u32(volatile void *ptr,
 	return val;
 }

-static inline u32 cmpxchg_small(volatile void *ptr, u32 old, u32 new,
-				int size)
-{
-	int off = (unsigned long)ptr % sizeof(u32);
-	volatile u32 *p = ptr - off;
-#ifdef __BIG_ENDIAN
-	int bitoff = (sizeof(u32) - size - off) * BITS_PER_BYTE;
-#else
-	int bitoff = off * BITS_PER_BYTE;
-#endif
-	u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
-	u32 load32, old32, new32;
-	u32 ret;
-
-	load32 = READ_ONCE(*p);
-
-	while (true) {
-		ret = (load32 & bitmask) >> bitoff;
-		if (old != ret)
-			return ret;
-
-		old32 = (load32 & ~bitmask) | (old << bitoff);
-		new32 = (load32 & ~bitmask) | (new << bitoff);
-
-		/* Do 32 bit cmpxchg */
-		load32 = cmpxchg_u32(p, old32, new32);
-		if (load32 == old32)
-			return old;
-	}
-}
-
-/* xchg */
-
-static inline u32 xchg_small(volatile void *ptr, u32 x, int size)
-{
-	int off = (unsigned long)ptr % sizeof(u32);
-	volatile u32 *p = ptr - off;
-#ifdef __BIG_ENDIAN
-	int bitoff = (sizeof(u32) - size - off) * BITS_PER_BYTE;
-#else
-	int bitoff = off * BITS_PER_BYTE;
-#endif
-	u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
-	u32 oldv, newv;
-	u32 ret;
-
-	do {
-		oldv = READ_ONCE(*p);
-		ret = (oldv & bitmask) >> bitoff;
-		newv = (oldv & ~bitmask) | (x << bitoff);
-	} while (cmpxchg_u32(p, oldv, newv) != oldv);
-
-	return ret;
-}
-
-/*
- * This function doesn't exist, so you'll get a linker error
- * if something tries to do an invalid cmpxchg().
- */
-extern unsigned long __cmpxchg_called_with_bad_pointer(void)
-	__compiletime_error("Bad argument size for cmpxchg");
-
-static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
-		unsigned long new, int size)
-{
-	switch (size) {
-	case 1:
-	case 2:
-		return cmpxchg_small(ptr, old, new, size);
-	case 4:
-		return cmpxchg_u32(ptr, old, new);
-	default:
-		return __cmpxchg_called_with_bad_pointer();
-	}
-}
-
-#define arch_cmpxchg(ptr, o, n)						\
-	({								\
-		(__typeof__(*(ptr))) __cmpxchg((ptr),			\
-					       (unsigned long)(o),	\
-					       (unsigned long)(n),	\
-					       sizeof(*(ptr)));		\
-	})
-
-/*
- * This function doesn't exist, so you'll get a linker error if
- * something tries to do an invalidly-sized xchg().
- */
-extern unsigned long __xchg_called_with_bad_pointer(void)
-	__compiletime_error("Bad argument size for xchg");
-
-static inline unsigned long __xchg(volatile void *ptr, unsigned long with,
-		int size)
-{
-	switch (size) {
-	case 1:
-	case 2:
-		return xchg_small(ptr, with, size);
-	case 4:
-		return xchg_u32(ptr, with);
-	default:
-		return __xchg_called_with_bad_pointer();
-	}
-}
-
-#define arch_xchg(ptr, with)						\
-	({								\
-		(__typeof__(*(ptr))) __xchg((ptr),			\
-					    (unsigned long)(with),	\
-					    sizeof(*(ptr)));		\
-	})
+#define __xchg(ptr, new, size)						\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	switch (size) {							\
+	case 4:								\
+		__ret = (__typeof__(*(ptr)))				\
+			xchg32(__ptr, (u32)__new);			\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define arch_xchg(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr)));	\
+})

 #endif /* __ASM_OPENRISC_CMPXCHG_H */
From: guoren@kernel.org
To: palmer@rivosinc.com, heiko@sntech.de, hch@infradead.org, arnd@arndb.de, peterz@infradead.org, will@kernel.org, boqun.feng@gmail.com, longman@redhat.com, shorne@gmail.com, conor.dooley@microchip.com
Cc: linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Guo Ren, Guo Ren, Jonas Bonn, Stefan Kristiansson
Subject: [PATCH V9 14/15] openrisc: Move from ticket-lock to qspinlock
Date: Mon, 8 Aug 2022 03:13:17 -0400
Message-Id: <20220808071318.3335746-15-guoren@kernel.org>
In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org>

From: Guo Ren

Enable qspinlock, following the requirements set out in commit
a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of mixed-size
atomics").

OpenRISC only has "l.lwa/l.swa" for all atomic operations. That means
its ll/sc pair must give a strong atomic forward-progress guarantee, or
any atomic operation could live-lock. The ticket-lock needs
atomic_fetch_add with a well-defined forward-progress guarantee under
contention, and qspinlock needs the same from xchg16. Since the
atomic_fetch_add (l.lwa + add + l.swa) and xchg16 (l.lwa + and + or +
l.swa) implementations are similar, they have the same forward-progress
guarantee.

The qspinlock is smaller and faster than the ticket-lock when
everything stays on the fast path, so there is no reason to keep
OpenRISC on the ticket-lock instead of qspinlock.
Here is the comparison between qspinlock and ticket-lock in fast-path
code sizes (bytes):

 TYPE                    : TICKET | QUEUED
 arch_spin_lock          :    128 |     96
 arch_spin_unlock        :     56 |     44
 arch_spin_trylock       :    108 |     80
 arch_spin_is_locked     :     36 |     36
 arch_spin_is_contended  :     36 |     36
 arch_spin_value_unlocked:     28 |     28

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
Cc: Stafford Horne
Cc: Jonas Bonn
Cc: Stefan Kristiansson
---
 arch/openrisc/Kconfig               |  1 +
 arch/openrisc/include/asm/Kbuild    |  2 ++
 arch/openrisc/include/asm/cmpxchg.h | 25 +++++++++++++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index c7f282f60f64..1652a6aac882 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,6 +10,7 @@ config OPENRISC
 	select ARCH_HAS_DMA_SET_UNCACHED
 	select ARCH_HAS_DMA_CLEAR_UNCACHED
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+	select ARCH_USE_QUEUED_SPINLOCKS
 	select COMMON_CLK
 	select OF
 	select OF_EARLY_FLATTREE
diff --git a/arch/openrisc/include/asm/Kbuild b/arch/openrisc/include/asm/Kbuild
index c8c99b554ca4..ad147fec50b4 100644
--- a/arch/openrisc/include/asm/Kbuild
+++ b/arch/openrisc/include/asm/Kbuild
@@ -2,6 +2,8 @@
 generic-y += extable.h
 generic-y += kvm_para.h
 generic-y += parport.h
+generic-y += mcs_spinlock.h
+generic-y += qspinlock.h
 generic-y += spinlock_types.h
 generic-y += spinlock.h
 generic-y += qrwlock_types.h
diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
index df83b33b5882..2d650b07a0f4 100644
--- a/arch/openrisc/include/asm/cmpxchg.h
+++ b/arch/openrisc/include/asm/cmpxchg.h
@@ -65,6 +65,27 @@ static inline u32 cmpxchg32(volatile void *ptr, u32 old, u32 new)
 })

 /* xchg */
+static inline u32 xchg16(volatile void *ptr, u32 val)
+{
+	u32 ret, tmp;
+	u32 shif = ((ulong)ptr & 2) ? 16 : 0;
+	u32 mask = 0xffff << shif;
+	u32 *__ptr = (u32 *)((ulong)ptr & ~2);
+
+	__asm__ __volatile__(
+		"1:	l.lwa %0, 0(%2)	\n"
+		"	l.and %1, %0, %3	\n"
+		"	l.or  %1, %1, %4	\n"
+		"	l.swa 0(%2), %1	\n"
+		"	l.bnf 1b		\n"
+		"	l.nop			\n"
+		: "=&r" (ret), "=&r" (tmp)
+		: "r" (__ptr), "r" (~mask), "r" (val << shif)
+		: "cc", "memory");
+
+	return (ret & mask) >> shif;
+}
+
 static inline u32 xchg32(volatile void *ptr, u32 val)
 {
 	__asm__ __volatile__(
@@ -85,6 +106,10 @@ static inline u32 xchg32(volatile void *ptr, u32 val)
 	__typeof__(new) __new = (new);					\
 	__typeof__(*(ptr)) __ret;					\
 	switch (size) {							\
+	case 2:								\
+		__ret = (__typeof__(*(ptr)))				\
+			xchg16(__ptr, (u32)__new);			\
+		break;							\
 	case 4:								\
 		__ret = (__typeof__(*(ptr)))				\
 			xchg32(__ptr, (u32)__new);			\
zzuXMtRRBcAEw6DFEQzpjIZulYJ8gBl6iOOP1wJk3jdZB636LtO7H0OmO5+aQyI7eQsu3W7gBOTby +LXBQs8kEoEmJwhtpeJJ3GFQtvymFT3zjlq/PPeTacKKeO8mYC837lKqdlGWYUith52HgQV3UMIAl 9lyb0y59sXIzoKBqH7NnEwysoRbXhlKJx/fGXY7hZI3e7orIVmhUlOzHq7/jVwOKxYBZxpIEKIZLS ibExN/4ZFHl072bT9XYg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oKwyl-00C2sx-B0; Mon, 08 Aug 2022 07:15:11 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oKwyh-00C2p5-Vg for linux-riscv@lists.infradead.org; Mon, 08 Aug 2022 07:15:09 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 9E847B80DD7; Mon, 8 Aug 2022 07:15:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45698C43140; Mon, 8 Aug 2022 07:14:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1659942905; bh=PhULA4jDsByMhb+wbohiUiRVZy/cDd8Zyeeyl+ikEoM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ij7fgoAK4RLF2jeypOvXde6SXvGiYiFhg2010qwe0MdJ9Drq0uk6JyUrUGSk0qmpo lSuJlh3/uII5ljQLythYRGYHzJGx/bkutxshAqJngPETLEfN7pi9ht4TkTwlrjU9gF CGhuJfdqMH3ve2+ybv38bhEYh/shj/NyvqKy5RAKg1zOFi8c/NGOKIL3pnhJ2eE2dE sQBC9QyOCN2MmpycM+wVlOJbGY2VdSQWZmcMBdGbWU6FAcQRYQabrN820IvJuQgwmD UUcTZa54wde3tJLkR4S1jq0R+3kLM6xY6kDrJTLO5hfp7HA65zoGFDpjrA7vn1mHVo z9CiOGLJiZ5mQ== From: guoren@kernel.org To: palmer@rivosinc.com, heiko@sntech.de, hch@infradead.org, arnd@arndb.de, peterz@infradead.org, will@kernel.org, boqun.feng@gmail.com, longman@redhat.com, shorne@gmail.com, conor.dooley@microchip.com Cc: linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Guo Ren , Guo Ren Subject: [PATCH V9 15/15] csky: spinlock: Use the 
generic header files Date: Mon, 8 Aug 2022 03:13:18 -0400 Message-Id: <20220808071318.3335746-16-guoren@kernel.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220808071318.3335746-1-guoren@kernel.org> References: <20220808071318.3335746-1-guoren@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220808_001508_198689_B1F9F470 X-CRM114-Status: GOOD ( 11.88 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Guo Ren There is no difference between csky and generic, so use the generic header. Signed-off-by: Guo Ren Signed-off-by: Guo Ren --- arch/csky/include/asm/Kbuild | 2 ++ arch/csky/include/asm/spinlock.h | 12 ------------ arch/csky/include/asm/spinlock_types.h | 9 --------- 3 files changed, 2 insertions(+), 21 deletions(-) delete mode 100644 arch/csky/include/asm/spinlock.h delete mode 100644 arch/csky/include/asm/spinlock_types.h diff --git a/arch/csky/include/asm/Kbuild b/arch/csky/include/asm/Kbuild index 1117c28cb7e8..c08050fc0cce 100644 --- a/arch/csky/include/asm/Kbuild +++ b/arch/csky/include/asm/Kbuild @@ -7,6 +7,8 @@ generic-y += mcs_spinlock.h generic-y += qrwlock.h generic-y += qrwlock_types.h generic-y += qspinlock.h +generic-y += spinlock_types.h +generic-y += spinlock.h generic-y += parport.h generic-y += user.h generic-y += vmlinux.lds.h diff --git a/arch/csky/include/asm/spinlock.h b/arch/csky/include/asm/spinlock.h deleted file mode 100644 index 83a2005341f5..000000000000 --- a/arch/csky/include/asm/spinlock.h +++ /dev/null @@ -1,12 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ - -#ifndef __ASM_CSKY_SPINLOCK_H -#define __ASM_CSKY_SPINLOCK_H - -#include -#include - -/* See include/linux/spinlock.h */ -#define smp_mb__after_spinlock() 
smp_mb() - -#endif /* __ASM_CSKY_SPINLOCK_H */ diff --git a/arch/csky/include/asm/spinlock_types.h b/arch/csky/include/asm/spinlock_types.h deleted file mode 100644 index 75bdf3af80ba..000000000000 --- a/arch/csky/include/asm/spinlock_types.h +++ /dev/null @@ -1,9 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ - -#ifndef __ASM_CSKY_SPINLOCK_TYPES_H -#define __ASM_CSKY_SPINLOCK_TYPES_H - -#include -#include - -#endif /* __ASM_CSKY_SPINLOCK_TYPES_H */