From patchwork Sat Apr 17 04:45:28 2021
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 12209539
From: guoren@kernel.org
To: guoren@kernel.org, peterz@infradead.org
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, Guo Ren, Arnd Bergmann
Subject: [PATCH v2 (RESEND) 1/2] locking/atomics: Fixup GENERIC_ATOMIC64 conflict with atomic-arch-fallback.h
Date: Sat, 17 Apr 2021 04:45:28 +0000
Message-Id: <1618634729-88821-1-git-send-email-guoren@kernel.org>

From: Guo Ren

The current GENERIC_ATOMIC64 support in atomic-arch-fallback.h is broken:
a 32-bit arch that uses atomic-arch-fallback.h gets a compile error.

In file included from include/linux/atomic.h:81,
                 from include/linux/rcupdate.h:25,
                 from include/linux/rculist.h:11,
                 from include/linux/pid.h:5,
                 from include/linux/sched.h:14,
                 from arch/riscv/kernel/asm-offsets.c:10:
include/linux/atomic-arch-fallback.h: In function 'arch_atomic64_inc':
>> include/linux/atomic-arch-fallback.h:1447:2: error: implicit declaration of function 'arch_atomic64_add'; did you mean 'arch_atomic_add'? [-Werror=implicit-function-declaration]
    1447 |  arch_atomic64_add(1, v);
         |  ^~~~~~~~~~~~~~~~~
         |  arch_atomic_add

atomic-arch-fallback.h, atomic-fallback.h and atomic-instrumented.h are
generated by gen-atomic-fallback.sh and gen-atomic-instrumented.sh, so
only those scripts need to be touched.

Also add atomic64_* wrappers to atomic64.h, so that atomic64.h no longer
depends on atomic-*-fallback.h.

Changes in v2:
 - Fix up scripts/atomic/gen-atomic-instrumented.sh, which exported
   duplicated definitions.

Signed-off-by: Guo Ren
Cc: Peter Zijlstra
Cc: Arnd Bergmann
---
 include/asm-generic/atomic-instrumented.h | 264 +++++++++++++++---------------
 include/asm-generic/atomic64.h            |  89 ++++++++++
 include/linux/atomic-arch-fallback.h      |   5 +-
 include/linux/atomic-fallback.h           |   5 +-
 scripts/atomic/gen-atomic-fallback.sh     |   3 +-
 scripts/atomic/gen-atomic-instrumented.sh |  23 ++-
 6 files changed, 251 insertions(+), 138 deletions(-)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 888b6cf..c9e69c6 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -831,6 +831,137 @@ atomic_dec_if_positive(atomic_t *v) #define atomic_dec_if_positive atomic_dec_if_positive #endif +#if !defined(arch_xchg_relaxed) || defined(arch_xchg) +#define xchg(ptr, ...)
\ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_xchg(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_xchg_acquire) +#define xchg_acquire(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_xchg_release) +#define xchg_release(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_xchg_release(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_xchg_relaxed) +#define xchg_relaxed(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if !defined(arch_cmpxchg_relaxed) || defined(arch_cmpxchg) +#define cmpxchg(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_cmpxchg_acquire) +#define cmpxchg_acquire(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_cmpxchg_release) +#define cmpxchg_release(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_cmpxchg_relaxed) +#define cmpxchg_relaxed(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ +}) +#endif + +#if !defined(arch_try_cmpxchg_relaxed) || defined(arch_try_cmpxchg) +#define try_cmpxchg(ptr, oldp, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + typeof(oldp) __ai_oldp = (oldp); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ + arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_try_cmpxchg_acquire) +#define try_cmpxchg_acquire(ptr, oldp, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + typeof(oldp) __ai_oldp = (oldp); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ + arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_try_cmpxchg_release) +#define try_cmpxchg_release(ptr, oldp, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + typeof(oldp) __ai_oldp = (oldp); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ + arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ +}) +#endif + +#if defined(arch_try_cmpxchg_relaxed) +#define try_cmpxchg_relaxed(ptr, oldp, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + typeof(oldp) __ai_oldp = (oldp); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ + arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ +}) +#endif + +#define cmpxchg_local(ptr, ...) \ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ +}) + +#define sync_cmpxchg(ptr, ...) 
\ +({ \ + typeof(ptr) __ai_ptr = (ptr); \ + instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ + arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ +}) + +#ifndef CONFIG_GENERIC_ATOMIC64 static __always_inline s64 atomic64_read(const atomic64_t *v) { @@ -1641,78 +1772,6 @@ atomic64_dec_if_positive(atomic64_t *v) #define atomic64_dec_if_positive atomic64_dec_if_positive #endif -#if !defined(arch_xchg_relaxed) || defined(arch_xchg) -#define xchg(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_xchg_acquire) -#define xchg_acquire(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_xchg_release) -#define xchg_release(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_release(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_xchg_relaxed) -#define xchg_relaxed(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if !defined(arch_cmpxchg_relaxed) || defined(arch_cmpxchg) -#define cmpxchg(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_cmpxchg_acquire) -#define cmpxchg_acquire(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_cmpxchg_release) -#define cmpxchg_release(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_cmpxchg_relaxed) -#define cmpxchg_relaxed(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ -}) -#endif - #if !defined(arch_cmpxchg64_relaxed) || defined(arch_cmpxchg64) #define cmpxchg64(ptr, ...) \ ({ \ @@ -1749,57 +1808,6 @@ atomic64_dec_if_positive(atomic64_t *v) }) #endif -#if !defined(arch_try_cmpxchg_relaxed) || defined(arch_try_cmpxchg) -#define try_cmpxchg(ptr, oldp, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - typeof(oldp) __ai_oldp = (oldp); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_try_cmpxchg_acquire) -#define try_cmpxchg_acquire(ptr, oldp, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - typeof(oldp) __ai_oldp = (oldp); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_try_cmpxchg_release) -#define try_cmpxchg_release(ptr, oldp, ...) 
\ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - typeof(oldp) __ai_oldp = (oldp); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ -}) -#endif - -#if defined(arch_try_cmpxchg_relaxed) -#define try_cmpxchg_relaxed(ptr, oldp, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - typeof(oldp) __ai_oldp = (oldp); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ -}) -#endif - -#define cmpxchg_local(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ -}) - #define cmpxchg64_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ @@ -1807,13 +1815,7 @@ atomic64_dec_if_positive(atomic64_t *v) arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \ }) -#define sync_cmpxchg(ptr, ...) \ -({ \ - typeof(ptr) __ai_ptr = (ptr); \ - instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ -}) - +#endif #define cmpxchg_double(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ @@ -1830,4 +1832,4 @@ atomic64_dec_if_positive(atomic64_t *v) }) #endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */ -// 4bec382e44520f4d8267e42620054db26a659ea3 +// 21c7f7e074cb2cb7e2f593fe7c8f6dec6ab9e7ea diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index 370f01d..bb5cf1e 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -34,6 +34,18 @@ extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v); ATOMIC64_OPS(add) ATOMIC64_OPS(sub) +#define atomic64_add_relaxed atomic64_add +#define atomic64_add_acquire atomic64_add +#define atomic64_add_release atomic64_add + +#define atomic64_add_return_relaxed atomic64_add_return +#define atomic64_add_return_acquire atomic64_add_return +#define atomic64_add_return_release atomic64_add_return + +#define atomic64_fetch_add_relaxed atomic64_fetch_add +#define atomic64_fetch_add_acquire atomic64_fetch_add +#define atomic64_fetch_add_release atomic64_fetch_add + #undef ATOMIC64_OPS #define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_FETCH_OP(op) @@ -49,8 +61,85 @@ ATOMIC64_OPS(xor) extern s64 atomic64_dec_if_positive(atomic64_t *v); #define atomic64_dec_if_positive atomic64_dec_if_positive extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n); +#define atomic64_cmpxchg_relaxed atomic64_cmpxchg +#define atomic64_cmpxchg_acquire atomic64_cmpxchg +#define atomic64_cmpxchg_release atomic64_cmpxchg extern s64 atomic64_xchg(atomic64_t *v, s64 new); +#define atomic64_xchg_relaxed atomic64_xchg +#define atomic64_xchg_acquire atomic64_xchg +#define atomic64_xchg_release atomic64_xchg extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u); #define atomic64_fetch_add_unless atomic64_fetch_add_unless +static __always_inline void +atomic64_inc(atomic64_t *v) +{ + atomic64_add(1, v); +} + +static __always_inline s64 +atomic64_inc_return(atomic64_t *v) +{ + return atomic64_add_return(1, v); +} + +static __always_inline s64 +atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_add(1, v); +} + +static __always_inline void +atomic64_dec(atomic64_t *v) +{ + atomic64_sub(1, v); +} + +static __always_inline s64 +atomic64_dec_return(atomic64_t *v) +{ + return atomic64_sub_return(1, v); +} + +static __always_inline s64 +atomic64_fetch_dec(atomic64_t *v) +{ + 
return atomic64_fetch_sub(1, v); +} + +static __always_inline void +atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_and(~i, v); +} + +static __always_inline s64 +atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(~i, v); +} + +static __always_inline bool +atomic64_sub_and_test(int i, atomic64_t *v) +{ + return atomic64_sub_return(i, v) == 0; +} + +static __always_inline bool +atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_return(v) == 0; +} + +static __always_inline bool +atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_return(v) == 0; +} + +static __always_inline bool +atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_return(i, v) < 0; +} #endif /* _ASM_GENERIC_ATOMIC64_H */ diff --git a/include/linux/atomic-arch-fallback.h b/include/linux/atomic-arch-fallback.h index a3dba31..2f1db6a 100644 --- a/include/linux/atomic-arch-fallback.h +++ b/include/linux/atomic-arch-fallback.h @@ -1252,7 +1252,7 @@ arch_atomic_dec_if_positive(atomic_t *v) #ifdef CONFIG_GENERIC_ATOMIC64 #include -#endif +#else #ifndef arch_atomic64_read_acquire static __always_inline s64 @@ -2357,5 +2357,6 @@ arch_atomic64_dec_if_positive(atomic64_t *v) #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive #endif +#endif /* CONFIG_GENERIC_ATOMIC64 */ #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// cca554917d7ea73d5e3e7397dd70c484cad9b2c4 +// ae31a21075855e67a9b2927f8241dedddafda046 diff --git a/include/linux/atomic-fallback.h b/include/linux/atomic-fallback.h index 2a3f55d..7dda483 100644 --- a/include/linux/atomic-fallback.h +++ b/include/linux/atomic-fallback.h @@ -1369,7 +1369,7 @@ atomic_dec_if_positive(atomic_t *v) #ifdef CONFIG_GENERIC_ATOMIC64 #include -#endif +#else #define arch_atomic64_read atomic64_read #define arch_atomic64_read_acquire atomic64_read_acquire @@ -2591,5 +2591,6 @@ atomic64_dec_if_positive(atomic64_t *v) #define atomic64_dec_if_positive atomic64_dec_if_positive #endif +#endif /* CONFIG_GENERIC_ATOMIC64 */ #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// d78e6c293c661c15188f0ec05bce45188c8d5892 +// b809c8e3c88910826f765bdba4a74f21c527029d diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 317a6ce..8b7a685 100755 --- a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -247,7 +247,7 @@ done cat < -#endif +#else EOF @@ -256,5 +256,6 @@ grep '^[a-z]' "$1" | while read name meta args; do done cat <
X-Patchwork-Id: 12209541
From: guoren@kernel.org
To: guoren@kernel.org, peterz@infradead.org
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org, linux-arch@vger.kernel.org, Guo Ren, Arnd Bergmann, Anup Patel, Palmer Dabbelt
Subject: [PATCH v2 (RESEND) 2/2] riscv: atomic: Using ARCH_ATOMIC in asm/atomic.h
Date: Sat, 17 Apr 2021 04:45:29 +0000
Message-Id: <1618634729-88821-2-git-send-email-guoren@kernel.org>
In-Reply-To: <1618634729-88821-1-git-send-email-guoren@kernel.org>
References: <1618634729-88821-1-git-send-email-guoren@kernel.org>
From: Guo Ren

linux/atomic-arch-fallback.h has been around for a while, but only x86 and
arm64 use it. Let's make riscv follow the linux/arch/* development trend,
which also makes the code more readable and maintainable.

This patch also cleans up some code:
 - Add the atomic_andnot_* operations
 - Use the amoswap.w.rl and amoswap.w.aq instructions in xchg
 - Remove the unnecessary cmpxchg_acquire/release optimizations

Changes in v2:
 - Fix up the andnot bug pointed out by Peter Zijlstra

Signed-off-by: Guo Ren
Link: https://lore.kernel.org/linux-riscv/CAK8P3a0FG3cpqBNUP7kXj3713cMUqV1WcEh-vcRnGKM00WXqxw@mail.gmail.com/
Cc: Arnd Bergmann
Cc: Peter Zijlstra
Cc: Anup Patel
Cc: Palmer Dabbelt
---
Signed-off-by: Guo Ren
---
 arch/riscv/include/asm/atomic.h  | 230 +++++++++++++++------------------
 arch/riscv/include/asm/cmpxchg.h | 199 ++-------------------------------
 2 files changed, 99 insertions(+), 330 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 400a8c8..b127cb1 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -8,13 +8,8 @@ #ifndef _ASM_RISCV_ATOMIC_H #define _ASM_RISCV_ATOMIC_H -#ifdef CONFIG_GENERIC_ATOMIC64 -# include -#else -# if (__riscv_xlen < 64) -# error "64-bit atomics require XLEN to be at least 64" -# endif -#endif +#include +#include #include #include @@ -25,25 +20,13 @@ #define __atomic_release_fence() \ __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory"); -static __always_inline int atomic_read(const atomic_t *v) -{ - return READ_ONCE(v->counter); -} -static __always_inline void atomic_set(atomic_t *v, int i) -{ - WRITE_ONCE(v->counter, i); -} +#define arch_atomic_read(v) __READ_ONCE((v)->counter) +#define arch_atomic_set(v, i) __WRITE_ONCE(((v)->counter), (i)) #ifndef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC64_INIT(i) { (i) } -static __always_inline s64 atomic64_read(const atomic64_t *v) -{ - return READ_ONCE(v->counter); -} -static __always_inline void atomic64_set(atomic64_t *v, s64 i) -{ - WRITE_ONCE(v->counter, i); -} +#define ATOMIC64_INIT ATOMIC_INIT +#define arch_atomic64_read arch_atomic_read +#define arch_atomic64_set arch_atomic_set #endif /* @@ -53,7 +36,7 @@ static __always_inline void atomic64_set(atomic64_t *v, s64 i) */ #define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix) \ static __always_inline \ -void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \ +void arch_atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \ { \ __asm__ __volatile__ ( \ " amo" #asm_op "." 
#asm_type " zero, %1, %0" \ @@ -76,6 +59,12 @@ ATOMIC_OPS(sub, add, -i) ATOMIC_OPS(and, and, i) ATOMIC_OPS( or, or, i) ATOMIC_OPS(xor, xor, i) +ATOMIC_OPS(andnot, and, ~i) + +#define arch_atomic_andnot arch_atomic_andnot +#ifndef CONFIG_GENERIC_ATOMIC64 +#define arch_atomic64_andnot arch_atomic64_andnot +#endif #undef ATOMIC_OP #undef ATOMIC_OPS @@ -87,7 +76,7 @@ ATOMIC_OPS(xor, xor, i) */ #define ATOMIC_FETCH_OP(op, asm_op, I, asm_type, c_type, prefix) \ static __always_inline \ -c_type atomic##prefix##_fetch_##op##_relaxed(c_type i, \ +c_type arch_atomic##prefix##_fetch_##op##_relaxed(c_type i, \ atomic##prefix##_t *v) \ { \ register c_type ret; \ @@ -99,7 +88,7 @@ c_type atomic##prefix##_fetch_##op##_relaxed(c_type i, \ return ret; \ } \ static __always_inline \ -c_type atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v) \ +c_type arch_atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)\ { \ register c_type ret; \ __asm__ __volatile__ ( \ @@ -112,15 +101,16 @@ c_type atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v) \ #define ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_type, c_type, prefix) \ static __always_inline \ -c_type atomic##prefix##_##op##_return_relaxed(c_type i, \ +c_type arch_atomic##prefix##_##op##_return_relaxed(c_type i, \ atomic##prefix##_t *v) \ { \ - return atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I; \ + return arch_atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I; \ } \ static __always_inline \ -c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \ +c_type arch_atomic##prefix##_##op##_return(c_type i, \ + atomic##prefix##_t *v) \ { \ - return atomic##prefix##_fetch_##op(i, v) c_op I; \ + return arch_atomic##prefix##_fetch_##op(i, v) c_op I; \ } #ifdef CONFIG_GENERIC_ATOMIC64 @@ -138,26 +128,26 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \ ATOMIC_OPS(add, add, +, i) ATOMIC_OPS(sub, add, +, -i) -#define atomic_add_return_relaxed atomic_add_return_relaxed -#define atomic_sub_return_relaxed atomic_sub_return_relaxed -#define atomic_add_return atomic_add_return -#define atomic_sub_return atomic_sub_return +#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed +#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed +#define arch_atomic_add_return arch_atomic_add_return +#define arch_atomic_sub_return arch_atomic_sub_return -#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed -#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed -#define atomic_fetch_add atomic_fetch_add -#define atomic_fetch_sub atomic_fetch_sub +#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed +#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed +#define arch_atomic_fetch_add arch_atomic_fetch_add +#define arch_atomic_fetch_sub arch_atomic_fetch_sub #ifndef CONFIG_GENERIC_ATOMIC64 -#define atomic64_add_return_relaxed atomic64_add_return_relaxed -#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed -#define atomic64_add_return atomic64_add_return -#define atomic64_sub_return atomic64_sub_return - -#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed -#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed -#define atomic64_fetch_add atomic64_fetch_add -#define atomic64_fetch_sub atomic64_fetch_sub +#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed +#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed +#define arch_atomic64_add_return arch_atomic64_add_return +#define 
arch_atomic64_sub_return arch_atomic64_sub_return + +#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed +#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed +#define arch_atomic64_fetch_add arch_atomic64_fetch_add +#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub #endif #undef ATOMIC_OPS @@ -172,23 +162,28 @@ ATOMIC_OPS(sub, add, +, -i) #endif ATOMIC_OPS(and, and, i) +ATOMIC_OPS(andnot, and, ~i) ATOMIC_OPS( or, or, i) ATOMIC_OPS(xor, xor, i) -#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed -#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed -#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed -#define atomic_fetch_and atomic_fetch_and -#define atomic_fetch_or atomic_fetch_or -#define atomic_fetch_xor atomic_fetch_xor +#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed +#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed +#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed +#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed +#define arch_atomic_fetch_and arch_atomic_fetch_and +#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot +#define arch_atomic_fetch_or arch_atomic_fetch_or +#define arch_atomic_fetch_xor arch_atomic_fetch_xor #ifndef CONFIG_GENERIC_ATOMIC64 -#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed -#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed -#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed -#define atomic64_fetch_and atomic64_fetch_and -#define atomic64_fetch_or atomic64_fetch_or -#define atomic64_fetch_xor atomic64_fetch_xor +#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed +#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed +#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed +#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed +#define arch_atomic64_fetch_and arch_atomic64_fetch_and +#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot +#define arch_atomic64_fetch_or arch_atomic64_fetch_or +#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor #endif #undef ATOMIC_OPS @@ -197,7 +192,7 @@ ATOMIC_OPS(xor, xor, i) #undef ATOMIC_OP_RETURN /* This is required to provide a full barrier on success. */ -static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) +static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) { int prev, rc; @@ -214,10 +209,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) : "memory"); return prev; } -#define atomic_fetch_add_unless atomic_fetch_add_unless +#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless #ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { s64 prev; long rc; @@ -235,82 +230,10 @@ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u : "memory"); return prev; } -#define atomic64_fetch_add_unless atomic64_fetch_add_unless +#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless #endif -/* - * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as - * {cmp,}xchg and the operations that return, so they need a full barrier. 
- */ -#define ATOMIC_OP(c_t, prefix, size) \ -static __always_inline \ -c_t atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg_relaxed(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg_acquire(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg_release(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v, \ - c_t o, c_t n) \ -{ \ - return __cmpxchg_relaxed(&(v->counter), o, n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v, \ - c_t o, c_t n) \ -{ \ - return __cmpxchg_acquire(&(v->counter), o, n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v, \ - c_t o, c_t n) \ -{ \ - return __cmpxchg_release(&(v->counter), o, n, size); \ -} \ -static __always_inline \ -c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \ -{ \ - return __cmpxchg(&(v->counter), o, n, size); \ -} - -#ifdef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC_OPS() \ - ATOMIC_OP(int, , 4) -#else -#define ATOMIC_OPS() \ - ATOMIC_OP(int, , 4) \ - ATOMIC_OP(s64, 64, 8) -#endif - -ATOMIC_OPS() - -#define atomic_xchg_relaxed atomic_xchg_relaxed -#define atomic_xchg_acquire atomic_xchg_acquire -#define atomic_xchg_release atomic_xchg_release -#define atomic_xchg atomic_xchg -#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed -#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire -#define atomic_cmpxchg_release atomic_cmpxchg_release -#define atomic_cmpxchg atomic_cmpxchg - -#undef ATOMIC_OPS -#undef ATOMIC_OP - -static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) +static __always_inline int arch_atomic_sub_if_positive(atomic_t *v, int offset) { int prev, rc; @@ -327,11 +250,11 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) : "memory"); return prev - offset; } +#define arch_atomic_dec_if_positive(v) arch_atomic_sub_if_positive(v, 1) -#define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1) #ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset) +static __always_inline s64 arch_atomic64_sub_if_positive(atomic64_t *v, s64 offset) { s64 prev; long rc; @@ -349,8 +272,35 @@ static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset) : "memory"); return prev - offset; } +#define arch_atomic64_dec_if_positive(v) arch_atomic64_sub_if_positive(v, 1) +#endif + +#define arch_atomic_xchg_relaxed(v, new) \ + arch_xchg_relaxed(&((v)->counter), (new)) +#define arch_atomic_xchg_acquire(v, new) \ + arch_xchg_acquire(&((v)->counter), (new)) +#define arch_atomic_xchg_release(v, new) \ + arch_xchg_release(&((v)->counter), (new)) +#define arch_atomic_xchg(v, new) \ + arch_xchg(&((v)->counter), (new)) + +#define arch_atomic_cmpxchg_relaxed(v, old, new) \ + arch_cmpxchg_relaxed(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg_acquire(v, old, new) \ + arch_cmpxchg_acquire(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg_release(v, old, new) \ + arch_cmpxchg_release(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg(v, old, new) \ + arch_cmpxchg(&((v)->counter), 
(old), (new)) -#define atomic64_dec_if_positive(v) atomic64_sub_if_positive(v, 1) +#ifndef CONFIG_GENERIC_ATOMIC64 +#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed +#define arch_atomic64_xchg arch_atomic_xchg + +#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed +#define arch_atomic64_cmpxchg arch_atomic_cmpxchg #endif +#define ARCH_ATOMIC + #endif /* _ASM_RISCV_ATOMIC_H */ diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h index 262e5bb..16195a6 100644 --- a/arch/riscv/include/asm/cmpxchg.h +++ b/arch/riscv/include/asm/cmpxchg.h @@ -37,7 +37,7 @@ __ret; \ }) -#define xchg_relaxed(ptr, x) \ +#define arch_xchg_relaxed(ptr, x) \ ({ \ __typeof__(*(ptr)) _x_ = (x); \ (__typeof__(*(ptr))) __xchg_relaxed((ptr), \ @@ -52,16 +52,14 @@ switch (size) { \ case 4: \ __asm__ __volatile__ ( \ - " amoswap.w %0, %2, %1\n" \ - RISCV_ACQUIRE_BARRIER \ + " amoswap.w.aq %0, %2, %1\n" \ : "=r" (__ret), "+A" (*__ptr) \ : "r" (__new) \ : "memory"); \ break; \ case 8: \ __asm__ __volatile__ ( \ - " amoswap.d %0, %2, %1\n" \ - RISCV_ACQUIRE_BARRIER \ + " amoswap.d.aq %0, %2, %1\n" \ : "=r" (__ret), "+A" (*__ptr) \ : "r" (__new) \ : "memory"); \ @@ -72,7 +70,7 @@ __ret; \ }) -#define xchg_acquire(ptr, x) \ +#define arch_xchg_acquire(ptr, x) \ ({ \ __typeof__(*(ptr)) _x_ = (x); \ (__typeof__(*(ptr))) __xchg_acquire((ptr), \ @@ -87,16 +85,14 @@ switch (size) { \ case 4: \ __asm__ __volatile__ ( \ - RISCV_RELEASE_BARRIER \ - " amoswap.w %0, %2, %1\n" \ + " amoswap.w.rl %0, %2, %1\n" \ : "=r" (__ret), "+A" (*__ptr) \ : "r" (__new) \ : "memory"); \ break; \ case 8: \ __asm__ __volatile__ ( \ - RISCV_RELEASE_BARRIER \ - " amoswap.d %0, %2, %1\n" \ + " amoswap.d.rl %0, %2, %1\n" \ : "=r" (__ret), "+A" (*__ptr) \ : "r" (__new) \ : "memory"); \ @@ -107,7 +103,7 @@ __ret; \ }) -#define xchg_release(ptr, x) \ +#define arch_xchg_release(ptr, x) \ ({ \ __typeof__(*(ptr)) _x_ = (x); \ (__typeof__(*(ptr))) __xchg_release((ptr), \ @@ -140,24 +136,12 @@ __ret; \ }) -#define xchg(ptr, x) \ +#define arch_xchg(ptr, x) \ ({ \ __typeof__(*(ptr)) _x_ = (x); \ (__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr))); \ }) -#define xchg32(ptr, x) \ -({ \ - BUILD_BUG_ON(sizeof(*(ptr)) != 4); \ - xchg((ptr), (x)); \ -}) - -#define xchg64(ptr, x) \ -({ \ - BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ - xchg((ptr), (x)); \ -}) - /* * Atomic compare and exchange. Compare OLD with MEM, if identical, * store NEW in MEM. Return the initial value in MEM. 
Success is @@ -199,7 +183,7 @@ __ret; \ }) -#define cmpxchg_relaxed(ptr, o, n) \ +#define arch_cmpxchg_relaxed(ptr, o, n) \ ({ \ __typeof__(*(ptr)) _o_ = (o); \ __typeof__(*(ptr)) _n_ = (n); \ @@ -207,169 +191,4 @@ _o_, _n_, sizeof(*(ptr))); \ }) -#define __cmpxchg_acquire(ptr, old, new, size) \ -({ \ - __typeof__(ptr) __ptr = (ptr); \ - __typeof__(*(ptr)) __old = (old); \ - __typeof__(*(ptr)) __new = (new); \ - __typeof__(*(ptr)) __ret; \ - register unsigned int __rc; \ - switch (size) { \ - case 4: \ - __asm__ __volatile__ ( \ - "0: lr.w %0, %2\n" \ - " bne %0, %z3, 1f\n" \ - " sc.w %1, %z4, %2\n" \ - " bnez %1, 0b\n" \ - RISCV_ACQUIRE_BARRIER \ - "1:\n" \ - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ - : "rJ" ((long)__old), "rJ" (__new) \ - : "memory"); \ - break; \ - case 8: \ - __asm__ __volatile__ ( \ - "0: lr.d %0, %2\n" \ - " bne %0, %z3, 1f\n" \ - " sc.d %1, %z4, %2\n" \ - " bnez %1, 0b\n" \ - RISCV_ACQUIRE_BARRIER \ - "1:\n" \ - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ - : "rJ" (__old), "rJ" (__new) \ - : "memory"); \ - break; \ - default: \ - BUILD_BUG(); \ - } \ - __ret; \ -}) - -#define cmpxchg_acquire(ptr, o, n) \ -({ \ - __typeof__(*(ptr)) _o_ = (o); \ - __typeof__(*(ptr)) _n_ = (n); \ - (__typeof__(*(ptr))) __cmpxchg_acquire((ptr), \ - _o_, _n_, sizeof(*(ptr))); \ -}) - -#define __cmpxchg_release(ptr, old, new, size) \ -({ \ - __typeof__(ptr) __ptr = (ptr); \ - __typeof__(*(ptr)) __old = (old); \ - __typeof__(*(ptr)) __new = (new); \ - __typeof__(*(ptr)) __ret; \ - register unsigned int __rc; \ - switch (size) { \ - case 4: \ - __asm__ __volatile__ ( \ - RISCV_RELEASE_BARRIER \ - "0: lr.w %0, %2\n" \ - " bne %0, %z3, 1f\n" \ - " sc.w %1, %z4, %2\n" \ - " bnez %1, 0b\n" \ - "1:\n" \ - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ - : "rJ" ((long)__old), "rJ" (__new) \ - : "memory"); \ - break; \ - case 8: \ - __asm__ __volatile__ ( \ - RISCV_RELEASE_BARRIER \ - "0: lr.d %0, %2\n" \ - " bne %0, %z3, 1f\n" \ - " sc.d %1, %z4, %2\n" \ - " bnez %1, 0b\n" \ - "1:\n" \ - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ - : "rJ" (__old), "rJ" (__new) \ - : "memory"); \ - break; \ - default: \ - BUILD_BUG(); \ - } \ - __ret; \ -}) - -#define cmpxchg_release(ptr, o, n) \ -({ \ - __typeof__(*(ptr)) _o_ = (o); \ - __typeof__(*(ptr)) _n_ = (n); \ - (__typeof__(*(ptr))) __cmpxchg_release((ptr), \ - _o_, _n_, sizeof(*(ptr))); \ -}) - -#define __cmpxchg(ptr, old, new, size) \ -({ \ - __typeof__(ptr) __ptr = (ptr); \ - __typeof__(*(ptr)) __old = (old); \ - __typeof__(*(ptr)) __new = (new); \ - __typeof__(*(ptr)) __ret; \ - register unsigned int __rc; \ - switch (size) { \ - case 4: \ - __asm__ __volatile__ ( \ - "0: lr.w %0, %2\n" \ - " bne %0, %z3, 1f\n" \ - " sc.w.rl %1, %z4, %2\n" \ - " bnez %1, 0b\n" \ - " fence rw, rw\n" \ - "1:\n" \ - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ - : "rJ" ((long)__old), "rJ" (__new) \ - : "memory"); \ - break; \ - case 8: \ - __asm__ __volatile__ ( \ - "0: lr.d %0, %2\n" \ - " bne %0, %z3, 1f\n" \ - " sc.d.rl %1, %z4, %2\n" \ - " bnez %1, 0b\n" \ - " fence rw, rw\n" \ - "1:\n" \ - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ - : "rJ" (__old), "rJ" (__new) \ - : "memory"); \ - break; \ - default: \ - BUILD_BUG(); \ - } \ - __ret; \ -}) - -#define cmpxchg(ptr, o, n) \ -({ \ - __typeof__(*(ptr)) _o_ = (o); \ - __typeof__(*(ptr)) _n_ = (n); \ - (__typeof__(*(ptr))) __cmpxchg((ptr), \ - _o_, _n_, sizeof(*(ptr))); \ -}) - -#define cmpxchg_local(ptr, o, n) \ - (__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr)))) - -#define cmpxchg32(ptr, o, n) \ 
-({ \ - BUILD_BUG_ON(sizeof(*(ptr)) != 4); \ - cmpxchg((ptr), (o), (n)); \ -}) - -#define cmpxchg32_local(ptr, o, n) \ -({ \ - BUILD_BUG_ON(sizeof(*(ptr)) != 4); \ - cmpxchg_relaxed((ptr), (o), (n)) \ -}) - -#define cmpxchg64(ptr, o, n) \ -({ \ - BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ - cmpxchg((ptr), (o), (n)); \ -}) - -#define cmpxchg64_local(ptr, o, n) \ -({ \ - BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ - cmpxchg_relaxed((ptr), (o), (n)); \ -}) - #endif /* _ASM_RISCV_CMPXCHG_H */
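
[Editor's illustration, not part of the posted series]

Patch 1 relies on a property of the generic (lock-based) atomic64
implementation: because every operation takes a spinlock, every operation is
already fully ordered, so the _relaxed/_acquire/_release variants and the
inc/dec helpers can simply alias or wrap the plain operations. The following
stand-alone user-space sketch shows that pattern with a pthread mutex standing
in for the kernel's hashed spinlocks; the my_atomic64_* names are invented for
illustration and are not the kernel's API.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
	int64_t counter;
	pthread_mutex_t lock;
} my_atomic64_t;

static int64_t my_atomic64_add_return(int64_t i, my_atomic64_t *v)
{
	int64_t ret;

	pthread_mutex_lock(&v->lock);	/* the lock makes the op fully ordered */
	v->counter += i;
	ret = v->counter;
	pthread_mutex_unlock(&v->lock);
	return ret;
}

/* Every ordering variant collapses onto the fully ordered implementation. */
#define my_atomic64_add_return_relaxed	my_atomic64_add_return
#define my_atomic64_add_return_acquire	my_atomic64_add_return
#define my_atomic64_add_return_release	my_atomic64_add_return

/* Derived helper, mirroring the atomic64_inc_return() wrapper added above. */
static int64_t my_atomic64_inc_return(my_atomic64_t *v)
{
	return my_atomic64_add_return(1, v);
}

int main(void)
{
	my_atomic64_t v = { 0, PTHREAD_MUTEX_INITIALIZER };

	printf("%lld\n", (long long)my_atomic64_inc_return(&v));		/* 1 */
	printf("%lld\n", (long long)my_atomic64_add_return_relaxed(41, &v));	/* 42 */
	return 0;
}

Build with "gcc -pthread"; the point is only that one fully ordered primitive
can back every ordering variant, which is what the atomic64.h wrappers do.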
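[Editor's illustration, not part of the posted series]

Patch 2 moves riscv to ARCH_ATOMIC: the architecture provides only the
arch_atomic_*() / arch_xchg()-style primitives, and the generated
asm-generic/atomic-instrumented.h layer wraps each one with a KASAN/KCSAN
instrumentation call before forwarding. Below is a minimal user-space sketch
of that wrapping shape; demo_xchg, arch_demo_xchg and instrument_write are
stand-ins (not the kernel's helpers), and GNU C statement expressions are
assumed, as in the kernel macros.

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-in for the kernel's instrument_atomic_write(): just log the access. */
static void instrument_write(volatile void *ptr, size_t size)
{
	printf("instrumented write of %zu bytes at %p\n", size, (void *)ptr);
}

/* "Architecture" primitive: a plain fully ordered exchange. */
#define arch_demo_xchg(ptr, new) \
	atomic_exchange((ptr), (new))

/* Generated-style wrapper: instrument first, then forward to the arch op. */
#define demo_xchg(ptr, ...) \
({ \
	__typeof__(ptr) __ai_ptr = (ptr); \
	instrument_write(__ai_ptr, sizeof(*__ai_ptr)); \
	arch_demo_xchg(__ai_ptr, __VA_ARGS__); \
})

int main(void)
{
	_Atomic int v = 1;

	printf("old value: %d\n", demo_xchg(&v, 2));
	printf("new value: %d\n", atomic_load(&v));
	return 0;
}

The split matters because the sanitizers only need to be taught about the
single generated wrapper layer, while each arch only has to supply the raw
arch_* operations.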