From patchwork Mon May 7 09:54:03 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrea Parri
X-Patchwork-Id: 10383801
Date: Mon, 7 May 2018 11:54:03 +0200
From: Andrea Parri
To: Ingo Molnar
Cc: Mark Rutland, Peter Zijlstra, Michael Ellerman, catalin.marinas@arm.com,
	boqun.feng@gmail.com, Palmer Dabbelt, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, "Paul E. McKenney", Paul Mackerras,
	dvyukov@google.com, Albert Ou, Benjamin Herrenschmidt,
	aryabinin@virtuozzo.com, Andrew Morton, Linus Torvalds,
	Thomas Gleixner, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] locking/atomics: Simplify the op definitions in atomic.h some more
Message-ID: <20180507095403.GA19583@andrea>
References: <20180504173937.25300-1-mark.rutland@arm.com>
 <20180504173937.25300-2-mark.rutland@arm.com>
 <20180504180105.GS12217@hirez.programming.kicks-ass.net>
 <20180504180909.dnhfflibjwywnm4l@lakrids.cambridge.arm.com>
 <20180505081100.nsyrqrpzq2vd27bk@gmail.com>
 <20180505083635.622xmcvb42dw5xxh@gmail.com>
 <20180506141249.GA28723@andrea>
 <20180506145726.y4jxhvfolzvbuft5@gmail.com>
Content-Disposition: inline
In-Reply-To: <20180506145726.y4jxhvfolzvbuft5@gmail.com>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Sun, May 06, 2018 at 04:57:27PM +0200, Ingo Molnar wrote:
> 
> * Andrea Parri wrote:
> 
> > Hi Ingo,
> > 
> > > From 5affbf7e91901143f84f1b2ca64f4afe70e210fd Mon Sep 17 00:00:00 2001
> > > From: Ingo Molnar
> > > Date: Sat, 5 May 2018 10:23:23 +0200
> > > Subject: [PATCH] locking/atomics: Simplify the op definitions in atomic.h some more
> > > 
> > > Before:
> > > 
> > >  #ifndef atomic_fetch_dec_relaxed
> > >  # ifndef atomic_fetch_dec
> > >  #  define atomic_fetch_dec(v)			atomic_fetch_sub(1, (v))
> > >  #  define atomic_fetch_dec_relaxed(v)		atomic_fetch_sub_relaxed(1, (v))
> > >  #  define atomic_fetch_dec_acquire(v)		atomic_fetch_sub_acquire(1, (v))
> > >  #  define atomic_fetch_dec_release(v)		atomic_fetch_sub_release(1, (v))
> > >  # else
> > >  #  define atomic_fetch_dec_relaxed		atomic_fetch_dec
> > >  #  define atomic_fetch_dec_acquire		atomic_fetch_dec
> > >  #  define atomic_fetch_dec_release		atomic_fetch_dec
> > >  # endif
> > >  #else
> > >  # ifndef atomic_fetch_dec_acquire
> > >  #  define atomic_fetch_dec_acquire(...)	__atomic_op_acquire(atomic_fetch_dec, __VA_ARGS__)
> > >  # endif
> > >  # ifndef atomic_fetch_dec_release
> > >  #  define atomic_fetch_dec_release(...)	__atomic_op_release(atomic_fetch_dec, __VA_ARGS__)
> > >  # endif
> > >  # ifndef atomic_fetch_dec
> > >  #  define atomic_fetch_dec(...)		__atomic_op_fence(atomic_fetch_dec, __VA_ARGS__)
> > >  # endif
> > >  #endif
> > > 
> > > After:
> > > 
> > >  #ifndef atomic_fetch_dec_relaxed
> > >  # ifndef atomic_fetch_dec
> > >  #  define atomic_fetch_dec(v)			atomic_fetch_sub(1, (v))
> > >  #  define atomic_fetch_dec_relaxed(v)		atomic_fetch_sub_relaxed(1, (v))
> > >  #  define atomic_fetch_dec_acquire(v)		atomic_fetch_sub_acquire(1, (v))
> > >  #  define atomic_fetch_dec_release(v)		atomic_fetch_sub_release(1, (v))
> > >  # else
> > >  #  define atomic_fetch_dec_relaxed		atomic_fetch_dec
> > >  #  define atomic_fetch_dec_acquire		atomic_fetch_dec
> > >  #  define atomic_fetch_dec_release		atomic_fetch_dec
> > >  # endif
> > >  #else
> > >  # ifndef atomic_fetch_dec
> > >  #  define atomic_fetch_dec(...)		__atomic_op_fence(atomic_fetch_dec, __VA_ARGS__)
> > >  #  define atomic_fetch_dec_acquire(...)	__atomic_op_acquire(atomic_fetch_dec, __VA_ARGS__)
> > >  #  define atomic_fetch_dec_release(...)	__atomic_op_release(atomic_fetch_dec, __VA_ARGS__)
> > >  # endif
> > >  #endif
> > > 
> > > The idea is that because we already group these APIs by certain defines
> > > such as atomic_fetch_dec_relaxed and atomic_fetch_dec in the primary
> > > branches - we can do the same in the secondary branch as well.
> > > 
> > > ( Also remove some unnecessarily duplicate comments, as the API
> > >   group defines are now pretty much self-documenting. )
> > > 
> > > No change in functionality.
> > > 
> > > Cc: Peter Zijlstra
> > > Cc: Linus Torvalds
> > > Cc: Andrew Morton
> > > Cc: Thomas Gleixner
> > > Cc: Paul E. McKenney
> > > Cc: Will Deacon
> > > Cc: linux-kernel@vger.kernel.org
> > > Signed-off-by: Ingo Molnar
> > 
> > This breaks compilation on RISC-V. (For some of its atomics, the arch
> > currently defines the _relaxed and the full variants and it relies on
> > the generic definitions for the _acquire and the _release variants.)
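
To make the failure mode concrete: the generic _acquire/_release variants
are built on top of the arch's _relaxed variant by the wrappers in
include/linux/atomic.h, which look roughly like this (a sketch, modulo
the exact barriers used):

  /*
   * Turn op##_relaxed() into an acquire operation: do the relaxed op,
   * then order it against all later memory accesses.
   */
  #define __atomic_op_acquire(op, args...)				\
  ({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
  })

  /*
   * Turn op##_relaxed() into a release operation: order it against all
   * earlier memory accesses, then do the relaxed op.
   */
  #define __atomic_op_release(op, args...)				\
  ({									\
	smp_mb__before_atomic();					\
	op##_relaxed(args);						\
  })

With the "After" grouping above, an arch that defines both the _relaxed
and the full variant of an op - as RISC-V does for, e.g., atomic_fetch_add -
takes the #else branch, where the inner "# ifndef atomic_fetch_dec" test
fails; the _acquire/_release variants are then never mapped to these
wrappers, and any user of them fails to build.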

> I don't have cross-compilation for RISC-V, which is a relatively new arch.
> (Is there any RISC-V set of cross-compilation tools on kernel.org somewhere?)

I'm using the toolchain from:

  https://riscv.org/software-tools/

(adding Palmer and Albert in Cc:)


> Could you please send a patch that defines those variants against Linus's
> tree, like the PowerPC patch that does something similar:
>
>   0476a632cb3a: locking/atomics/powerpc: Move cmpxchg helpers to asm/cmpxchg.h and define the full set of cmpxchg APIs
>
> ?

Yes, please see below for a first RFC.

(BTW, get_maintainer.pl says that that patch missed Benjamin, Paul,
Michael and linuxppc-dev@lists.ozlabs.org: FWIW, I'm Cc-ing the
maintainers here.)
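
To see why the RFC does not change code generation, consider what one of
the new definitions expands to (a sketch, using atomic_add_return_acquire()
on some atomic_t v as an example):

  atomic_add_return_acquire(1, &v)
    -> __atomic_op_acquire(atomic_add_return, 1, &v)
    -> ({ int __ret = atomic_add_return_relaxed(1, &v);
          smp_mb__after_atomic();
          __ret; })

which is exactly the expansion the generic atomic.h produces for these
variants today; the patch merely moves the definitions into the arch
header so that they survive the regrouping.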

Andrea

From 411f05a44e0b53a435331b977ff864fba7501a95 Mon Sep 17 00:00:00 2001
From: Andrea Parri
Date: Mon, 7 May 2018 10:59:20 +0200
Subject: [RFC PATCH] riscv/atomic: Define the _acquire/_release variants

In preparation for Ingo's renovation of the generic atomic.h header [1],
define the _acquire/_release variants in the arch's header.

No change in code generation.

[1] http://lkml.kernel.org/r/20180505081100.nsyrqrpzq2vd27bk@gmail.com
    http://lkml.kernel.org/r/20180505083635.622xmcvb42dw5xxh@gmail.com

Suggested-by: Ingo Molnar
Signed-off-by: Andrea Parri
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: linux-riscv@lists.infradead.org
---
 arch/riscv/include/asm/atomic.h | 88 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 855115ace98c8..7cbd8033dfb5d 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -153,22 +153,54 @@ ATOMIC_OPS(sub, add, +, -i)
 #define atomic_add_return_relaxed	atomic_add_return_relaxed
 #define atomic_sub_return_relaxed	atomic_sub_return_relaxed
+#define atomic_add_return_acquire(...)					\
+	__atomic_op_acquire(atomic_add_return, __VA_ARGS__)
+#define atomic_sub_return_acquire(...)					\
+	__atomic_op_acquire(atomic_sub_return, __VA_ARGS__)
+#define atomic_add_return_release(...)					\
+	__atomic_op_release(atomic_add_return, __VA_ARGS__)
+#define atomic_sub_return_release(...)					\
+	__atomic_op_release(atomic_sub_return, __VA_ARGS__)
 #define atomic_add_return		atomic_add_return
 #define atomic_sub_return		atomic_sub_return
 
 #define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
 #define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
+#define atomic_fetch_add_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_add, __VA_ARGS__)
+#define atomic_fetch_sub_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_sub, __VA_ARGS__)
+#define atomic_fetch_add_release(...)					\
+	__atomic_op_release(atomic_fetch_add, __VA_ARGS__)
+#define atomic_fetch_sub_release(...)					\
+	__atomic_op_release(atomic_fetch_sub, __VA_ARGS__)
 #define atomic_fetch_add		atomic_fetch_add
 #define atomic_fetch_sub		atomic_fetch_sub
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define atomic64_add_return_relaxed	atomic64_add_return_relaxed
 #define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
+#define atomic64_add_return_acquire(...)				\
+	__atomic_op_acquire(atomic64_add_return, __VA_ARGS__)
+#define atomic64_sub_return_acquire(...)				\
+	__atomic_op_acquire(atomic64_sub_return, __VA_ARGS__)
+#define atomic64_add_return_release(...)				\
+	__atomic_op_release(atomic64_add_return, __VA_ARGS__)
+#define atomic64_sub_return_release(...)				\
+	__atomic_op_release(atomic64_sub_return, __VA_ARGS__)
 #define atomic64_add_return		atomic64_add_return
 #define atomic64_sub_return		atomic64_sub_return
 
 #define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
 #define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
+#define atomic64_fetch_add_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_add, __VA_ARGS__)
+#define atomic64_fetch_sub_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_sub, __VA_ARGS__)
+#define atomic64_fetch_add_release(...)					\
+	__atomic_op_release(atomic64_fetch_add, __VA_ARGS__)
+#define atomic64_fetch_sub_release(...)					\
+	__atomic_op_release(atomic64_fetch_sub, __VA_ARGS__)
 #define atomic64_fetch_add		atomic64_fetch_add
 #define atomic64_fetch_sub		atomic64_fetch_sub
 #endif
@@ -191,6 +223,18 @@ ATOMIC_OPS(xor, xor, i)
 #define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
 #define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
 #define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
+#define atomic_fetch_and_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_and, __VA_ARGS__)
+#define atomic_fetch_or_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_or, __VA_ARGS__)
+#define atomic_fetch_xor_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_xor, __VA_ARGS__)
+#define atomic_fetch_and_release(...)					\
+	__atomic_op_release(atomic_fetch_and, __VA_ARGS__)
+#define atomic_fetch_or_release(...)					\
+	__atomic_op_release(atomic_fetch_or, __VA_ARGS__)
+#define atomic_fetch_xor_release(...)					\
+	__atomic_op_release(atomic_fetch_xor, __VA_ARGS__)
 #define atomic_fetch_and		atomic_fetch_and
 #define atomic_fetch_or			atomic_fetch_or
 #define atomic_fetch_xor		atomic_fetch_xor
@@ -199,6 +243,18 @@ ATOMIC_OPS(xor, xor, i)
 #define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
 #define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
 #define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
+#define atomic64_fetch_and_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_and, __VA_ARGS__)
+#define atomic64_fetch_or_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_or, __VA_ARGS__)
+#define atomic64_fetch_xor_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_xor, __VA_ARGS__)
+#define atomic64_fetch_and_release(...)					\
+	__atomic_op_release(atomic64_fetch_and, __VA_ARGS__)
+#define atomic64_fetch_or_release(...)					\
+	__atomic_op_release(atomic64_fetch_or, __VA_ARGS__)
+#define atomic64_fetch_xor_release(...)					\
+	__atomic_op_release(atomic64_fetch_xor, __VA_ARGS__)
 #define atomic64_fetch_and		atomic64_fetch_and
 #define atomic64_fetch_or		atomic64_fetch_or
 #define atomic64_fetch_xor		atomic64_fetch_xor
@@ -290,22 +346,54 @@ ATOMIC_OPS(dec, add, +, -1)
 #define atomic_inc_return_relaxed	atomic_inc_return_relaxed
 #define atomic_dec_return_relaxed	atomic_dec_return_relaxed
+#define atomic_inc_return_acquire(...)					\
+	__atomic_op_acquire(atomic_inc_return, __VA_ARGS__)
+#define atomic_dec_return_acquire(...)					\
+	__atomic_op_acquire(atomic_dec_return, __VA_ARGS__)
+#define atomic_inc_return_release(...)					\
+	__atomic_op_release(atomic_inc_return, __VA_ARGS__)
+#define atomic_dec_return_release(...)					\
+	__atomic_op_release(atomic_dec_return, __VA_ARGS__)
 #define atomic_inc_return		atomic_inc_return
 #define atomic_dec_return		atomic_dec_return
 
 #define atomic_fetch_inc_relaxed	atomic_fetch_inc_relaxed
 #define atomic_fetch_dec_relaxed	atomic_fetch_dec_relaxed
+#define atomic_fetch_inc_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_inc, __VA_ARGS__)
+#define atomic_fetch_dec_acquire(...)					\
+	__atomic_op_acquire(atomic_fetch_dec, __VA_ARGS__)
+#define atomic_fetch_inc_release(...)					\
+	__atomic_op_release(atomic_fetch_inc, __VA_ARGS__)
+#define atomic_fetch_dec_release(...)					\
+	__atomic_op_release(atomic_fetch_dec, __VA_ARGS__)
 #define atomic_fetch_inc		atomic_fetch_inc
 #define atomic_fetch_dec		atomic_fetch_dec
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define atomic64_inc_return_relaxed	atomic64_inc_return_relaxed
 #define atomic64_dec_return_relaxed	atomic64_dec_return_relaxed
+#define atomic64_inc_return_acquire(...)				\
+	__atomic_op_acquire(atomic64_inc_return, __VA_ARGS__)
+#define atomic64_dec_return_acquire(...)				\
+	__atomic_op_acquire(atomic64_dec_return, __VA_ARGS__)
+#define atomic64_inc_return_release(...)				\
+	__atomic_op_release(atomic64_inc_return, __VA_ARGS__)
+#define atomic64_dec_return_release(...)				\
+	__atomic_op_release(atomic64_dec_return, __VA_ARGS__)
 #define atomic64_inc_return		atomic64_inc_return
 #define atomic64_dec_return		atomic64_dec_return
 
 #define atomic64_fetch_inc_relaxed	atomic64_fetch_inc_relaxed
 #define atomic64_fetch_dec_relaxed	atomic64_fetch_dec_relaxed
+#define atomic64_fetch_inc_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_inc, __VA_ARGS__)
+#define atomic64_fetch_dec_acquire(...)					\
+	__atomic_op_acquire(atomic64_fetch_dec, __VA_ARGS__)
+#define atomic64_fetch_inc_release(...)					\
+	__atomic_op_release(atomic64_fetch_inc, __VA_ARGS__)
+#define atomic64_fetch_dec_release(...)					\
+	__atomic_op_release(atomic64_fetch_dec, __VA_ARGS__)
 #define atomic64_fetch_inc		atomic64_fetch_inc
 #define atomic64_fetch_dec		atomic64_fetch_dec
 #endif
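
For reference, a quick way to exercise the new variants (a hypothetical
smoke test, not part of the patch; the function name and placement are
made up for illustration - any riscv build that touches these ops would do):

  #include <linux/atomic.h>

  static atomic_t v = ATOMIC_INIT(0);

  /*
   * Touch a few of the newly defined variants so that a riscv build
   * fails loudly if any of them is still missing.
   */
  static inline void atomic_variants_smoke_test(void)
  {
	int r;

	r = atomic_add_return_acquire(1, &v);	/* relaxed op + acquire barrier */
	r = atomic_sub_return_release(1, &v);	/* release barrier + relaxed op */
	r = atomic_fetch_inc_acquire(&v);
	r = atomic_fetch_dec_release(&v);
	(void)r;
  }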