From patchwork Sat May 5 08:54:45 2018
X-Patchwork-Submitter: Ingo Molnar
X-Patchwork-Id: 10382051
Date: Sat, 5 May 2018 10:54:45 +0200
From: Ingo Molnar
To: Mark Rutland
Subject: [PATCH] locking/atomics: Combine the atomic_andnot() and atomic64_andnot() API definitions
Message-ID: <20180505085445.cmdnqh6xpnpfoqzb@gmail.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>
 <20180504173937.25300-2-mark.rutland@arm.com>
 <20180504180105.GS12217@hirez.programming.kicks-ass.net>
 <20180504180909.dnhfflibjwywnm4l@lakrids.cambridge.arm.com>
 <20180505081100.nsyrqrpzq2vd27bk@gmail.com>
 <20180505083635.622xmcvb42dw5xxh@gmail.com>
In-Reply-To: <20180505083635.622xmcvb42dw5xxh@gmail.com>
Cc: Peter Zijlstra, catalin.marinas@arm.com, boqun.feng@gmail.com,
 will.deacon@arm.com, linux-kernel@vger.kernel.org, "Paul E. McKenney",
 dvyukov@google.com, aryabinin@virtuozzo.com, Andrew Morton,
 Linus Torvalds, Thomas Gleixner, linux-arm-kernel@lists.infradead.org

* Ingo Molnar wrote:

> Note that the simplest definition block is now:
>
>   #ifndef atomic_cmpxchg_relaxed
>   # define atomic_cmpxchg_relaxed          atomic_cmpxchg
>   # define atomic_cmpxchg_acquire          atomic_cmpxchg
>   # define atomic_cmpxchg_release          atomic_cmpxchg
>   #else
>   # ifndef atomic_cmpxchg
>   # define atomic_cmpxchg(...)             __atomic_op_fence(atomic_cmpxchg, __VA_ARGS__)
>   # define atomic_cmpxchg_acquire(...)     __atomic_op_acquire(atomic_cmpxchg, __VA_ARGS__)
>   # define atomic_cmpxchg_release(...)     __atomic_op_release(atomic_cmpxchg, __VA_ARGS__)
>   # endif
>   #endif
>
> ... which is very readable!
>
> The total linecount reduction of the two patches is pretty significant as well:
>
>  include/linux/atomic.h | 1063 ++++++++++++++++--------------------------------
>  1 file changed, 343 insertions(+), 720 deletions(-)

BTW., I noticed two asymmetries while cleaning up this code:

==============>

#ifdef atomic_andnot

#ifndef atomic_fetch_andnot_relaxed
# define atomic_fetch_andnot_relaxed        atomic_fetch_andnot
# define atomic_fetch_andnot_acquire        atomic_fetch_andnot
# define atomic_fetch_andnot_release        atomic_fetch_andnot
#else
# ifndef atomic_fetch_andnot
# define atomic_fetch_andnot(...)           __atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
# define atomic_fetch_andnot_acquire(...)   __atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
# define atomic_fetch_andnot_release(...)   __atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
# endif
#endif

#endif /* atomic_andnot */

...
#ifdef atomic64_andnot

#ifndef atomic64_fetch_andnot_relaxed
# define atomic64_fetch_andnot_relaxed        atomic64_fetch_andnot
# define atomic64_fetch_andnot_acquire        atomic64_fetch_andnot
# define atomic64_fetch_andnot_release        atomic64_fetch_andnot
#else
# ifndef atomic64_fetch_andnot
# define atomic64_fetch_andnot(...)           __atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
# define atomic64_fetch_andnot_acquire(...)   __atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
# define atomic64_fetch_andnot_release(...)   __atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
# endif
#endif

#endif /* atomic64_andnot */

<==============

Why do these two API groups have an outer condition, i.e.:

 #ifdef atomic_andnot
 ...
 #endif /* atomic_andnot */
 ...
 #ifdef atomic64_andnot
 ...
 #endif /* atomic64_andnot */

It is because the base APIs themselves are optional and have a default implementation:

 #ifndef atomic_andnot
 ...
 #endif
 ...
 #ifndef atomic64_andnot
 ...
 #endif

I think it's overall cleaner if we combine them into continuous blocks, defining all
variants of an API group in a single place:

 #ifdef atomic_andnot
 #else
 #endif

etc.

The patch below implements this.

Thanks,

	Ingo

===================>
From f5efafa83af8c46b9e81b010b46caeeadb450179 Mon Sep 17 00:00:00 2001
From: Ingo Molnar
Date: Sat, 5 May 2018 10:46:41 +0200
Subject: [PATCH] locking/atomics: Combine the atomic_andnot() and
 atomic64_andnot() API definitions

The atomic_andnot() and atomic64_andnot() APIs are defined in four separate
groups spread out in the atomic.h header:

 #ifdef atomic_andnot
 ...
 #endif /* atomic_andnot */
 ...
 #ifndef atomic_andnot
 ...
 #endif
 ...
 #ifdef atomic64_andnot
 ...
 #endif /* atomic64_andnot */
 ...
 #ifndef atomic64_andnot
 ...
 #endif

Unify them into two groups:

 #ifdef atomic_andnot
 #else
 #endif
 ...
 #ifdef atomic64_andnot
 #else
 #endif

so that each API group is defined in a single place within the header.

Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Paul E. McKenney
Cc: Will Deacon
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar
---
 include/linux/atomic.h | 72 +++++++++++++++++++++++++-------------------------
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 352ecc72d7f5..1176cf7c6f03 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -205,22 +205,6 @@
 # endif
 #endif
 
-#ifdef atomic_andnot
-
-#ifndef atomic_fetch_andnot_relaxed
-# define atomic_fetch_andnot_relaxed        atomic_fetch_andnot
-# define atomic_fetch_andnot_acquire        atomic_fetch_andnot
-# define atomic_fetch_andnot_release        atomic_fetch_andnot
-#else
-# ifndef atomic_fetch_andnot
-# define atomic_fetch_andnot(...)           __atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
-# define atomic_fetch_andnot_acquire(...)   __atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
-# define atomic_fetch_andnot_release(...)   __atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
-# endif
-#endif
-
-#endif /* atomic_andnot */
-
 #ifndef atomic_fetch_xor_relaxed
 # define atomic_fetch_xor_relaxed           atomic_fetch_xor
 # define atomic_fetch_xor_acquire           atomic_fetch_xor
@@ -338,7 +322,22 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 # define atomic_inc_not_zero(v)             atomic_add_unless((v), 1, 0)
 #endif
 
-#ifndef atomic_andnot
+#ifdef atomic_andnot
+
+#ifndef atomic_fetch_andnot_relaxed
+# define atomic_fetch_andnot_relaxed        atomic_fetch_andnot
+# define atomic_fetch_andnot_acquire        atomic_fetch_andnot
+# define atomic_fetch_andnot_release        atomic_fetch_andnot
+#else
+# ifndef atomic_fetch_andnot
+# define atomic_fetch_andnot(...)           __atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
+# define atomic_fetch_andnot_acquire(...)   __atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
+# define atomic_fetch_andnot_release(...)   __atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
+# endif
+#endif
+
+#else /* !atomic_andnot: */
+
 static inline void atomic_andnot(int i, atomic_t *v)
 {
 	atomic_and(~i, v);
@@ -363,7 +362,8 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
 {
 	return atomic_fetch_and_release(~i, v);
 }
-#endif
+
+#endif /* !atomic_andnot */
 
 /**
  * atomic_inc_not_zero_hint - increment if not null
@@ -600,22 +600,6 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # endif
 #endif
 
-#ifdef atomic64_andnot
-
-#ifndef atomic64_fetch_andnot_relaxed
-# define atomic64_fetch_andnot_relaxed      atomic64_fetch_andnot
-# define atomic64_fetch_andnot_acquire      atomic64_fetch_andnot
-# define atomic64_fetch_andnot_release      atomic64_fetch_andnot
-#else
-# ifndef atomic64_fetch_andnot
-# define atomic64_fetch_andnot(...)         __atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
-# define atomic64_fetch_andnot_acquire(...) __atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
-# define atomic64_fetch_andnot_release(...) __atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
-# endif
-#endif
-
-#endif /* atomic64_andnot */
-
 #ifndef atomic64_fetch_xor_relaxed
 # define atomic64_fetch_xor_relaxed         atomic64_fetch_xor
 # define atomic64_fetch_xor_acquire         atomic64_fetch_xor
@@ -672,7 +656,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_try_cmpxchg_release       atomic64_try_cmpxchg
 #endif
 
-#ifndef atomic64_andnot
+#ifdef atomic64_andnot
+
+#ifndef atomic64_fetch_andnot_relaxed
+# define atomic64_fetch_andnot_relaxed      atomic64_fetch_andnot
+# define atomic64_fetch_andnot_acquire      atomic64_fetch_andnot
+# define atomic64_fetch_andnot_release      atomic64_fetch_andnot
+#else
+# ifndef atomic64_fetch_andnot
+# define atomic64_fetch_andnot(...)         __atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
+# define atomic64_fetch_andnot_acquire(...) __atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
+# define atomic64_fetch_andnot_release(...) __atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
+# endif
+#endif
+
+#else /* !atomic64_andnot: */
+
 static inline void atomic64_andnot(long long i, atomic64_t *v)
 {
 	atomic64_and(~i, v);
@@ -697,7 +696,8 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v
 {
 	return atomic64_fetch_and_release(~i, v);
 }
-#endif
+
+#endif /* !atomic64_andnot */
 
 #define atomic64_cond_read_relaxed(v, c)    smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic64_cond_read_acquire(v, c)    smp_cond_load_acquire(&(v)->counter, (c))