From patchwork Fri Oct 14 19:10:55 2016
From: Colin Vidal
To: "Reshetova, Elena", David Windsor, AKASHI Takahiro, Hans Liljestrand,
	kernel-hardening@lists.openwall.com
Cc: Colin Vidal
Date: Fri, 14 Oct 2016 21:10:55 +0200
Message-Id: <1476472255-15851-1-git-send-email-colin@cvidal.org>
Subject: [kernel-hardening] include/asm-generic/atomic-long.h: Reordering
	atomic_*_wrap macros

Hi Elena,

I don't know whether this helps for the v2, but here are the (tiny)
modifications I made to the generic part so that the kernel builds on ARM
without CONFIG_HARDENED_ATOMIC (of course, there are also some modifications
in arch/arm, but they are not the subject here, and it is only a prototype
for now). It does not break x86 builds.

It is basically a reordering of, and a guard around, the atomic_*_wrap
macros, to avoid implicit-declaration and redefinition errors about them
when CONFIG_HARDENED_ATOMIC is unset.
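To make the ordering issue concrete, here is a minimal standalone sketch
(the names are made up for illustration only, this is not kernel code): the
fallback #defines must be visible before the first static inline function
that uses the *_wrap names, otherwise the compiler sees a call to an
undeclared function.

/* sketch.c -- illustration only; CONFIG_FEATURE stands in for an unset
 * Kconfig option such as CONFIG_HARDENED_ATOMIC. */
#include <stdio.h>

/*
 * Fallback mapping: it must appear *before* any use below.  If this block
 * sat at the bottom of the header (as the old #ifndef block did in
 * atomic-long.h), my_inc_wrap() in do_inc_wrap() would be treated as an
 * undeclared function.
 */
#ifndef CONFIG_FEATURE
#define my_inc_wrap(v) my_inc(v)
#endif

static inline void my_inc(int *v) { (*v)++; }

static inline void do_inc_wrap(int *v)
{
	my_inc_wrap(v);		/* expands to my_inc(v) when the option is off */
}

int main(void)
{
	int x = 0;

	do_inc_wrap(&x);
	printf("x = %d\n", x);	/* prints x = 1 */
	return 0;
}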
Thanks,

Colin
---
 include/asm-generic/atomic-long.h | 43 ++++++++++++++++-----------------------
 1 file changed, 18 insertions(+), 25 deletions(-)

diff --git a/include/asm-generic/atomic-long.h b/include/asm-generic/atomic-long.h
index 790cb00..6f1dc3e 100644
--- a/include/asm-generic/atomic-long.h
+++ b/include/asm-generic/atomic-long.h
@@ -46,6 +46,22 @@ typedef atomic_t atomic_long_wrap_t;
 
 #endif
 
+#ifndef CONFIG_HARDENED_ATOMIC
+#define atomic_read_wrap(v) atomic_read(v)
+#define atomic_set_wrap(v, i) atomic_set((v), (i))
+#define atomic_add_wrap(i, v) atomic_add((i), (v))
+#define atomic_add_unless_wrap(v, i, j) atomic_add_unless((v), (i), (j))
+#define atomic_sub_wrap(i, v) atomic_sub((i), (v))
+#define atomic_inc_wrap(v) atomic_inc(v)
+#define atomic_inc_and_test_wrap(v) atomic_inc_and_test(v)
+#define atomic_inc_return_wrap(v) atomic_inc_return(v)
+#define atomic_add_return_wrap(i, v) atomic_add_return((i), (v))
+#define atomic_dec_wrap(v) atomic_dec(v)
+#define atomic_cmpxchg_wrap(v, o, n) atomic_cmpxchg((v), (o), (n))
+#define atomic_xchg_wrap(v, i) atomic_xchg((v), (i))
+#define atomic_long_sub_and_test_wrap(i, v) atomic_long_sub_and_test(i, v)
+#endif /* CONFIG_HARDENED_ATOMIC */
+
 #define ATOMIC_LONG_READ_OP(mo, suffix)						\
 static inline long atomic_long_read##mo##suffix(const atomic_long##suffix##_t *l)\
 {										\
@@ -233,12 +249,14 @@ static inline int atomic_long_sub_and_test(long i, atomic_long_t *l)
 	return ATOMIC_LONG_PFX(_sub_and_test)(i, v);
 }
 
+#ifdef CONFIG_HARDENED_ATOMIC
 static inline int atomic_long_sub_and_test_wrap(long i, atomic_long_wrap_t *l)
 {
 	ATOMIC_LONG_PFX(_wrap_t) *v = (ATOMIC_LONG_PFX(_wrap_t) *)l;
 
 	return ATOMIC_LONG_PFX(_sub_and_test_wrap)(i, v);
 }
+#endif
 
 static inline int atomic_long_dec_and_test(atomic_long_t *l)
 {
@@ -291,29 +309,4 @@ static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 #define atomic_long_inc_not_zero(l) \
 	ATOMIC_LONG_PFX(_inc_not_zero)((ATOMIC_LONG_PFX(_t) *)(l))
 
-#ifndef CONFIG_HARDENED_ATOMIC
-#define atomic_read_wrap(v) atomic_read(v)
-#define atomic_set_wrap(v, i) atomic_set((v), (i))
-#define atomic_add_wrap(i, v) atomic_add((i), (v))
-#define atomic_add_unless_wrap(v, i, j) atomic_add_unless((v), (i), (j))
-#define atomic_sub_wrap(i, v) atomic_sub((i), (v))
-#define atomic_inc_wrap(v) atomic_inc(v)
-#define atomic_inc_and_test_wrap(v) atomic_inc_and_test(v)
-#define atomic_inc_return_wrap(v) atomic_inc_return(v)
-#define atomic_add_return_wrap(i, v) atomic_add_return((i), (v))
-#define atomic_dec_wrap(v) atomic_dec(v)
-#define atomic_cmpxchg_wrap(v, o, n) atomic_cmpxchg((v), (o), (n))
-#define atomic_xchg_wrap(v, i) atomic_xchg((v), (i))
-#define atomic_long_read_wrap(v) atomic_long_read(v)
-#define atomic_long_set_wrap(v, i) atomic_long_set((v), (i))
-#define atomic_long_add_wrap(i, v) atomic_long_add((i), (v))
-#define atomic_long_sub_wrap(i, v) atomic_long_sub((i), (v))
-#define atomic_long_inc_wrap(v) atomic_long_inc(v)
-#define atomic_long_add_return_wrap(i, v) atomic_long_add_return((i), (v))
-#define atomic_long_inc_return_wrap(v) atomic_long_inc_return(v)
-#define atomic_long_sub_and_test_wrap(i, v) atomic_long_sub_and_test((i), (v))
-#define atomic_long_dec_wrap(v) atomic_long_dec(v)
-#define atomic_long_xchg_wrap(v, i) atomic_long_xchg((v), (i))
-#endif /* CONFIG_HARDENED_ATOMIC */
-
 #endif /* _ASM_GENERIC_ATOMIC_LONG_H */
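For completeness, the #ifdef guard added around atomic_long_sub_and_test_wrap()
addresses the other half of the problem. Another made-up sketch (again
illustration only, not kernel code): once the fallback macro exists, an
unguarded inline function with the same name is rewritten by the preprocessor
into a second definition of the plain function, hence the redefinition error.

/* sketch2.c -- illustration only; CONFIG_FEATURE is deliberately unset. */
#ifndef CONFIG_FEATURE
#define my_sub_and_test_wrap(i, v) my_sub_and_test(i, v)
#endif

static inline int my_sub_and_test(int i, int *v)
{
	*v -= i;
	return *v == 0;
}

#ifdef CONFIG_FEATURE
/*
 * Without this guard, the function-like macro above would expand the name
 * below into my_sub_and_test(int i, int *v), i.e. a redefinition of
 * my_sub_and_test(), when CONFIG_FEATURE is unset.
 */
static inline int my_sub_and_test_wrap(int i, int *v)
{
	return my_sub_and_test(i, v);
}
#endif

int main(void)
{
	int x = 3;

	/* expands to my_sub_and_test(3, &x) when CONFIG_FEATURE is unset */
	return my_sub_and_test_wrap(3, &x) ? 0 : 1;
}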