From patchwork Tue Oct 18 14:59:20 2016
X-Patchwork-Submitter: Colin Vidal
X-Patchwork-Id: 9382291
From: Colin Vidal <colin@cvidal.org>
To: kernel-hardening@lists.openwall.com, "Reshetova, Elena",
	AKASHI Takahiro, David Windsor, Kees Cook, Hans Liljestrand
Cc: Colin Vidal <colin@cvidal.org>
Date: Tue, 18 Oct 2016 16:59:20 +0200
Message-Id: <1476802761-24340-2-git-send-email-colin@cvidal.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1476802761-24340-1-git-send-email-colin@cvidal.org>
References: <1476802761-24340-1-git-send-email-colin@cvidal.org>
Subject: [kernel-hardening] [RFC 1/2] Reorder and guard the atomic_*_wrap
 definitions to avoid implicit-definition and redefinition errors when
 CONFIG_HARDENED_ATOMIC is unset
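
The failure mode being fixed: when CONFIG_HARDENED_ATOMIC is unset,
include/asm-generic/atomic-long.h and include/linux/atomic.h can each
supply fallback definitions that map the atomic_*_wrap operations onto
their plain counterparts. With the fallback block sitting at the bottom
of atomic-long.h, a *_wrap operation could be used before any definition
was visible (an "implicitly defined" error), and a second unguarded
fallback could clash with an earlier one (a "redefined" error). Moving
the block above the first use and wrapping each definition in #ifndef
lets the first visible definition win. A minimal userspace sketch of the
guard pattern (my_inc and my_inc_wrap are invented names for
illustration, not kernel API):

	#include <stdio.h>

	/* Stand-in for a plain, wrapping-allowed atomic op. */
	#define my_inc(v) (++(*(v)))

	/* First provider of the fallback: defines my_inc_wrap only if
	 * nobody else already has. */
	#ifndef my_inc_wrap
	#define my_inc_wrap(v) my_inc(v)
	#endif

	/* Second provider: silently skipped because my_inc_wrap already
	 * exists. Without the guard, a differing second definition would
	 * be a macro redefinition error. */
	#ifndef my_inc_wrap
	#define my_inc_wrap(v) my_inc(v)
	#endif

	int main(void)
	{
		int x = 0;
		printf("%d\n", my_inc_wrap(&x)); /* prints 1 */
		return 0;
	}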
Signed-off-by: Colin Vidal <colin@cvidal.org>
---
 include/asm-generic/atomic-long.h | 55 +++++++++++++++++++++------------------
 include/linux/atomic.h            | 55 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+), 25 deletions(-)

diff --git a/include/asm-generic/atomic-long.h b/include/asm-generic/atomic-long.h
index 790cb00..94d712b 100644
--- a/include/asm-generic/atomic-long.h
+++ b/include/asm-generic/atomic-long.h
@@ -46,6 +46,30 @@ typedef atomic_t atomic_long_wrap_t;
 
 #endif
 
+#ifndef CONFIG_HARDENED_ATOMIC
+#define atomic_read_wrap(v) atomic_read(v)
+#define atomic_set_wrap(v, i) atomic_set((v), (i))
+#define atomic_add_wrap(i, v) atomic_add((i), (v))
+#define atomic_add_unless_wrap(v, i, j) atomic_add_unless((v), (i), (j))
+#define atomic_sub_wrap(i, v) atomic_sub((i), (v))
+#define atomic_inc_wrap(v) atomic_inc(v)
+#define atomic_inc_and_test_wrap(v) atomic_inc_and_test(v)
+#ifndef atomic_inc_return_wrap
+#define atomic_inc_return_wrap(v) atomic_inc_return(v)
+#endif
+#ifndef atomic_add_return_wrap
+#define atomic_add_return_wrap(i, v) atomic_add_return((i), (v))
+#endif
+#define atomic_dec_wrap(v) atomic_dec(v)
+#ifndef atomic_xchg_wrap
+#define atomic_xchg_wrap(v, i) atomic_xchg((v), (i))
+#endif
+#define atomic_long_inc_wrap(v) atomic_long_inc(v)
+#define atomic_long_dec_wrap(v) atomic_long_dec(v)
+#define atomic_long_xchg_wrap(v, n) atomic_long_xchg(v, n)
+#define atomic_long_cmpxchg_wrap(l, o, n) atomic_long_cmpxchg(l, o, n)
+#endif /* CONFIG_HARDENED_ATOMIC */
+
 #define ATOMIC_LONG_READ_OP(mo, suffix) \
 static inline long atomic_long_read##mo##suffix(const atomic_long##suffix##_t *l)\
 { \
@@ -104,6 +128,12 @@ ATOMIC_LONG_ADD_SUB_OP(sub, _release,)
 #define atomic_long_cmpxchg(l, old, new) \
 	(ATOMIC_LONG_PFX(_cmpxchg)((ATOMIC_LONG_PFX(_t) *)(l), (old), (new)))
 
+#ifdef CONFIG_HARDENED_ATOMIC
+#define atomic_long_cmpxchg_wrap(l, old, new) \
+	(ATOMIC_LONG_PFX(_cmpxchg_wrap)((ATOMIC_LONG_PFX(_wrap_t) *)(l),\
+					(old), (new)))
+#endif
+
 #define atomic_long_xchg_relaxed(v, new) \
 	(ATOMIC_LONG_PFX(_xchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
 #define atomic_long_xchg_acquire(v, new) \
@@ -291,29 +321,4 @@ static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 #define atomic_long_inc_not_zero(l) \
 	ATOMIC_LONG_PFX(_inc_not_zero)((ATOMIC_LONG_PFX(_t) *)(l))
 
-#ifndef CONFIG_HARDENED_ATOMIC
-#define atomic_read_wrap(v) atomic_read(v)
-#define atomic_set_wrap(v, i) atomic_set((v), (i))
-#define atomic_add_wrap(i, v) atomic_add((i), (v))
-#define atomic_add_unless_wrap(v, i, j) atomic_add_unless((v), (i), (j))
-#define atomic_sub_wrap(i, v) atomic_sub((i), (v))
-#define atomic_inc_wrap(v) atomic_inc(v)
-#define atomic_inc_and_test_wrap(v) atomic_inc_and_test(v)
-#define atomic_inc_return_wrap(v) atomic_inc_return(v)
-#define atomic_add_return_wrap(i, v) atomic_add_return((i), (v))
-#define atomic_dec_wrap(v) atomic_dec(v)
-#define atomic_cmpxchg_wrap(v, o, n) atomic_cmpxchg((v), (o), (n))
-#define atomic_xchg_wrap(v, i) atomic_xchg((v), (i))
-#define atomic_long_read_wrap(v) atomic_long_read(v)
-#define atomic_long_set_wrap(v, i) atomic_long_set((v), (i))
-#define atomic_long_add_wrap(i, v) atomic_long_add((i), (v))
-#define atomic_long_sub_wrap(i, v) atomic_long_sub((i), (v))
-#define atomic_long_inc_wrap(v) atomic_long_inc(v)
-#define atomic_long_add_return_wrap(i, v) atomic_long_add_return((i), (v))
-#define atomic_long_inc_return_wrap(v) atomic_long_inc_return(v)
-#define atomic_long_sub_and_test_wrap(i, v) \
-	atomic_long_sub_and_test((i), (v))
-#define atomic_long_dec_wrap(v) atomic_long_dec(v)
-#define atomic_long_xchg_wrap(v, i) atomic_long_xchg((v), (i))
-#endif /* CONFIG_HARDENED_ATOMIC */
-
 #endif /* _ASM_GENERIC_ATOMIC_LONG_H */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index b5817c8..be16ea1 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -89,6 +89,11 @@
 #define atomic_add_return(...) \
 	__atomic_op_fence(atomic_add_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic_add_return_wrap
+#define atomic_add_return_wrap(...) \
+	__atomic_op_fence(atomic_add_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic_add_return_relaxed */
 
 /* atomic_inc_return_relaxed */
@@ -113,6 +118,11 @@
 #define atomic_inc_return(...) \
 	__atomic_op_fence(atomic_inc_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic_inc_return_wrap
+#define atomic_inc_return_wrap(...) \
+	__atomic_op_fence(atomic_inc_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic_inc_return_relaxed */
 
 /* atomic_sub_return_relaxed */
@@ -137,6 +147,11 @@
 #define atomic_sub_return(...) \
 	__atomic_op_fence(atomic_sub_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic_sub_return_wrap
+#define atomic_sub_return_wrap(...) \
+	__atomic_op_fence(atomic_sub_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic_sub_return_relaxed */
 
 /* atomic_dec_return_relaxed */
@@ -161,6 +176,11 @@
 #define atomic_dec_return(...) \
 	__atomic_op_fence(atomic_dec_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic_dec_return_wrap
+#define atomic_dec_return_wrap(...) \
+	__atomic_op_fence(atomic_dec_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic_dec_return_relaxed */
 
 
@@ -421,6 +441,11 @@
 #define atomic_cmpxchg(...) \
 	__atomic_op_fence(atomic_cmpxchg, __VA_ARGS__)
 #endif
+
+#ifndef atomic_cmpxchg_wrap
+#define atomic_cmpxchg_wrap(...) \
+	__atomic_op_fence(atomic_cmpxchg_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic_cmpxchg_relaxed */
 
 /* cmpxchg_relaxed */
@@ -675,6 +700,11 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #define atomic64_add_return(...) \
 	__atomic_op_fence(atomic64_add_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic64_add_return_wrap
+#define atomic64_add_return_wrap(...) \
+	__atomic_op_fence(atomic64_add_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic64_add_return_relaxed */
 
 /* atomic64_inc_return_relaxed */
@@ -699,6 +729,11 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #define atomic64_inc_return(...) \
 	__atomic_op_fence(atomic64_inc_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic64_inc_return_wrap
+#define atomic64_inc_return_wrap(...) \
+	__atomic_op_fence(atomic64_inc_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic64_inc_return_relaxed */
 
 
@@ -724,6 +759,11 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #define atomic64_sub_return(...) \
 	__atomic_op_fence(atomic64_sub_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic64_sub_return_wrap
+#define atomic64_sub_return_wrap(...) \
+	__atomic_op_fence(atomic64_sub_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic64_sub_return_relaxed */
 
 /* atomic64_dec_return_relaxed */
@@ -748,6 +788,11 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #define atomic64_dec_return(...) \
 	__atomic_op_fence(atomic64_dec_return, __VA_ARGS__)
 #endif
+
+#ifndef atomic64_dec_return_wrap
+#define atomic64_dec_return_wrap(...) \
+	__atomic_op_fence(atomic64_dec_return_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic64_dec_return_relaxed */
 
 
@@ -984,6 +1029,11 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #define atomic64_xchg(...) \
 	__atomic_op_fence(atomic64_xchg, __VA_ARGS__)
 #endif
+
+#ifndef atomic64_xchg_wrap
+#define atomic64_xchg_wrap(...) \
+	__atomic_op_fence(atomic64_xchg_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic64_xchg_relaxed */
 
 /* atomic64_cmpxchg_relaxed */
@@ -1008,6 +1058,11 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #define atomic64_cmpxchg(...) \
 	__atomic_op_fence(atomic64_cmpxchg, __VA_ARGS__)
 #endif
+
+#ifndef atomic64_cmpxchg_wrap
+#define atomic64_cmpxchg_wrap(...) \
+	__atomic_op_fence(atomic64_cmpxchg_wrap, __VA_ARGS__)
+#endif
 #endif /* atomic64_cmpxchg_relaxed */
 
 #ifndef atomic64_andnot
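
For context on what the added fallbacks expand to: __atomic_op_fence()
builds the fully-ordered variant of an operation from its _relaxed form
by token-pasting the suffix and bracketing the call with full barriers.
A sketch of how it is defined in linux/atomic.h of this era
(reconstructed here for reference; check the tree for the exact form):

	#define __atomic_op_fence(op, args...)				\
	({								\
		typeof(op##_relaxed(args)) __ret;			\
		smp_mb__before_atomic();				\
		__ret = op##_relaxed(args);				\
		smp_mb__after_atomic();					\
		__ret;							\
	})

So atomic64_xchg_wrap(v, new), for example, becomes a fenced call to
atomic64_xchg_wrap_relaxed(v, new). That is why each added block keys
its #ifndef on the fully-ordered _wrap name: an architecture that
already defines it keeps its own version, while one that provides only
the _relaxed variant picks up the generic fenced definition.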