From patchwork Fri Nov 11 13:00:34 2016
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 9422989
Date: Fri, 11 Nov 2016 14:00:34 +0100
From: Peter Zijlstra
To: Mark Rutland
Cc: kernel-hardening@lists.openwall.com, Kees Cook, Greg KH, Will Deacon,
 Elena Reshetova, Arnd Bergmann, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", LKML
Message-ID: <20161111130034.GO3157@twins.programming.kicks-ass.net>
References: <1478809488-18303-1-git-send-email-elena.reshetova@intel.com>
 <20161110203749.GV3117@twins.programming.kicks-ass.net>
 <20161110204838.GE17134@arm.com>
 <20161110211310.GX3117@twins.programming.kicks-ass.net>
 <20161110222744.GD8086@kroah.com>
 <20161110235714.GR3568@worktop.programming.kicks-ass.net>
 <1478824161.7326.5.camel@cvidal.org>
 <20161111124126.GG11945@leverpostej>
 <20161111124755.GI3117@twins.programming.kicks-ass.net>
In-Reply-To: <20161111124755.GI3117@twins.programming.kicks-ass.net>
Subject: Re: [kernel-hardening] Re: [RFC v4 PATCH 00/13] HARDENED_ATOMIC

On Fri, Nov 11, 2016 at 01:47:55PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 11, 2016 at 12:41:27PM +0000, Mark Rutland wrote:
> > Regardless of atomic_t semantics, a refcount_t would be far more obvious
> > to developers than atomic_t and/or kref, and better documents the intent
> > of code using it.
> >
> > We'd still see abuse of atomic_t (and so this won't solve the problems
> > Kees mentioned), but even as something orthogonal I think that would
> > make sense to have.
>
> Furthermore, you could implement that refcount_t stuff using
> atomic_cmpxchg() in generic code. While that is sub-optimal for ll/sc
> architectures, you at least get generic code that works to get started.
>
> Also, I suspect that if your refcounts are heavily contended, you'll
> have other problems than the performance of these primitives.
>
> Code for refcount_inc(), refcount_inc_not_zero() and
> refcount_sub_and_test() can be copy-pasted from the kref patch I sent
> yesterday.

A wee bit like so...
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
new file mode 100644
index 000000000000..d1eae0d2345e
--- /dev/null
+++ b/include/linux/refcount.h
@@ -0,0 +1,75 @@
+#ifndef _LINUX_REFCOUNT_H
+#define _LINUX_REFCOUNT_H
+
+#include <linux/atomic.h>
+
+typedef struct refcount_struct {
+	atomic_t refs;
+} refcount_t;
+
+static inline void refcount_inc(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		WARN_ON_ONCE(!val);
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+}
+
+static inline bool refcount_inc_not_zero(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		if (!val)
+			return false;
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return true;
+}
+
+static inline bool refcount_sub_and_test(int i, refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		new = val - i;
+		if (new > val)
+			BUG(); /* underflow */
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return !new;
+}
+
+static inline bool refcount_dec_and_test(refcount_t *r)
+{
+	return refcount_sub_and_test(1, r);
+}
+
+#endif /* _LINUX_REFCOUNT_H */
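
For illustration only (not part of the patch above): a minimal sketch of how a
hypothetical refcounted object could use these primitives. The struct foo,
foo_alloc(), foo_get() and foo_put() names are invented for this example, and
since the header above does not yet provide an init helper, foo_alloc() seeds
the underlying atomic_t directly.

#include <linux/slab.h>
#include <linux/refcount.h>

struct foo {
	refcount_t ref;
	/* ... object payload ... */
};

static struct foo *foo_alloc(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;

	/* No refcount init helper in the patch above; start the count at 1. */
	atomic_set(&f->ref.refs, 1);
	return f;
}

static struct foo *foo_get(struct foo *f)
{
	/* Take an extra reference; warns on increment-from-zero, traps on overflow. */
	refcount_inc(&f->ref);
	return f;
}

static void foo_put(struct foo *f)
{
	/* Drop a reference; free the object on the final put. */
	if (refcount_dec_and_test(&f->ref))
		kfree(f);
}

refcount_inc_not_zero() would be the lookup-side variant, for paths that may
race with the last put and must not resurrect an object whose count has
already dropped to zero.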