Message ID: 20161111130034.GO3157@twins.programming.kicks-ass.net (mailing list archive)
State: New, archived
On Fri, 11 Nov 2016, Peter Zijlstra wrote:
> A wee bit like so...
> +
> +static inline bool refcount_sub_and_test(int i, refcount_t *r)

Why would we want to expose that at all? refcount_inc() and
refcount_dec_and_test() is what is required for refcounting.

I know there are a few users of kref_sub() in tree, but that's all
undocumented voodoo, which should not be proliferated.

Thanks,

	tglx
On Fri, Nov 11, 2016 at 03:39:05PM +0100, Thomas Gleixner wrote:
> On Fri, 11 Nov 2016, Peter Zijlstra wrote:
> > A wee bit like so...
> > +
> > +static inline bool refcount_sub_and_test(int i, refcount_t *r)
>
> Why would we want to expose that at all? refcount_inc() and
> refcount_dec_and_test() is what is required for refcounting.
>
> I know there are a few users of kref_sub() in tree, but that's all
> undocumented voodoo, which should not be proliferated.

I tend to agree.

There are a few other sites that do multiple get/put as well using
atomic_t. Supporting them using refcount_t is trivial -- simply match
the atomic_*() functions in semantics with added wrapper tests -- but
you're right in that having these encourages 'creative' use, which we
would be better off without. Ideally the audit would include sanitizing
this.

Moreover, there really is only a handful of these creative users, so
maybe we could just leave them be.
On Fri, Nov 11, 2016 at 02:00:34PM +0100, Peter Zijlstra wrote:
> +static inline bool refcount_sub_and_test(int i, refcount_t *r)
> +{
> +	unsigned int old, new, val = atomic_read(&r->refs);
> +
> +	for (;;) {

Regardless of the sub_and_test vs inc_and_test issue, this should
probably also have:

		if (val == UINT_MAX)
			return false;

such that we stay saturated. If for some reason someone can trigger
more dec's than inc's, we'd be hosed.

> +		new = val - i;
> +		if (new > val)
> +			BUG(); /* underflow */
> +
> +		old = atomic_cmpxchg_release(&r->refs, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	return !new;
> +}
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
new file mode 100644
index 000000000000..d1eae0d2345e
--- /dev/null
+++ b/include/linux/refcount.h
@@ -0,0 +1,75 @@
+#ifndef _LINUX_REFCOUNT_H
+#define _LINUX_REFCOUNT_H
+
+#include <linux/atomic.h>
+
+typedef struct refcount_struct {
+	atomic_t refs;
+} refcount_t;
+
+static inline void refcount_inc(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		WARN_ON_ONCE(!val);
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+}
+
+static inline bool refcount_inc_not_zero(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		if (!val)
+			return false;
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return true;
+}
+
+static inline bool refcount_sub_and_test(int i, refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		new = val - i;
+		if (new > val)
+			BUG(); /* underflow */
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return !new;
+}
+
+static inline bool refcount_dec_and_test(refcount_t *r)
+{
+	return refcount_sub_and_test(1, r);
+}
+
+#endif /* _LINUX_REFCOUNT_H */