From patchwork Fri Mar 3 01:55:59 2017
X-Patchwork-Submitter: David Windsor
X-Patchwork-Id: 9601933
From: David Windsor <dwindsor@gmail.com>
To: peterz@infradead.org, mingo@kernel.org, elena.reshetova@intel.com,
	dwindsor@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Date: Thu, 2 Mar 2017 20:55:59 -0500
Message-Id: <1488506159-3506-1-git-send-email-dwindsor@gmail.com>
X-Mailer: git-send-email 2.7.4
Subject: [kernel-hardening] [PATCH] refcount: add refcount_t API kernel-doc comments

This adds kernel-doc comments for the new refcount_t API.
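As a rough illustration of the take/release pattern these comments
document (a minimal sketch, not part of the patch itself; struct my_obj,
my_obj_get() and my_obj_put() are invented names):

  #include <linux/refcount.h>
  #include <linux/slab.h>

  /* Hypothetical refcounted object, for illustration only. */
  struct my_obj {
          refcount_t refs;
          /* ... payload ... */
  };

  static struct my_obj *my_obj_alloc(void)
  {
          struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

          if (obj)
                  refcount_set(&obj->refs, 1);    /* initial reference */
          return obj;
  }

  static struct my_obj *my_obj_get(struct my_obj *obj)
  {
          /* Saturates at UINT_MAX and WARNs rather than overflowing. */
          refcount_inc(&obj->refs);
          return obj;
  }

  static void my_obj_put(struct my_obj *obj)
  {
          /* Release ordering: prior stores complete before the free. */
          if (refcount_dec_and_test(&obj->refs))
                  kfree(obj);
  }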
v2: incorporate fixes from Peter Zijlstra and Ingo Molnar

Signed-off-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 lib/refcount.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 110 insertions(+), 12 deletions(-)

diff --git a/lib/refcount.c b/lib/refcount.c
index 1d33366..d6e317a 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -37,6 +37,24 @@
 #include
 #include
 
+/**
+ * refcount_add_not_zero - add a value to a refcount unless it is 0
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Will saturate at UINT_MAX and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_inc(), or one of its variants, should instead be used to
+ * increment a reference count.
+ *
+ * Return: false if the passed refcount is 0, true otherwise
+ */
 bool refcount_add_not_zero(unsigned int i, refcount_t *r)
 {
 	unsigned int old, new, val = atomic_read(&r->refs);
@@ -64,18 +82,39 @@ bool refcount_add_not_zero(unsigned int i, refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_add_not_zero);
 
+/**
+ * refcount_add - add a value to a refcount
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_add(), but will saturate at UINT_MAX and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_inc(), or one of its variants, should instead be used to
+ * increment a reference count.
+ */
 void refcount_add(unsigned int i, refcount_t *r)
 {
 	WARN(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n");
 }
 EXPORT_SYMBOL_GPL(refcount_add);
 
-/*
- * Similar to atomic_inc_not_zero(), will saturate at UINT_MAX and WARN.
+/**
+ * refcount_inc_not_zero - increment a refcount unless it is 0
+ * @r: the refcount to increment
+ *
+ * Similar to atomic_inc_not_zero(), but will saturate at UINT_MAX and WARN.
  *
  * Provides no memory ordering, it is assumed the caller has guaranteed the
  * object memory to be stable (RCU, etc.). It does provide a control dependency
  * and thereby orders future stores. See the comment on top.
+ *
+ * Return: true if the increment was successful, false otherwise
  */
 bool refcount_inc_not_zero(refcount_t *r)
 {
@@ -103,11 +142,17 @@ bool refcount_inc_not_zero(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_inc_not_zero);
 
-/*
- * Similar to atomic_inc(), will saturate at UINT_MAX and WARN.
+/**
+ * refcount_inc - increment a refcount
+ * @r: the refcount to increment
+ *
+ * Similar to atomic_inc(), but will saturate at UINT_MAX and WARN.
  *
  * Provides no memory ordering, it is assumed the caller already has a
- * reference on the object, will WARN when this is not so.
+ * reference on the object.
+ *
+ * Will WARN if the refcount is 0, as this represents a possible use-after-free
+ * condition.
  */
 void refcount_inc(refcount_t *r)
 {
@@ -115,6 +160,26 @@ void refcount_inc(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_inc);
 
+/**
+ * refcount_sub_and_test - subtract from a refcount and test if it is 0
+ * @i: amount to subtract from the refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_dec_and_test(), but it will WARN, return false and
+ * ultimately leak on underflow and will fail to decrement when saturated
+ * at UINT_MAX.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides a control dependency such that free() must come after.
+ * See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_dec(), or one of its variants, should instead be used to
+ * decrement a reference count.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
+ */
 bool refcount_sub_and_test(unsigned int i, refcount_t *r)
 {
 	unsigned int old, new, val = atomic_read(&r->refs);
@@ -140,13 +205,18 @@ bool refcount_sub_and_test(unsigned int i, refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_sub_and_test);
 
-/*
+/**
+ * refcount_dec_and_test - decrement a refcount and test if it is 0
+ * @r: the refcount
+ *
  * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
  * decrement when saturated at UINT_MAX.
  *
  * Provides release memory ordering, such that prior loads and stores are done
  * before, and provides a control dependency such that free() must come after.
  * See the comment on top.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
  */
 bool refcount_dec_and_test(refcount_t *r)
 {
@@ -154,21 +224,26 @@ bool refcount_dec_and_test(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_dec_and_test);
 
-/*
+/**
+ * refcount_dec - decrement a refcount
+ * @r: the refcount
+ *
  * Similar to atomic_dec(), it will WARN on underflow and fail to decrement
  * when saturated at UINT_MAX.
  *
  * Provides release memory ordering, such that prior loads and stores are done
  * before.
  */
-
 void refcount_dec(refcount_t *r)
 {
 	WARN(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
 }
 EXPORT_SYMBOL_GPL(refcount_dec);
 
-/*
+/**
+ * refcount_dec_if_one - decrement a refcount if it is 1
+ * @r: the refcount
+ *
  * No atomic_t counterpart, it attempts a 1 -> 0 transition and returns the
  * success thereof.
  *
@@ -178,6 +253,8 @@ EXPORT_SYMBOL_GPL(refcount_dec);
  * It can be used like a try-delete operator; this explicit case is provided
  * and not cmpxchg in generic, because that would allow implementing unsafe
  * operations.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
  */
 bool refcount_dec_if_one(refcount_t *r)
 {
@@ -185,11 +262,16 @@ bool refcount_dec_if_one(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_dec_if_one);
 
-/*
+/**
+ * refcount_dec_not_one - decrement a refcount if it is not 1
+ * @r: the refcount
+ *
  * No atomic_t counterpart, it decrements unless the value is 1, in which case
  * it will return false.
  *
  * Was often done like: atomic_add_unless(&var, -1, 1)
+ *
+ * Return: true if the decrement operation was successful, false otherwise
  */
 bool refcount_dec_not_one(refcount_t *r)
 {
@@ -219,13 +301,21 @@ bool refcount_dec_not_one(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(refcount_dec_not_one);
 
-/*
+/**
+ * refcount_dec_and_mutex_lock - return holding mutex if able to decrement
+ * refcount to 0
+ * @r: the refcount
+ * @lock: the mutex to be locked
+ *
  * Similar to atomic_dec_and_mutex_lock(), it will WARN on underflow and fail
  * to decrement when saturated at UINT_MAX.
  *
  * Provides release memory ordering, such that prior loads and stores are done
  * before, and provides a control dependency such that free() must come after.
  * See the comment on top.
+ *
+ * Return: true and hold mutex if able to decrement refcount to 0, false
+ * otherwise
  */
 bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
 {
@@ -242,13 +332,21 @@ bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
 }
 EXPORT_SYMBOL_GPL(refcount_dec_and_mutex_lock);
 
-/*
+/**
+ * refcount_dec_and_lock - return holding spinlock if able to decrement
+ * refcount to 0
+ * @r: the refcount
+ * @lock: the spinlock to be locked
+ *
  * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
  * decrement when saturated at UINT_MAX.
  *
  * Provides release memory ordering, such that prior loads and stores are done
  * before, and provides a control dependency such that free() must come after.
  * See the comment on top.
+ *
+ * Return: true and hold spinlock if able to decrement refcount to 0, false
+ * otherwise
  */
 bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
 {
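For the dec-and-lock variants documented above, a typical caller takes the
lock only on the final reference drop. A minimal sketch of that pattern
(not from the patch; cache_lock, struct cache_entry and cache_entry_put()
are invented for illustration):

  #include <linux/refcount.h>
  #include <linux/spinlock.h>
  #include <linux/list.h>
  #include <linux/slab.h>

  /* Hypothetical lookup structure protected by a spinlock. */
  static DEFINE_SPINLOCK(cache_lock);

  struct cache_entry {
          refcount_t refs;
          struct list_head node;
  };

  static void cache_entry_put(struct cache_entry *e)
  {
          /*
           * The spinlock is acquired only when the count actually drops
           * to zero, so the final put can unlink and free the entry
           * without racing against concurrent lookups.
           */
          if (refcount_dec_and_lock(&e->refs, &cache_lock)) {
                  list_del(&e->node);
                  spin_unlock(&cache_lock);
                  kfree(e);
          }
  }

refcount_dec_and_mutex_lock() is used the same way when the structure is
protected by a mutex instead of a spinlock.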