From patchwork Mon Nov 5 13:31:16 2012
From: Thomas Hellstrom
To: airlied@gmail.com, airlied@redhat.com
Cc: Thomas Hellstrom, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 2/4] kref: Implement kref_get_unless_zero
Date: Mon, 5 Nov 2012 14:31:16 +0100
Message-Id: <1352122278-12896-3-git-send-email-thellstrom@vmware.com>
In-Reply-To: <1352122278-12896-1-git-send-email-thellstrom@vmware.com>
References: <1352122278-12896-1-git-send-email-thellstrom@vmware.com>

This function is intended to simplify locking around refcounting for
objects that can be looked up from a lookup structure, and which are
removed from that lookup structure in the object destructor.
Operations on such objects require at least a read lock around
lookup + kref_get, and a write lock around kref_put + removal from the
lookup structure. Furthermore, RCU implementations become extremely
tricky. With a lookup followed by a kref_get_unless_zero *with return
value check*, locking in the kref_put path can be deferred to the
actual removal from the lookup structure, and RCU lookups become
trivial.
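
As a rough sketch (not part of this patch) of the pattern described above,
consider a hypothetical struct foo kept on an RCU-protected list. All names
(foo, foo_list, foo_list_lock, ...) are made up for illustration, and the
return-value check follows the convention documented in this patch, where
kref_get_unless_zero() returns 0 when the increment succeeded:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical object type, only for illustration. */
struct foo {
	struct list_head head;	/* entry in the lookup structure */
	struct rcu_head rcu;
	struct kref kref;
	int id;
};

static LIST_HEAD(foo_list);		/* the lookup structure */
static DEFINE_SPINLOCK(foo_list_lock);	/* protects insert/remove only */

/* Creation: refcount starts at 1, then publish under the write lock. */
static struct foo *foo_create(int id)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	f->id = id;
	kref_init(&f->kref);
	spin_lock(&foo_list_lock);
	list_add_rcu(&f->head, &foo_list);
	spin_unlock(&foo_list_lock);
	return f;
}

/* Lookup: no lock at all, just RCU plus a conditional ref. */
static struct foo *foo_lookup(int id)
{
	struct foo *f;

	rcu_read_lock();
	list_for_each_entry_rcu(f, &foo_list, head) {
		if (f->id == id &&
		    kref_get_unless_zero(&f->kref) == 0) {
			/* 0 == success in this patch's convention; the
			 * object now cannot be freed under us. */
			rcu_read_unlock();
			return f;
		}
	}
	rcu_read_unlock();
	return NULL;
}

/* Destructor: the only place that needs the write-side lock. */
static void foo_release(struct kref *kref)
{
	struct foo *f = container_of(kref, struct foo, kref);

	spin_lock(&foo_list_lock);
	list_del_rcu(&f->head);
	spin_unlock(&foo_list_lock);
	kfree_rcu(f, rcu);
}

static void foo_put(struct foo *f)
{
	kref_put(&f->kref, foo_release);
}

With this, the lookup path takes no locks; foo_list_lock is only taken when
inserting an object and, via foo_release(), when the last reference is
dropped and the object is removed from the list.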
Signed-off-by: Thomas Hellstrom
---
 include/linux/kref.h |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index 65af688..fd16042 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -111,4 +111,25 @@ static inline int kref_put_mutex(struct kref *kref,
 	}
 	return 0;
 }
+
+/**
+ * kref_get_unless_zero - Increment refcount for object unless it is zero.
+ * @kref: object.
+ *
+ * Return 0 if the increment succeeded. Otherwise return non-zero.
+ *
+ * This function is intended to simplify locking around refcounting for
+ * objects that can be looked up from a lookup structure, and which are
+ * removed from that lookup structure in the object destructor.
+ * Operations on such objects require at least a read lock around
+ * lookup + kref_get, and a write lock around kref_put + remove from lookup
+ * structure. Furthermore, RCU implementations become extremely tricky.
+ * With a lookup followed by a kref_get_unless_zero *with return value check*,
+ * locking in the kref_put path can be deferred to the actual removal from
+ * the lookup structure and RCU lookups become trivial.
+ */
+static inline int __must_check kref_get_unless_zero(struct kref *kref)
+{
+	return !atomic_add_unless(&kref->refcount, 1, 0);
+}
 #endif /* _KREF_H_ */