From patchwork Sat Dec 24 21:50:08 2022
X-Patchwork-Submitter: Deepak R Varma
X-Patchwork-Id: 13081393
Date: Sun, 25 Dec 2022 03:20:08 +0530
From: Deepak R Varma
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
 David Airlie, Daniel Vetter, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Cc: Praveen Kumar, Deepak R Varma, Saurabh Singh Sengar
Subject: [PATCH] drm/i915: convert i915_active.count from atomic_t to refcount_t

The refcount_* APIs are designed to address known issues with the
atomic_t APIs when used for reference counting. They provide the
following distinct advantages:
 - protect the reference counters from overflow/underflow
 - avoid use-after-free errors
 - provide improved memory ordering guarantee schemes
 - neater and safer code
Hence, convert the atomic_t count member variable and the associated
atomic_*() API calls to the equivalent refcount_t type and refcount_*()
API calls.

This patch proposal addresses the following warnings generated by the
atomic_as_refcounter.cocci Coccinelle script:

    atomic_add_unless

Signed-off-by: Deepak R Varma
---
Please note: Proposed changes are compile tested only.
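
As an illustration of the mapping applied throughout this patch, here is a
minimal sketch of the same atomic_t -> refcount_t conversion on a
hypothetical object. struct foo and its helpers below are made up for this
note only and are not part of the patch:

	#include <linux/refcount.h>
	#include <linux/spinlock.h>

	struct foo {
		refcount_t count;		/* was: atomic_t count; */
		spinlock_t lock;
	};

	static void foo_init(struct foo *f)
	{
		spin_lock_init(&f->lock);
		refcount_set(&f->count, 0);	/* was: atomic_set(&f->count, 0); */
	}

	/* take a reference only if at least one is already held */
	static bool foo_get_if_busy(struct foo *f)
	{
		/* was: atomic_add_unless(&f->count, 1, 0) */
		return refcount_add_not_zero(1, &f->count);
	}

	static void foo_put(struct foo *f)
	{
		unsigned long flags;

		/* was: atomic_add_unless(&f->count, -1, 1) */
		if (refcount_dec_not_one(&f->count))
			return;

		/*
		 * Drop the final reference under the lock. Note that the
		 * refcount_t variant takes the flags argument by pointer,
		 * unlike the atomic_dec_and_lock_irqsave() macro.
		 */
		if (!refcount_dec_and_lock_irqsave(&f->count, &f->lock, &flags))
			return;

		/* ... release resources here ... */
		spin_unlock_irqrestore(&f->lock, flags);
	}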
 drivers/gpu/drm/i915/i915_active.c       | 24 +++++++++++++-----------
 drivers/gpu/drm/i915/i915_active.h       |  6 +++---
 drivers/gpu/drm/i915/i915_active_types.h |  4 ++--
 3 files changed, 18 insertions(+), 16 deletions(-)

--
2.34.1

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 7412abf166a8..4a8d873b4347 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -133,7 +133,7 @@ __active_retire(struct i915_active *ref)
 	GEM_BUG_ON(i915_active_is_idle(ref));
 
 	/* return the unused nodes to our slabcache -- flushing the allocator */
-	if (!atomic_dec_and_lock_irqsave(&ref->count, &ref->tree_lock, flags))
+	if (!refcount_dec_and_lock_irqsave(&ref->count, &ref->tree_lock, &flags))
 		return;
 
 	GEM_BUG_ON(rcu_access_pointer(ref->excl.fence));
@@ -179,8 +179,8 @@ active_work(struct work_struct *wrk)
 {
 	struct i915_active *ref = container_of(wrk, typeof(*ref), work);
 
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	if (atomic_add_unless(&ref->count, -1, 1))
+	GEM_BUG_ON(!refcount_read(&ref->count));
+	if (refcount_dec_not_one(&ref->count))
 		return;
 
 	__active_retire(ref);
@@ -189,8 +189,8 @@ active_work(struct work_struct *wrk)
 
 static void active_retire(struct i915_active *ref)
 {
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	if (atomic_add_unless(&ref->count, -1, 1))
+	GEM_BUG_ON(!refcount_read(&ref->count));
+	if (refcount_dec_not_one(&ref->count))
 		return;
 
 	if (ref->flags & I915_ACTIVE_RETIRE_SLEEPS) {
@@ -354,7 +354,7 @@ void __i915_active_init(struct i915_active *ref,
 	ref->cache = NULL;
 
 	init_llist_head(&ref->preallocated_barriers);
-	atomic_set(&ref->count, 0);
+	refcount_set(&ref->count, 0);
 	__mutex_init(&ref->mutex, "i915_active", mkey);
 	__i915_active_fence_init(&ref->excl, NULL, excl_retire);
 	INIT_WORK(&ref->work, active_work);
@@ -445,7 +445,7 @@ int i915_active_add_request(struct i915_active *ref, struct i915_request *rq)
 
 	if (replace_barrier(ref, active)) {
 		RCU_INIT_POINTER(active->fence, NULL);
-		atomic_dec(&ref->count);
+		refcount_dec(&ref->count);
 	}
 	if (!__i915_active_fence_set(active, fence))
 		__i915_active_acquire(ref);
@@ -488,14 +488,16 @@ i915_active_set_exclusive(struct i915_active *ref, struct dma_fence *f)
 bool i915_active_acquire_if_busy(struct i915_active *ref)
 {
 	debug_active_assert(ref);
-	return atomic_add_unless(&ref->count, 1, 0);
+	return refcount_add_not_zero(1, &ref->count);
 }
 
 static void __i915_active_activate(struct i915_active *ref)
 {
 	spin_lock_irq(&ref->tree_lock); /* __active_retire() */
-	if (!atomic_fetch_inc(&ref->count))
+	if (!refcount_inc_not_zero(&ref->count)) {
+		refcount_inc(&ref->count);
 		debug_active_activate(ref);
+	}
 	spin_unlock_irq(&ref->tree_lock);
 }
 
@@ -757,7 +759,7 @@ int i915_sw_fence_await_active(struct i915_sw_fence *fence,
 void i915_active_fini(struct i915_active *ref)
 {
 	debug_active_fini(ref);
-	GEM_BUG_ON(atomic_read(&ref->count));
+	GEM_BUG_ON(refcount_read(&ref->count));
 	GEM_BUG_ON(work_pending(&ref->work));
 	mutex_destroy(&ref->mutex);
 
@@ -927,7 +929,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 
 		first = first->next;
 
-		atomic_dec(&ref->count);
+		refcount_dec(&ref->count);
 		intel_engine_pm_put(barrier_to_engine(node));
 
 		kmem_cache_free(slab_cache, node);
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index 7eb44132183a..116c7c28466a 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -193,14 +193,14 @@ void i915_active_release(struct i915_active *ref);
 static inline void
 __i915_active_acquire(struct i915_active *ref)
 {
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	atomic_inc(&ref->count);
+	GEM_BUG_ON(!refcount_read(&ref->count));
+	refcount_inc(&ref->count);
 }
 
 static inline bool
 i915_active_is_idle(const struct i915_active *ref)
 {
-	return !atomic_read(&ref->count);
+	return !refcount_read(&ref->count);
 }
 
 void i915_active_fini(struct i915_active *ref);
diff --git a/drivers/gpu/drm/i915/i915_active_types.h b/drivers/gpu/drm/i915/i915_active_types.h
index b02a78ac87db..152a3a25d9f7 100644
--- a/drivers/gpu/drm/i915/i915_active_types.h
+++ b/drivers/gpu/drm/i915/i915_active_types.h
@@ -7,7 +7,7 @@
 #ifndef _I915_ACTIVE_TYPES_H_
 #define _I915_ACTIVE_TYPES_H_
 
-#include <linux/atomic.h>
+#include <linux/refcount.h>
 #include
 #include
 #include
@@ -23,7 +23,7 @@ struct i915_active_fence {
 struct active_node;
 
 struct i915_active {
-	atomic_t count;
+	refcount_t count;
 	struct mutex mutex;
 
 	spinlock_t tree_lock;