From patchwork Mon Mar 13 17:24:13 2023
X-Patchwork-Submitter: Janusz Krzysztofik
X-Patchwork-Id: 13173021
From: Janusz Krzysztofik
To: intel-gfx@lists.freedesktop.org
Date: Mon, 13 Mar 2023 18:24:13 +0100
Message-Id: <20230313172415.125932-2-janusz.krzysztofik@linux.intel.com>
In-Reply-To: <20230313172415.125932-1-janusz.krzysztofik@linux.intel.com>
References: <20230313172415.125932-1-janusz.krzysztofik@linux.intel.com>
Subject: [Intel-gfx] [PATCH 1/3] drm/i915/active: Serialize preallocation of idle barriers
Cc: Andrzej Hajda, dri-devel@lists.freedesktop.org, Rodrigo Vivi, Chris Wilson, Nirmoy Das

When collecting barriers for preallocation, we reuse either idle or non-idle
ones, whichever we find.  For non-idle barriers, we rely on their successful
deletion from their barrier tasks lists as an indication that they have not
been claimed by another thread.  For idle barriers, however, we neither
perform any similar check nor take any preventive countermeasures against
races with other threads.  We may then end up adding the same barrier to two
independent preallocated lists, and then adding it twice back to the same
engine's barrier tasks list, effectively creating a loop of llist nodes.  As
a result, searches through that barrier tasks llist may start spinning
indefinitely.  Occurrences of this issue have never been observed on CI nor
reported by users.
However, deep code analysis revealed a silent, most probably unintended
workaround that actively breaks those loops: our local implementation of
llist node deletion rebuilds barrier tasks llists in reverse order.  A
simple patch that replaces the reverse-order rebuild with a plain update of
the next pointer of the node preceding the deleted one helps reproduce the
race, though still not easily.  Once the race is fixed, we may want to adopt
that update to make the code clearer and more predictable.

To fix the issue, whenever an idle barrier is selected for preallocation,
mark it immediately as non-idle with our ERR_PTR(-EAGAIN) barrier mark, so
that other threads can no longer claim it, neither as idle nor as non-idle
(since it is not a member of any barrier tasks list).  Serialize that claim
operation against other potential concurrent updates of the active fence
pointer, and skip the node in favor of allocating a new one if it turns out
to have been claimed in the meantime by a competing thread.  Once claimed,
increase the active count of its composite tracker host immediately, while
we still know it was an idle barrier.

While at it, fortify the now only marginally racy check for the
preallocated_barriers llist still being empty when we populate it with
collected proto-barriers (assuming we need that check at all).
Fixes: 9ff33bbcda25 ("drm/i915: Reduce locking around i915_active_acquire_preallocate_barrier()")
Cc: Chris Wilson
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Janusz Krzysztofik
---
 drivers/gpu/drm/i915/i915_active.c | 50 +++++++++++++++++-------------
 1 file changed, 29 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index a9fea115f2d26..b2f79f5c257a8 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -788,8 +788,13 @@ static struct active_node *reuse_idle_barrier(struct i915_active *ref, u64 idx)
	 * node kept alive (as we reuse before parking). We prefer to reuse
	 * completely idle barriers (less hassle in manipulating the llists),
	 * but otherwise any will do.
+	 *
+	 * We reuse the request field to mark this as being our proto-node.
	 */
-	if (ref->cache && is_idle_barrier(ref->cache, idx)) {
+	if (ref->cache && is_idle_barrier(ref->cache, idx) &&
+	    !cmpxchg(__active_fence_slot(&ref->cache->base), NULL,
+		     ERR_PTR(-EAGAIN))) {
+		__i915_active_acquire(ref);
		p = &ref->cache->node;
		goto match;
	}
@@ -800,8 +805,12 @@ static struct active_node *reuse_idle_barrier(struct i915_active *ref, u64 idx)
		struct active_node *node =
			rb_entry(p, struct active_node, node);

-		if (is_idle_barrier(node, idx))
+		if (is_idle_barrier(node, idx) &&
+		    !cmpxchg(__active_fence_slot(&node->base), NULL,
+			     ERR_PTR(-EAGAIN))) {
+			__i915_active_acquire(ref);
			goto match;
+		}

		prev = p;
		if (node->timeline < idx)
@@ -827,8 +836,12 @@ static struct active_node *reuse_idle_barrier(struct i915_active *ref, u64 idx)
		if (node->timeline < idx)
			continue;

-		if (is_idle_barrier(node, idx))
+		if (is_idle_barrier(node, idx) &&
+		    !cmpxchg(__active_fence_slot(&node->base), NULL,
+			     ERR_PTR(-EAGAIN))) {
+			__i915_active_acquire(ref);
			goto match;
+		}

		/*
		 * The list of pending barriers is protected by the
@@ -889,29 +902,24 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
			if (!node)
				goto unwind;

-			RCU_INIT_POINTER(node->base.fence, NULL);
+			/* Mark this as being our unconnected proto-node */
+			RCU_INIT_POINTER(node->base.fence, ERR_PTR(-EAGAIN));
			node->base.cb.func = node_retire;
			node->timeline = idx;
			node->ref = ref;
-		}
-
-		if (!i915_active_fence_isset(&node->base)) {
-			/*
-			 * Mark this as being *our* unconnected proto-node.
-			 *
-			 * Since this node is not in any list, and we have
-			 * decoupled it from the rbtree, we can reuse the
-			 * request to indicate this is an idle-barrier node
-			 * and then we can use the rb_node and list pointers
-			 * for our tracking of the pending barrier.
-			 */
-			RCU_INIT_POINTER(node->base.fence, ERR_PTR(-EAGAIN));
-			node->base.cb.node.prev = (void *)engine;
			__i915_active_acquire(ref);
+		} else {
+			GEM_BUG_ON(rcu_access_pointer(node->base.fence) !=
+				   ERR_PTR(-EAGAIN));
		}

-		GEM_BUG_ON(rcu_access_pointer(node->base.fence) != ERR_PTR(-EAGAIN));
-		GEM_BUG_ON(barrier_to_engine(node) != engine);
+		/*
+		 * Since this node is not in any list, we have decoupled it
+		 * from the rbtree, and we reuse the request to indicate
+		 * this is a barrier node, then we can use list pointers
+		 * for our tracking of the pending barrier.
+		 */
+		node->base.cb.node.prev = (void *)engine;
		first = barrier_to_ll(node);
		first->next = prev;
		if (!last)
@@ -920,7 +928,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
	}

	GEM_BUG_ON(!llist_empty(&ref->preallocated_barriers));
-	llist_add_batch(first, last, &ref->preallocated_barriers);
+	GEM_BUG_ON(!llist_add_batch(first, last, &ref->preallocated_barriers));

	return 0;

From patchwork Mon Mar 13 17:24:14 2023
X-Patchwork-Submitter: Janusz Krzysztofik
X-Patchwork-Id: 13173022
From: Janusz Krzysztofik
To: intel-gfx@lists.freedesktop.org
Date: Mon, 13 Mar 2023 18:24:14 +0100
Message-Id: <20230313172415.125932-3-janusz.krzysztofik@linux.intel.com>
In-Reply-To: <20230313172415.125932-1-janusz.krzysztofik@linux.intel.com>
References: <20230313172415.125932-1-janusz.krzysztofik@linux.intel.com>
Subject: [Intel-gfx] [PATCH 2/3] drm/i915/active: Serialize use of barriers as fence trackers
Cc: Andrzej Hajda, dri-devel@lists.freedesktop.org, Rodrigo Vivi, Chris Wilson, Nirmoy Das

When adding a request to a composite tracker, we try to use an existing
fence tracker already registered with that composite.  The tracker we obtain
may already track another fence, may be an idle barrier, or an active
barrier.  When we acquire an idle barrier, we don't claim it in any way
until the __i915_active_fence_set() call substitutes its NULL fence pointer
with that of our request's fence.  Meanwhile, another thread looking for an
idle barrier can race with us.
If that thread is collecting barriers for preallocation, it may update the
NULL fence pointer with the ERR_PTR(-EAGAIN) barrier mark, either before or
after we manage to replace it with our request fence.  It can also corrupt
our callback list pointers when reusing them as an engine pointer (prev) and
a preallocated barriers llist node link (next), or we can corrupt their
data.

When we acquire a non-idle barrier in turn, we try to delete that barrier
from the list of barrier tasks it belongs to.  If that deletion succeeds,
we convert the barrier to an idle one by replacing its barrier mark with
NULL and decrementing the active count of its hosting composite tracker.
But as soon as we do this, we expose that barrier to the idle barrier race
described above.

Claim an acquired idle barrier right away by marking it immediately with
the ERR_PTR(-EAGAIN) barrier mark.  Serialize that operation with other
threads trying to claim a barrier, and go back and pick up another tracker
if some other thread wins the race.  Furthermore, on successful deletion of
a non-idle barrier from a barrier tasks list, don't overwrite the barrier
mark with NULL -- that's not needed at the moment since the barrier, once
deleted from its list, can no longer be acquired by any other thread as
long as all threads respect deletion results.  Also, don't decrease the
active counter of the hosting composite tracker, but skip the follow-up
step that increases it back.  For the above to work correctly, teach the
__i915_active_fence_set() function to recognize and handle non-idle
barriers correctly when requested.

The issue has never been reproduced cleanly, only identified via code
analysis while working on fence callback list corruptions that turned out
to have a complex root cause, see commit e0e6b416b25e ("drm/i915/active:
Fix misuse of non-idle barriers as fence trackers") for details.
However, it has been assumed that the issue could have become potentially
reproducible as soon as the timeline mutex locks around calls to
i915_active_fence_set() were dropped by commit df9f85d8582e ("drm/i915:
Serialise i915_active_fence_set() with itself").

Fixes: df9f85d8582e ("drm/i915: Serialise i915_active_fence_set() with itself")
Cc: Chris Wilson
Cc: stable@vger.kernel.org # v5.6+
Signed-off-by: Janusz Krzysztofik
---
 drivers/gpu/drm/i915/i915_active.c | 65 ++++++++++++++++++++----------
 1 file changed, 44 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index b2f79f5c257a8..8eb10af7928f4 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -425,11 +425,17 @@ replace_barrier(struct i915_active *ref, struct i915_active_fence *active)
	return __active_del_barrier(ref, node_from_active(active));
 }

+static inline bool is_idle_barrier(struct active_node *node, u64 idx);
+static struct dma_fence *
+____i915_active_fence_set(struct i915_active_fence *active,
+			  struct dma_fence *fence, bool barrier);
+
 int i915_active_add_request(struct i915_active *ref, struct i915_request *rq)
 {
	u64 idx = i915_request_timeline(rq)->fence_context;
	struct dma_fence *fence = &rq->fence;
	struct i915_active_fence *active;
+	bool replaced;
	int err;

	/* Prevent reaping in case we malloc/wait while building the tree */
@@ -444,13 +450,18 @@ int i915_active_add_request(struct i915_active *ref, struct i915_request *rq)
			goto out;
		}

-		if (replace_barrier(ref, active)) {
-			RCU_INIT_POINTER(active->fence, NULL);
-			atomic_dec(&ref->count);
-		}
-	} while (unlikely(is_barrier(active)));
+		replaced = replace_barrier(ref, active);
+		if (replaced)
+			break;
+
+		if (!cmpxchg(__active_fence_slot(active), NULL,
+			     ERR_PTR(-EAGAIN)))
+			break;

-	if (!__i915_active_fence_set(active, fence))
+	} while (IS_ERR_OR_NULL(rcu_access_pointer(active->fence)));
+
+	if (!____i915_active_fence_set(active, fence,
+				       is_barrier(active)) &&
+	    !replaced)
		__i915_active_acquire(ref);

 out:
@@ -1021,21 +1032,9 @@ void i915_request_add_active_barriers(struct i915_request *rq)
	spin_unlock_irqrestore(&rq->lock, flags);
 }

-/*
- * __i915_active_fence_set: Update the last active fence along its timeline
- * @active: the active tracker
- * @fence: the new fence (under construction)
- *
- * Records the new @fence as the last active fence along its timeline in
- * this active tracker, moving the tracking callbacks from the previous
- * fence onto this one. Returns the previous fence (if not already completed),
- * which the caller must ensure is executed before the new fence. To ensure
- * that the order of fences within the timeline of the i915_active_fence is
- * understood, it should be locked by the caller.
- */
-struct dma_fence *
-__i915_active_fence_set(struct i915_active_fence *active,
-			struct dma_fence *fence)
+static struct dma_fence *
+____i915_active_fence_set(struct i915_active_fence *active,
+			  struct dma_fence *fence, bool barrier)
 {
	struct dma_fence *prev;
	unsigned long flags;
@@ -1067,6 +1066,11 @@ __i915_active_fence_set(struct i915_active_fence *active,
	 */
	spin_lock_irqsave(fence->lock, flags);
	prev = xchg(__active_fence_slot(active), fence);
+	if (barrier) {
+		GEM_BUG_ON(!IS_ERR(prev));
+		prev = NULL;
+	}
+	GEM_BUG_ON(IS_ERR(prev));
	if (prev) {
		GEM_BUG_ON(prev == fence);
		spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING);
@@ -1079,6 +1083,25 @@ __i915_active_fence_set(struct i915_active_fence *active,
	return prev;
 }

+/*
+ * __i915_active_fence_set: Update the last active fence along its timeline
+ * @active: the active tracker
+ * @fence: the new fence (under construction)
+ *
+ * Records the new @fence as the last active fence along its timeline in
+ * this active tracker, moving the tracking callbacks from the previous
+ * fence onto this one. Returns the previous fence (if not already completed),
+ * which the caller must ensure is executed before the new fence.
+ * To ensure
+ * that the order of fences within the timeline of the i915_active_fence is
+ * understood, it should be locked by the caller.
+ */
+struct dma_fence *
+__i915_active_fence_set(struct i915_active_fence *active,
+			struct dma_fence *fence)
+{
+	return ____i915_active_fence_set(active, fence, false);
+}
+
 int i915_active_fence_set(struct i915_active_fence *active,
			  struct i915_request *rq)
 {

From patchwork Mon Mar 13 17:24:15 2023
X-Patchwork-Submitter: Janusz Krzysztofik
X-Patchwork-Id: 13173023
From: Janusz Krzysztofik
To: intel-gfx@lists.freedesktop.org
Date: Mon, 13 Mar 2023 18:24:15 +0100
Message-Id: <20230313172415.125932-4-janusz.krzysztofik@linux.intel.com>
In-Reply-To: <20230313172415.125932-1-janusz.krzysztofik@linux.intel.com>
References: <20230313172415.125932-1-janusz.krzysztofik@linux.intel.com>
Subject: [Intel-gfx] [PATCH 3/3] drm/i915/active: Simplify llist search-and-delete
Cc: Andrzej Hajda, dri-devel@lists.freedesktop.org, Rodrigo Vivi, Chris Wilson, Nirmoy Das

Inside ____active_del_barrier(), while searching for a node to be deleted,
we currently rebuild the barrier_tasks llist content in reverse order.
Theoretically neutral, that method was observed to provide an undocumented
workaround for unexpected loops of llist nodes appearing now and again due
to races, silently breaking those loops and thus protecting
llist_for_each_safe() from spinning indefinitely.
Having all races hopefully fixed, make that function's behavior more
predictable and easier to follow -- switch to an alternative, equally
simple but less invasive algorithm that only updates the link between the
list nodes that precede and follow the deleted node.

Signed-off-by: Janusz Krzysztofik
---
 drivers/gpu/drm/i915/i915_active.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 8eb10af7928f4..10f52eb4a4592 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -391,13 +391,14 @@ static bool ____active_del_barrier(struct i915_active *ref,
	llist_for_each_safe(pos, next, llist_del_all(&engine->barrier_tasks)) {
		if (node == barrier_from_ll(pos)) {
			node = NULL;
+			if (tail)
+				tail->next = next;
			continue;
		}

-		pos->next = head;
-		head = pos;
-		if (!tail)
-			tail = pos;
+		if (!head)
+			head = pos;
+		tail = pos;
	}
	if (head)
		llist_add_batch(head, tail, &engine->barrier_tasks);