From patchwork Mon Jan 28 01:02:34 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10783163
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Mon, 28 Jan 2019 01:02:34 +0000
Message-Id: <20190128010245.20148-17-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190128010245.20148-1-chris@chris-wilson.co.uk>
References: <20190128010245.20148-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 17/28] drm/i915: Track active timelines

Now that we pin timelines around use, we have a clearly defined lifetime
and convenient points at which we can track only the active timelines.
This allows us to reduce the list iteration to only consider those
active timelines and not all.

Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_drv.h      |  2 +-
 drivers/gpu/drm/i915/i915_gem.c      |  4 +--
 drivers/gpu/drm/i915/i915_reset.c    |  2 +-
 drivers/gpu/drm/i915/i915_timeline.c | 39 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 6a051381f535..d072f3369ee1 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1977,7 +1977,7 @@ struct drm_i915_private {
 	struct i915_gt_timelines {
 		struct mutex mutex; /* protects list, tainted by GPU */
-		struct list_head list;
+		struct list_head active_list;

 		/* Pack multiple timelines' seqnos into the same page */
 		spinlock_t hwsp_lock;

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1bd724d663d9..05627000b77d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3248,7 +3248,7 @@ wait_for_timelines(struct drm_i915_private *i915,
 		return timeout;

 	mutex_lock(&gt->mutex);
-	list_for_each_entry(tl, &gt->list, link) {
+	list_for_each_entry(tl, &gt->active_list, link) {
 		struct i915_request *rq;

 		rq = i915_gem_active_get_unlocked(&tl->last_request);
@@ -3276,7 +3276,7 @@ wait_for_timelines(struct drm_i915_private *i915,

 		/* restart after reacquiring the lock */
 		mutex_lock(&gt->mutex);
-		tl = list_entry(&gt->list, typeof(*tl), link);
+		tl = list_entry(&gt->active_list, typeof(*tl), link);
 	}
 	mutex_unlock(&gt->mutex);

diff --git a/drivers/gpu/drm/i915/i915_reset.c b/drivers/gpu/drm/i915/i915_reset.c
index bd82f9b1043f..acf3c777e49d 100644
--- a/drivers/gpu/drm/i915/i915_reset.c
+++ b/drivers/gpu/drm/i915/i915_reset.c
@@ -856,7 +856,7 @@ bool i915_gem_unset_wedged(struct drm_i915_private *i915)
 	 * No more can be submitted until we reset the wedged bit.
 	 */
 	mutex_lock(&i915->gt.timelines.mutex);
-	list_for_each_entry(tl, &i915->gt.timelines.list, link) {
+	list_for_each_entry(tl, &i915->gt.timelines.active_list, link) {
 		struct i915_request *rq;
 		long timeout;

diff --git a/drivers/gpu/drm/i915/i915_timeline.c b/drivers/gpu/drm/i915/i915_timeline.c
index 34ffa6dca1b7..1c9794fe717e 100644
--- a/drivers/gpu/drm/i915/i915_timeline.c
+++ b/drivers/gpu/drm/i915/i915_timeline.c
@@ -120,7 +120,6 @@ int i915_timeline_init(struct drm_i915_private *i915,
 		       const char *name,
 		       struct i915_vma *hwsp)
 {
-	struct i915_gt_timelines *gt = &i915->gt.timelines;
 	void *vaddr;

 	/*
@@ -169,10 +168,6 @@ int i915_timeline_init(struct drm_i915_private *i915,

 	i915_syncmap_init(&timeline->sync);

-	mutex_lock(&gt->mutex);
-	list_add(&timeline->link, &gt->list);
-	mutex_unlock(&gt->mutex);
-
 	return 0;
 }

@@ -181,7 +176,7 @@ void i915_timelines_init(struct drm_i915_private *i915)
 	struct i915_gt_timelines *gt = &i915->gt.timelines;

 	mutex_init(&gt->mutex);
-	INIT_LIST_HEAD(&gt->list);
+	INIT_LIST_HEAD(&gt->active_list);

 	spin_lock_init(&gt->hwsp_lock);
 	INIT_LIST_HEAD(&gt->hwsp_free_list);
@@ -190,6 +185,24 @@ void i915_timelines_init(struct drm_i915_private *i915)
 	i915_gem_shrinker_taints_mutex(i915, &gt->mutex);
 }
+static void timeline_add_to_active(struct i915_timeline *tl)
+{
+	struct i915_gt_timelines *gt = &tl->i915->gt.timelines;
+
+	mutex_lock(&gt->mutex);
+	list_add(&tl->link, &gt->active_list);
+	mutex_unlock(&gt->mutex);
+}
+
+static void timeline_remove_from_active(struct i915_timeline *tl)
+{
+	struct i915_gt_timelines *gt = &tl->i915->gt.timelines;
+
+	mutex_lock(&gt->mutex);
+	list_del(&tl->link);
+	mutex_unlock(&gt->mutex);
+}
+
 /**
  * i915_timelines_park - called when the driver idles
  * @i915: the drm_i915_private device
@@ -206,7 +219,7 @@ void i915_timelines_park(struct drm_i915_private *i915)
 	struct i915_timeline *timeline;

 	mutex_lock(&gt->mutex);
-	list_for_each_entry(timeline, &gt->list, link) {
+	list_for_each_entry(timeline, &gt->active_list, link) {
 		/*
 		 * All known fences are completed so we can scrap
 		 * the current sync point tracking and start afresh,
@@ -220,16 +233,10 @@ void i915_timelines_park(struct drm_i915_private *i915)

 void i915_timeline_fini(struct i915_timeline *timeline)
 {
-	struct i915_gt_timelines *gt = &timeline->i915->gt.timelines;
-
 	GEM_BUG_ON(timeline->pin_count);
 	GEM_BUG_ON(!list_empty(&timeline->requests));
 	GEM_BUG_ON(i915_gem_active_isset(&timeline->barrier));

-	mutex_lock(&gt->mutex);
-	list_del(&timeline->link);
-	mutex_unlock(&gt->mutex);
-
 	i915_syncmap_free(&timeline->sync);
 	hwsp_free(timeline);

@@ -295,6 +302,8 @@ int i915_timeline_pin(struct i915_timeline *tl)
 		i915_ggtt_offset(tl->hwsp_ggtt) +
 		offset_in_page(tl->hwsp_offset);

+	timeline_add_to_active(tl);
+
 	return 0;

unpin:
@@ -308,6 +317,8 @@ void i915_timeline_unpin(struct i915_timeline *tl)
 	if (--tl->pin_count)
 		return;

+	timeline_remove_from_active(tl);
+
 	/*
 	 * Since this timeline is idle, all barriers upon which we were waiting
 	 * must also be complete and so we can discard the last used barriers
@@ -331,7 +342,7 @@ void i915_timelines_fini(struct drm_i915_private *i915)
 {
 	struct i915_gt_timelines *gt = &i915->gt.timelines;

-	GEM_BUG_ON(!list_empty(&gt->list));
+	GEM_BUG_ON(!list_empty(&gt->active_list));
 	GEM_BUG_ON(!list_empty(&gt->hwsp_free_list));

 	mutex_destroy(&gt->mutex);
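The core pattern of this patch — a timeline joins the global list on its first pin and leaves it on its last unpin, so list walkers only ever see timelines that can have work in flight — can be sketched outside the kernel. This is a hypothetical, minimal userspace model, not the driver code: the list helpers stand in for the kernel's `list.h`, and a bare flagless critical section stands in for `gt->mutex` (single-threaded here for illustration only).

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly-linked list, mimicking the kernel's list.h semantics. */
struct list_head { struct list_head *prev, *next; };

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* Global active list, analogous to gt->active_list in the patch. */
static struct list_head active_list = { &active_list, &active_list };

struct timeline {
	unsigned int pin_count;
	struct list_head link;
};

/* First pin makes the timeline visible to active-list walkers. */
void timeline_pin(struct timeline *tl)
{
	if (tl->pin_count++)
		return;
	list_add(&tl->link, &active_list);
}

/* Last unpin hides it again; idle timelines cost the walkers nothing. */
void timeline_unpin(struct timeline *tl)
{
	assert(tl->pin_count);
	if (--tl->pin_count)
		return;
	list_del(&tl->link);
}
```

The design point mirrors the move of `list_add`/`list_del` out of `i915_timeline_init`/`i915_timeline_fini` and into `i915_timeline_pin`/`i915_timeline_unpin`: membership tracks *activity* (pin count going 0→1 and 1→0) rather than *existence*, which is what lets `wait_for_timelines` and `i915_gem_unset_wedged` iterate only timelines that might hold requests.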