From patchwork Mon Aug  6 08:30:14 2018
X-Patchwork-Submitter: Chris Wilson <chris@chris-wilson.co.uk>
X-Patchwork-Id: 10556671
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Date: Mon, 6 Aug 2018 09:30:14 +0100
Message-Id: <20180806083017.32215-3-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180806083017.32215-1-chris@chris-wilson.co.uk>
References: <20180806083017.32215-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 3/6] drm/i915: Combine multiple internal plists into the same i915_priolist bucket

As we are about to allow ourselves to slightly bump the user priority
into a few different sublevels, pack those internal priority lists into
the same i915_priolist to keep the rbtree compact and avoid having to
allocate the default user priority even after the internal bumping. The
downside to having a requests[] array rather than a node per active
list is that we then have to walk over the empty higher-priority lists.
To compensate, we track the active buckets and use a small bitmap to
skip over any inactive ones.
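The bucket-plus-bitmap idea is easy to sketch outside the driver. In the
userspace sketch below every name (bucket, add_request, consume,
NR_SUBLEVELS) is invented for the illustration and is not i915 code; only
the shape of the idea follows the patch: one node keeps a list per
internal sublevel plus a "used" bitmap, so the consumer can jump straight
to occupied sublevels with ffs() instead of scanning them all.

/*
 * Userspace sketch of the bucket + bitmap idea (illustrative names,
 * not the i915 code).  One node carries an array of per-sublevel
 * request lists plus a bitmap of occupied slots, so a consumer can
 * use ffs() to skip empty sublevels.
 */
#include <stdio.h>
#include <strings.h>		/* ffs() */

#define NR_SUBLEVELS	4	/* illustrative; slot 0 is the highest */

struct bucket {
	int requests[NR_SUBLEVELS][8];	/* stand-in for the list_head chains */
	int count[NR_SUBLEVELS];
	unsigned int used;		/* bit i set => sublevel i non-empty */
};

static void add_request(struct bucket *b, int sublevel, int seqno)
{
	b->requests[sublevel][b->count[sublevel]++] = seqno;
	b->used |= 1u << sublevel;	/* mirrors p->used |= BIT(idx) */
}

static void consume(struct bucket *b)
{
	int i;

	/* ffs() returns the 1-based index of the lowest set bit, or 0 */
	while ((i = ffs(b->used))) {
		i -= 1;
		for (int j = 0; j < b->count[i]; j++)
			printf("submit sublevel %d, request %d\n",
			       i, b->requests[i][j]);
		b->count[i] = 0;
		b->used &= ~(1u << i);	/* mirrors the _consume macro */
	}
}

int main(void)
{
	struct bucket b = { 0 };

	add_request(&b, 0, 100);	/* bumped to the top sublevel */
	add_request(&b, 3, 101);	/* default, lowest sublevel */
	consume(&b);			/* sublevels 1 and 2 are never touched */
	return 0;
}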
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_engine_cs.c      |  6 +-
 drivers/gpu/drm/i915/intel_guc_submission.c | 12 ++-
 drivers/gpu/drm/i915/intel_lrc.c            | 87 ++++++++++++++-------
 drivers/gpu/drm/i915/intel_ringbuffer.h     | 13 ++-
 4 files changed, 80 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 67c4fc5d737c..9fb479068002 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1530,10 +1530,10 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 		count = 0;
 		drm_printf(m, "\t\tQueue priority: %d\n", execlists->queue_priority);
 		for (rb = rb_first_cached(&execlists->queue); rb; rb = rb_next(rb)) {
-			struct i915_priolist *p =
-				rb_entry(rb, typeof(*p), node);
+			struct i915_priolist *p = rb_entry(rb, typeof(*p), node);
+			int i;
 
-			list_for_each_entry(rq, &p->requests, sched.link) {
+			priolist_for_each_request(rq, p, i) {
 				if (count++ < MAX_REQUESTS_TO_SHOW - 1)
 					print_request(m, rq, "\t\tQ ");
 				else
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index 195adbd0ebf7..3c8ebc1c5828 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -719,30 +719,28 @@ static bool __guc_dequeue(struct intel_engine_cs *engine)
 	while ((rb = rb_first_cached(&execlists->queue))) {
 		struct i915_priolist *p = to_priolist(rb);
 		struct i915_request *rq, *rn;
+		int i;
 
-		list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
+		priolist_for_each_request_consume(rq, rn, p, i) {
 			if (last && rq->hw_context != last->hw_context) {
-				if (port == last_port) {
-					__list_del_many(&p->requests,
-							&rq->sched.link);
+				if (port == last_port)
 					goto done;
-				}
 
 				if (submit)
 					port_assign(port, last);
 				port++;
 			}
 
-			INIT_LIST_HEAD(&rq->sched.link);
+			list_del_init(&rq->sched.link);
 
 			__i915_request_submit(rq);
 			trace_i915_request_in(rq, port_index(port, execlists));
+
 			last = rq;
 			submit = true;
 		}
 
 		rb_erase_cached(&p->node, &execlists->queue);
-		INIT_LIST_HEAD(&p->requests);
 		if (p->priority != I915_PRIORITY_NORMAL)
 			kmem_cache_free(engine->i915->priorities, p);
 	}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index b0be180c6294..a36b12f8277d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -259,14 +259,49 @@ intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
 	ce->lrc_desc = desc;
 }
 
-static struct i915_priolist *
+static void assert_priolists(struct intel_engine_execlists * const execlists,
+			     int queue_priority)
+{
+	struct rb_node *rb;
+	int last_prio, i;
+
+	if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
+		return;
+
+	GEM_BUG_ON(rb_first_cached(&execlists->queue) !=
+		   rb_first(&execlists->queue.rb_root));
+
+	last_prio = (queue_priority >> I915_USER_PRIORITY_SHIFT) + 1;
+	for (rb = rb_first_cached(&execlists->queue); rb; rb = rb_next(rb)) {
+		struct i915_priolist *p = to_priolist(rb);
+
+		GEM_BUG_ON(p->priority >= last_prio);
+		last_prio = p->priority;
+
+		GEM_BUG_ON(!p->used);
+		for (i = 0; i < ARRAY_SIZE(p->requests); i++) {
+			if (list_empty(&p->requests[i]))
+				continue;
+
+			GEM_BUG_ON(!(p->used & BIT(i)));
+		}
+	}
+}
+
+static struct list_head *
 lookup_priolist(struct intel_engine_cs *engine, int prio)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct i915_priolist *p;
 	struct rb_node **parent, *rb;
 	bool first = true;
+	int idx, i;
+
+	assert_priolists(execlists, INT_MAX);
+	/* buckets sorted from highest [in slot 0] to lowest priority */
+	idx = I915_PRIORITY_COUNT - (prio & ~I915_PRIORITY_MASK) - 1;
+	prio >>= I915_USER_PRIORITY_SHIFT;
 
 	if (unlikely(execlists->no_priolist))
 		prio = I915_PRIORITY_NORMAL;
 
@@ -283,7 +318,7 @@ lookup_priolist(struct intel_engine_cs *engine, int prio)
 			parent = &rb->rb_right;
 			first = false;
 		} else {
-			return p;
+			goto out;
 		}
 	}
 
@@ -309,11 +344,15 @@ lookup_priolist(struct intel_engine_cs *engine, int prio)
 	}
 
 	p->priority = prio;
-	INIT_LIST_HEAD(&p->requests);
+	for (i = 0; i < ARRAY_SIZE(p->requests); i++)
+		INIT_LIST_HEAD(&p->requests[i]);
 	rb_link_node(&p->node, rb, parent);
 	rb_insert_color_cached(&p->node, &execlists->queue, first);
+	p->used = 0;
 
-	return p;
+out:
+	p->used |= BIT(idx);
+	return &p->requests[idx];
 }
 
 static void unwind_wa_tail(struct i915_request *rq)
@@ -325,7 +364,7 @@ static void unwind_wa_tail(struct i915_request *rq)
 static void __unwind_incomplete_requests(struct intel_engine_cs *engine)
 {
 	struct i915_request *rq, *rn;
-	struct i915_priolist *uninitialized_var(p);
+	struct list_head *uninitialized_var(pl);
 	int last_prio = I915_PRIORITY_INVALID;
 
 	lockdep_assert_held(&engine->timeline.lock);
@@ -342,11 +381,10 @@ static void __unwind_incomplete_requests(struct intel_engine_cs *engine)
 		GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
 		if (rq_prio(rq) != last_prio) {
 			last_prio = rq_prio(rq);
-			p = lookup_priolist(engine, last_prio);
+			pl = lookup_priolist(engine, last_prio);
 		}
 
-		GEM_BUG_ON(p->priority != rq_prio(rq));
-		list_add(&rq->sched.link, &p->requests);
+		list_add(&rq->sched.link, pl);
 	}
 }
 
@@ -674,8 +712,9 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	while ((rb = rb_first_cached(&execlists->queue))) {
 		struct i915_priolist *p = to_priolist(rb);
 		struct i915_request *rq, *rn;
+		int i;
 
-		list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
+		priolist_for_each_request_consume(rq, rn, p, i) {
 			/*
 			 * Can we combine this request with the current port?
 			 * It has to be the same context/ringbuffer and not
@@ -694,11 +733,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			 * combine this request with the last, then we
 			 * are done.
 			 */
-			if (port == last_port) {
-				__list_del_many(&p->requests,
-						&rq->sched.link);
+			if (port == last_port)
 				goto done;
-			}
 
 			/*
 			 * If GVT overrides us we only ever submit
@@ -708,11 +744,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			 * request) to the second port.
 			 */
 			if (ctx_single_port_submission(last->hw_context) ||
-			    ctx_single_port_submission(rq->hw_context)) {
-				__list_del_many(&p->requests,
-						&rq->sched.link);
+			    ctx_single_port_submission(rq->hw_context))
 				goto done;
-			}
 
 			GEM_BUG_ON(last->hw_context == rq->hw_context);
 
@@ -723,15 +756,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				GEM_BUG_ON(port_isset(port));
 			}
 
-			INIT_LIST_HEAD(&rq->sched.link);
+			list_del_init(&rq->sched.link);
+
 			__i915_request_submit(rq);
 			trace_i915_request_in(rq, port_index(port, execlists));
+
 			last = rq;
 			submit = true;
 		}
 
 		rb_erase_cached(&p->node, &execlists->queue);
-		INIT_LIST_HEAD(&p->requests);
 		if (p->priority != I915_PRIORITY_NORMAL)
 			kmem_cache_free(engine->i915->priorities, p);
 	}
@@ -755,6 +789,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	 */
 	execlists->queue_priority =
 		port != execlists->port ? rq_prio(last) : INT_MIN;
+	assert_priolists(execlists, execlists->queue_priority);
 
 	if (submit) {
 		port_assign(port, last);
@@ -866,16 +901,16 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
 	/* Flush the queued requests to the timeline list (for retiring). */
 	while ((rb = rb_first_cached(&execlists->queue))) {
 		struct i915_priolist *p = to_priolist(rb);
+		int i;
 
-		list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
-			INIT_LIST_HEAD(&rq->sched.link);
+		priolist_for_each_request_consume(rq, rn, p, i) {
+			list_del_init(&rq->sched.link);
 
 			dma_fence_set_error(&rq->fence, -EIO);
 			__i915_request_submit(rq);
 		}
 
 		rb_erase_cached(&p->node, &execlists->queue);
-		INIT_LIST_HEAD(&p->requests);
 		if (p->priority != I915_PRIORITY_NORMAL)
 			kmem_cache_free(engine->i915->priorities, p);
 	}
@@ -1081,8 +1116,7 @@ static void queue_request(struct intel_engine_cs *engine,
 			  struct i915_sched_node *node,
 			  int prio)
 {
-	list_add_tail(&node->link,
-		      &lookup_priolist(engine, prio)->requests);
+	list_add_tail(&node->link, lookup_priolist(engine, prio));
 }
 
 static void __update_queue(struct intel_engine_cs *engine, int prio)
@@ -1152,7 +1186,7 @@ sched_lock_engine(struct i915_sched_node *node, struct intel_engine_cs *locked)
 static void execlists_schedule(struct i915_request *request,
 			       const struct i915_sched_attr *attr)
 {
-	struct i915_priolist *uninitialized_var(pl);
+	struct list_head *uninitialized_var(pl);
 	struct intel_engine_cs *engine, *last;
 	struct i915_dependency *dep, *p;
 	struct i915_dependency stack;
@@ -1247,8 +1281,7 @@ static void execlists_schedule(struct i915_request *request,
 			pl = lookup_priolist(engine, prio);
 			last = engine;
 		}
-		GEM_BUG_ON(pl->priority != prio);
-		list_move_tail(&node->link, &pl->requests);
+		list_move_tail(&node->link, pl);
 	}
 
 	if (prio > engine->execlists.queue_priority &&
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 57f3787ed6ec..bfcbf4317745 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -188,11 +188,22 @@ enum intel_engine_id {
 };
 
 struct i915_priolist {
+	struct list_head requests[I915_PRIORITY_COUNT];
 	struct rb_node node;
-	struct list_head requests;
+	unsigned long used;
 	int priority;
 };
 
+#define priolist_for_each_request(it, plist, idx) \
+	for (idx = 0; idx < ARRAY_SIZE((plist)->requests); idx++) \
+		list_for_each_entry(it, &(plist)->requests[idx], sched.link)
+
+#define priolist_for_each_request_consume(it, n, plist, idx) \
+	for (; (idx = ffs((plist)->used)); (plist)->used &= ~BIT(idx - 1)) \
+		list_for_each_entry_safe(it, n, \
+					 &(plist)->requests[idx - 1], \
+					 sched.link)
+
 struct st_preempt_hang {
 	struct completion completion;
 	bool inject_hang;
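For reference, the new lookup_priolist() splits an effective priority into
an rbtree key (the user priority) and a bucket slot (the internal bump).
The standalone snippet below only walks that arithmetic; the SHIFT/COUNT/MASK
values are assumptions invented for the example (the real I915_* definitions
land elsewhere in this series), and only the relationship between them is
taken from the patch.

/*
 * Worked example of the index arithmetic in lookup_priolist().  The
 * USER_PRIORITY_SHIFT / PRIORITY_COUNT / PRIORITY_MASK values below are
 * assumptions for illustration only, not the i915 constants.
 */
#include <stdio.h>

#define USER_PRIORITY_SHIFT	2	/* assumed for the example */
#define PRIORITY_COUNT		(1 << USER_PRIORITY_SHIFT)
#define PRIORITY_MASK		(~(PRIORITY_COUNT - 1))

int main(void)
{
	int samples[] = { 0, 3, 4, 7 };	/* effective (shifted + bumped) priorities */

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		int prio = samples[i];
		/* same arithmetic as the patch: slot 0 is the highest sublevel */
		int idx = PRIORITY_COUNT - (prio & ~PRIORITY_MASK) - 1;
		int key = prio >> USER_PRIORITY_SHIFT;

		printf("effective %d -> rbtree key %d, bucket slot %d\n",
		       prio, key, idx);
	}
	return 0;
}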