From patchwork Fri Jun 29 07:53:19 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10495691
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Fri, 29 Jun 2018 08:53:19 +0100
Message-Id: <20180629075348.27358-8-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180629075348.27358-1-chris@chris-wilson.co.uk>
References: <20180629075348.27358-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 08/37] drm/i915: Hold request reference for
 submission until retirement
List-Id: Intel graphics driver community testing & development

Currently the async submission backends (guc and execlists) hold an
extra reference to the requests being processed, as they are not
serialised with request retirement. If we instead prevent the request
from being dropped from the engine timeline until after submission has
finished processing it, we can use a single reference held for the
entire submission process (currently, it is held only for the
submission fence). By doing so we remove a few atomics from inside the
irqoff path, on the order of 200ns as measured by gem_syslatency,
bringing the impact of direct submission into line with the previous
tasklet implementation. The tradeoff is that, as we may postpone
retirement, we have to check for any residual requests after detecting
that the engines are idle.

v2: switch-to-kernel-context needs to be cognisant of the delayed
release on the engine->timeline again.
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_gem.c             | 39 +++++++++++++---
 drivers/gpu/drm/i915/i915_gem_context.c     |  2 +-
 drivers/gpu/drm/i915/i915_request.c         | 20 ++++----
 drivers/gpu/drm/i915/intel_engine_cs.c      | 51 +++++++++++++++------
 drivers/gpu/drm/i915/intel_guc_submission.c |  4 +-
 drivers/gpu/drm/i915/intel_lrc.c            | 10 +---
 drivers/gpu/drm/i915/intel_ringbuffer.h     |  4 +-
 7 files changed, 86 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index def8d7a78530..c13d5b78a02e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3789,15 +3789,42 @@ static int wait_for_timeline(struct i915_timeline *tl, unsigned int flags)
 
 static int wait_for_engines(struct drm_i915_private *i915)
 {
-	if (wait_for(intel_engines_are_idle(i915), I915_IDLE_ENGINES_TIMEOUT)) {
-		dev_err(i915->drm.dev,
-			"Failed to idle engines, declaring wedged!\n");
-		GEM_TRACE_DUMP();
-		i915_gem_set_wedged(i915);
-		return -EIO;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, i915, id) {
+		struct i915_request *rq, *rn;
+
+		if (wait_for(intel_engine_is_idle(engine),
+			     I915_IDLE_ENGINES_TIMEOUT)) {
+			dev_err(i915->drm.dev,
+				"Failed to idle %s engine, declaring wedged!\n",
+				engine->name);
+			goto set_wedged;
+		}
+
+		/*
+		 * Now that we know the engine is definitely idle; explicitly
+		 * retire all residual requests as they may have been skipped
+		 * by earlier calls to i915_retire_requests().
+		 */
+		list_for_each_entry_safe(rq, rn,
+					 &engine->timeline.requests, link) {
+			if (!intel_engine_retire_request(engine, rq)) {
+				dev_err(i915->drm.dev,
+					"Failed to retire %s engine, declaring wedged!\n",
+					engine->name);
+				goto set_wedged;
+			}
+		}
 	}
 
 	return 0;
+
+set_wedged:
+	GEM_TRACE_DUMP();
+	i915_gem_set_wedged(i915);
+	return -EIO;
 }
 
 int i915_gem_wait_for_idle(struct drm_i915_private *i915, unsigned int flags)
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index ccf463ab6562..8995c1a57c4f 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -650,7 +650,7 @@ static bool engine_has_kernel_context_barrier(struct intel_engine_cs *engine)
 		return true;
 
 	/* The engine is idle; check that it is idling in the kernel context. */
-	return engine->last_retired_context == ce;
+	return intel_engine_has_kernel_context(engine);
 }
 
 int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915)
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 9ff6d14a0fef..14bf0be6f994 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -347,17 +347,15 @@ static void free_capture_list(struct i915_request *request)
 static void __retire_engine_upto(struct intel_engine_cs *engine,
 				 struct i915_request *rq)
 {
+	struct list_head * const requests = &engine->timeline.requests;
 	struct i915_request *tmp;
 
 	if (list_empty(&rq->link))
 		return;
 
-	do {
-		tmp = list_first_entry(&engine->timeline.requests,
-				       typeof(*tmp), link);
-
-		intel_engine_retire_request(engine, tmp);
-	} while (tmp != rq);
+	do
+		tmp = list_first_entry(requests, typeof(*tmp), link);
+	while (intel_engine_retire_request(engine, tmp) && tmp != rq);
 }
 
 static void i915_request_retire(struct i915_request *request)
@@ -376,6 +374,8 @@ static void i915_request_retire(struct i915_request *request)
 
 	trace_i915_request_retire(request);
 
+	__retire_engine_upto(request->engine, request);
+
 	advance_ring(request);
 	free_capture_list(request);
 
@@ -414,8 +414,6 @@ static void i915_request_retire(struct i915_request *request)
 	atomic_dec_if_positive(&request->gem_context->ban_score);
 	intel_context_unpin(request->hw_context);
 
-	__retire_engine_upto(request->engine, request);
-
 	unreserve_gt(request->i915);
 
 	i915_sched_node_fini(request->i915, &request->sched);
@@ -722,8 +720,10 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
 		       rq->timeline->fence_context,
 		       timeline_get_seqno(rq->timeline));
 
-	/* We bump the ref for the fence chain */
-	i915_sw_fence_init(&i915_request_get(rq)->submit, submit_notify);
+	/* We bump the ref for the fence chain and for the submit backend. */
+	refcount_set(&rq->fence.refcount.refcount, 3);
+
+	i915_sw_fence_init(&rq->submit, submit_notify);
 	init_waitqueue_head(&rq->execute);
 
 	i915_sched_node_init(&rq->sched);
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 63f60ab908b4..457003311b74 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1038,11 +1038,11 @@ bool intel_engines_are_idle(struct drm_i915_private *dev_priv)
  * executed if the engine is already idle, is the kernel context
  * (#i915.kernel_context).
  */
-bool intel_engine_has_kernel_context(const struct intel_engine_cs *engine)
+bool intel_engine_has_kernel_context(struct intel_engine_cs *engine)
 {
 	const struct intel_context *kernel_context =
 		to_intel_context(engine->i915->kernel_context, engine);
-	struct i915_request *rq;
+	const struct intel_context *last;
 
 	lockdep_assert_held(&engine->i915->drm.struct_mutex);
 
@@ -1051,11 +1051,15 @@ bool intel_engine_has_kernel_context(const struct intel_engine_cs *engine)
 	 * the last request that remains in the timeline. When idle, it is
 	 * the last executed context as tracked by retirement.
 	 */
-	rq = __i915_gem_active_peek(&engine->timeline.last_request);
-	if (rq)
-		return rq->hw_context == kernel_context;
-	else
-		return engine->last_retired_context == kernel_context;
+	last = engine->last_retired_context;
+
+	spin_lock_irq(&engine->timeline.lock);
+	if (!list_empty(&engine->timeline.requests))
+		last = list_last_entry(&engine->timeline.requests,
+				       struct i915_request, link)->hw_context;
+	spin_unlock_irq(&engine->timeline.lock);
+
+	return last == kernel_context;
 }
 
 void intel_engines_reset_default_submission(struct drm_i915_private *i915)
@@ -1096,20 +1100,25 @@ void intel_engines_sanitize(struct drm_i915_private *i915)
  *
  * This request has been completed and is part of the chain being retired by
  * the caller, so drop any reference to it from the engine.
+ *
+ * Returns: true if the reference was dropped, false if it was still busy.
  */
-void intel_engine_retire_request(struct intel_engine_cs *engine,
+bool intel_engine_retire_request(struct intel_engine_cs *engine,
 				 struct i915_request *rq)
 {
-	GEM_TRACE("%s(%s) fence %llx:%d, global=%d, current %d\n",
-		  __func__, engine->name,
-		  rq->fence.context, rq->fence.seqno,
-		  rq->global_seqno,
-		  intel_engine_get_seqno(engine));
+	GEM_TRACE("%s: fence %llx:%d, global=%d, current %d, active?=%s\n",
+		  engine->name, rq->fence.context, rq->fence.seqno,
+		  rq->global_seqno, intel_engine_get_seqno(engine),
+		  yesno(port_request(engine->execlists.port) == rq));
 
 	lockdep_assert_held(&engine->i915->drm.struct_mutex);
 	GEM_BUG_ON(rq->engine != engine);
 	GEM_BUG_ON(!i915_request_completed(rq));
 
+	/* Don't drop the final ref until after the backend has finished */
+	if (port_request(engine->execlists.port) == rq)
+		return false;
+
 	local_irq_disable();
 
 	spin_lock(&engine->timeline.lock);
@@ -1141,6 +1150,19 @@ void intel_engine_retire_request(struct intel_engine_cs *engine,
 	if (engine->last_retired_context)
 		intel_context_unpin(engine->last_retired_context);
 	engine->last_retired_context = rq->hw_context;
+
+	i915_request_put(rq);
+	return true;
+}
+
+static void engine_retire_requests(struct intel_engine_cs *engine)
+{
+	struct i915_request *rq, *next;
+
+	list_for_each_entry_safe(rq, next, &engine->timeline.requests, link) {
+		if (WARN_ON(!intel_engine_retire_request(engine, rq)))
+			break;
+	}
 }
 
 /**
@@ -1173,6 +1195,7 @@ void intel_engines_park(struct drm_i915_private *i915)
 				"%s is not idle before parking\n",
 				engine->name);
 			intel_engine_dump(engine, &p, NULL);
+			engine->cancel_requests(engine);
 		}
 
 		/* Must be reset upon idling, or we may miss the busy wakeup. */
@@ -1181,6 +1204,8 @@ void intel_engines_park(struct drm_i915_private *i915)
 		if (engine->park)
 			engine->park(engine);
 
+		engine_retire_requests(engine);
+
 		if (engine->pinned_default_state) {
 			i915_gem_object_unpin_map(engine->default_state);
 			engine->pinned_default_state = NULL;
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index f3945258fe1b..05449f636d94 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -669,8 +669,7 @@ static void guc_submit(struct intel_engine_cs *engine)
 static void port_assign(struct execlist_port *port, struct i915_request *rq)
 {
 	GEM_BUG_ON(port_isset(port));
-
-	port_set(port, i915_request_get(rq));
+	port_set(port, rq);
 }
 
 static inline int rq_prio(const struct i915_request *rq)
@@ -793,7 +792,6 @@ static void guc_submission_tasklet(unsigned long data)
 	rq = port_request(port);
 	while (rq && i915_request_completed(rq)) {
 		trace_i915_request_out(rq);
-		i915_request_put(rq);
 
 		port = execlists_port_complete(execlists, port);
 		if (port_isset(port)) {
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ab89dabc2965..a6bc50d7195e 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -518,11 +518,7 @@ static bool can_merge_ctx(const struct intel_context *prev,
 static void port_assign(struct execlist_port *port, struct i915_request *rq)
 {
 	GEM_BUG_ON(rq == port_request(port));
-
-	if (port_isset(port))
-		i915_request_put(port_request(port));
-
-	port_set(port, port_pack(i915_request_get(rq), port_count(port)));
+	port_set(port, port_pack(rq, port_count(port)));
 }
 
 static void inject_preempt_context(struct intel_engine_cs *engine)
@@ -793,8 +789,6 @@ execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
 					INTEL_CONTEXT_SCHEDULE_OUT :
 					INTEL_CONTEXT_SCHEDULE_PREEMPTED);
 
-			i915_request_put(rq);
-
 			memset(port, 0, sizeof(*port));
 			port++;
 		}
@@ -1088,8 +1082,6 @@ static void process_csb(struct intel_engine_cs *engine)
 				execlists_context_schedule_out(rq,
 							       INTEL_CONTEXT_SCHEDULE_OUT);
 
-				i915_request_put(rq);
-
 				GEM_TRACE("%s completed ctx=%d\n",
 					  engine->name, port->context_id);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 54678bd06fd9..a1aff360d0ce 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -900,7 +900,7 @@ int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine);
 int intel_init_blt_ring_buffer(struct intel_engine_cs *engine);
 int intel_init_vebox_ring_buffer(struct intel_engine_cs *engine);
 
-void intel_engine_retire_request(struct intel_engine_cs *engine,
+bool intel_engine_retire_request(struct intel_engine_cs *engine,
 				 struct i915_request *rq);
 
 int intel_engine_stop_cs(struct intel_engine_cs *engine);
@@ -1077,7 +1077,7 @@ void intel_engines_sanitize(struct drm_i915_private *i915);
 bool intel_engine_is_idle(struct intel_engine_cs *engine);
 bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
 
-bool intel_engine_has_kernel_context(const struct intel_engine_cs *engine);
+bool intel_engine_has_kernel_context(struct intel_engine_cs *engine);
 void intel_engine_lost_context(struct intel_engine_cs *engine);
 
 void intel_engines_park(struct drm_i915_private *i915);