From patchwork Fri Dec 18 14:41:57 2015
From: Nick Hoath <nicholas.hoath@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter
Date: Fri, 18 Dec 2015 14:41:57 +0000
Message-Id: <1450449717-33485-1-git-send-email-nicholas.hoath@intel.com>
Subject: [Intel-gfx] [PATCH v9] drm/i915: Extend LRC pinning to cover GPU
 context writeback

Use the first retired request on a new context to unpin the old context.
This ensures that the hw context remains bound until it has been written
back to by the GPU. Now that the context is pinned until later in the
request/context lifecycle, it no longer needs to be pinned from
context_queue to retire_requests.

This fixes an issue with GuC submission where the GPU might not have
finished writing back the context before it is unpinned. This results
in a GPU hang.
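The scheme can be summarised with a minimal stand-alone sketch (an
illustration only, not code from this patch: the types and the
ctx_pin()/ctx_unpin() helpers are invented stand-ins for the driver's
real context-pinning API). In the patch itself the pin and dirty flag
are taken at submission in intel_logical_ring_advance_and_submit() and
dropped at retire time in intel_lr_context_complete_check().

/*
 * Simplified model of deferred LRC unpinning: a context stays pinned
 * ("dirty") until a request from a *different* context retires, which
 * proves the GPU has switched away and written the old context back.
 */
#include <stdbool.h>
#include <stdio.h>

struct ctx {
	int pin_count;
	bool dirty;	/* pinned awaiting GPU context writeback */
};

static void ctx_pin(struct ctx *c)   { c->pin_count++; }	/* stand-in */
static void ctx_unpin(struct ctx *c) { c->pin_count--; }	/* stand-in */

static struct ctx *last_context;	/* models ring->lrc_last_context */

/* On submission: take the writeback pin once per dirty period. */
static void submit(struct ctx *c)
{
	if (!c->dirty) {
		ctx_pin(c);
		c->dirty = true;
	}
}

/* On request retire: unpin the previous context if we switched away. */
static void retire(struct ctx *c)
{
	if (last_context && last_context != c && last_context->dirty) {
		ctx_unpin(last_context);
		last_context->dirty = false;
	}
	last_context = c;
}

int main(void)
{
	struct ctx a = {0}, b = {0};

	submit(&a);	/* a pinned and marked dirty */
	retire(&a);	/* same context retiring: a stays pinned */
	submit(&b);
	retire(&b);	/* b retires: a is now known written back */
	printf("a: pin=%d dirty=%d\n", a.pin_count, a.dirty);	/* 0 0 */
	printf("b: pin=%d dirty=%d\n", b.pin_count, b.dirty);	/* 1 1 */
	return 0;
}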
v2: Moved the new pin to cover GuC submission (Alex Dai)
    Moved the new unpin to request_retire to fix coverage leak
v3: Added switch to default context if freeing a still pinned context
    just in case the hw was actually still using it
v4: Unwrapped context unpin to allow calling without a request
v5: Only create a switch to idle context if the ring doesn't
    already have a request pending on it (Alex Dai)
    Rename unsaved to dirty to avoid double negatives (Dave Gordon)
    Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
    Split out per engine cleanup from context_free as it was getting
    unwieldy
    Corrected locking (Dave Gordon)
v6: Removed some bikeshedding (Mika Kuoppala)
    Added explanation of the GuC hang that this fixes (Daniel Vetter)
v7: Removed extra per request pinning from ring reset code (Alex Dai)
    Added forced ring unpin/clean in error case in context free (Alex Dai)
v8: Renamed lrc specific last_context to lrc_last_context as there
    were some reset cases where the codepaths leaked (Mika Kuoppala)
    NULL'd last_context in reset case - there was a pointer leak
    if someone did reset->close context.
v9: Rebase over "Fix context/engine cleanup order"

Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Issue: VIZ-4277
Cc: Daniel Vetter
Cc: David Gordon
Cc: Chris Wilson
Cc: Alex Dai
Cc: Mika Kuoppala
---
 drivers/gpu/drm/i915/i915_drv.h         |   1 +
 drivers/gpu/drm/i915/i915_gem.c         |   7 +-
 drivers/gpu/drm/i915/intel_lrc.c        | 145 +++++++++++++++++++++++++------
 drivers/gpu/drm/i915/intel_lrc.h        |   1 +
 drivers/gpu/drm/i915/intel_ringbuffer.h |   1 +
 5 files changed, 123 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 27bb401..2f6f411 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -887,6 +887,7 @@ struct intel_context {
 	struct {
 		struct drm_i915_gem_object *state;
 		struct intel_ringbuffer *ringbuf;
+		bool dirty;
 		int pin_count;
 	} engine[I915_NUM_RINGS];
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 517676a..69e904f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1362,6 +1362,9 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 {
 	trace_i915_gem_request_retire(request);
 
+	if (i915.enable_execlists)
+		intel_lr_context_complete_check(request);
+
 	/* We know the GPU must have read the request to have
 	 * sent us the seqno + interrupt, so use the position
 	 * of tail of the request to update the last known position
@@ -2772,10 +2775,6 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
 					   struct drm_i915_gem_request,
 					   execlist_link);
 		list_del(&submit_req->execlist_link);
-
-		if (submit_req->ctx != ring->default_context)
-			intel_lr_context_unpin(submit_req);
-
 		i915_gem_request_unreference(submit_req);
 	}
 	spin_unlock_irq(&ring->execlist_lock);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index d542a8d..01063f7 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -570,9 +570,6 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
 	struct drm_i915_gem_request *cursor;
 	int num_elements = 0;
 
-	if (request->ctx != ring->default_context)
-		intel_lr_context_pin(request);
-
 	i915_gem_request_reference(request);
 
 	spin_lock_irq(&ring->execlist_lock);
@@ -736,6 +733,13 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	if (intel_ring_stopped(ring))
 		return;
 
+	if (request->ctx != ring->default_context) {
+		if (!request->ctx->engine[ring->id].dirty) {
+			intel_lr_context_pin(request);
+			request->ctx->engine[ring->id].dirty = true;
+		}
+	}
+
 	if (dev_priv->guc.execbuf_client)
 		i915_guc_submit(dev_priv->guc.execbuf_client, request);
 	else
@@ -962,12 +966,6 @@ void intel_execlists_retire_requests(struct intel_engine_cs *ring)
 	spin_unlock_irq(&ring->execlist_lock);
 
 	list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
-		struct intel_context *ctx = req->ctx;
-		struct drm_i915_gem_object *ctx_obj =
-				ctx->engine[ring->id].state;
-
-		if (ctx_obj && (ctx != ring->default_context))
-			intel_lr_context_unpin(req);
 		list_del(&req->execlist_link);
 		i915_gem_request_unreference(req);
 	}
@@ -1062,21 +1060,39 @@ reset_pin_count:
 	return ret;
 }
 
-void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
+static void __intel_lr_context_unpin(struct intel_engine_cs *ring,
+		struct intel_context *ctx)
 {
-	struct intel_engine_cs *ring = rq->ring;
-	struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
-	struct intel_ringbuffer *ringbuf = rq->ringbuf;
-
+	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
+	struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
 	if (ctx_obj) {
 		WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
-		if (--rq->ctx->engine[ring->id].pin_count == 0) {
+		if (--ctx->engine[ring->id].pin_count == 0) {
 			intel_unpin_ringbuffer_obj(ringbuf);
 			i915_gem_object_ggtt_unpin(ctx_obj);
 		}
 	}
 }
 
+void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
+{
+	__intel_lr_context_unpin(rq->ring, rq->ctx);
+}
+
+void intel_lr_context_complete_check(struct drm_i915_gem_request *req)
+{
+	struct intel_engine_cs *ring = req->ring;
+
+	if (ring->lrc_last_context && ring->lrc_last_context != req->ctx &&
+			ring->lrc_last_context->engine[ring->id].dirty) {
+		__intel_lr_context_unpin(
+				ring,
+				ring->lrc_last_context);
+		ring->lrc_last_context->engine[ring->id].dirty = false;
+	}
+	ring->lrc_last_context = req->ctx;
+}
+
 static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
 	int ret, i;
@@ -2350,6 +2366,80 @@ populate_lr_context(struct intel_context *ctx, struct drm_i915_gem_object *ctx_o
 }
 
 /**
+ * intel_lr_context_clean_ring() - clean the ring-specific parts of an LRC
+ * @ctx: the LR context being freed.
+ * @ctx_obj: the hw context being unreferenced
+ * @ringbuf: the ringbuf being freed
+ *
+ * Take care of cleaning up the per-engine backing
+ * objects and the logical ringbuffer.
+ */
+static void
+intel_lr_context_clean_ring(struct intel_context *ctx,
+			    struct drm_i915_gem_object *ctx_obj,
+			    struct intel_ringbuffer *ringbuf)
+{
+	int ret;
+	struct intel_engine_cs *ring = ringbuf->ring;
+
+	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
+
+	if (ctx->is_global_default) {
+		/*
+		 * The HWSP is part of the default context
+		 * object in LRC mode.
+		 */
+		lrc_teardown_hardware_status_page(ring);
+		intel_unpin_ringbuffer_obj(ringbuf);
+		i915_gem_object_ggtt_unpin(ctx_obj);
+	}
+
+	if (ctx->engine[ring->id].dirty) {
+		struct drm_i915_gem_request *req = NULL;
+
+		/*
+		 * If there is already a request pending on
+		 * this ring, wait for that to complete,
+		 * otherwise create a switch to idle request.
+		 */
+		if (list_empty(&ring->request_list)) {
+			int ret;
+
+			ret = i915_gem_request_alloc(
+					ring,
+					ring->default_context,
+					&req);
+			if (!ret)
+				i915_add_request(req);
+			else
+				DRM_DEBUG("Failed to ensure context saved");
+		} else {
+			req = list_first_entry(
+					&ring->request_list,
+					typeof(*req), list);
+		}
+		if (req) {
+			ret = i915_wait_request(req);
+			if (ret != 0) {
+				/*
+				 * If we get here, there's probably been a ring
+				 * reset, so we just clean up the dirty flag &
+				 * pin count.
+				 */
+				ctx->engine[ring->id].dirty = false;
+				__intel_lr_context_unpin(
+						ring,
+						ctx);
+			}
+		}
+	}
+
+	WARN_ON(ctx->engine[ring->id].pin_count);
+	intel_ringbuffer_free(ringbuf);
+	drm_gem_object_unreference(&ctx_obj->base);
+}
+
+/**
  * intel_lr_context_free() - free the LRC specific bits of a context
  * @ctx: the LR context to free.
  *
@@ -2368,19 +2458,9 @@ void intel_lr_context_free(struct intel_context *ctx)
 		if (!ctx_obj)
 			continue;
 
-		if (ctx->is_global_default) {
-			/*
-			 * The HWSP is part of the default context
-			 * object in LRC mode.
-			 */
-			lrc_teardown_hardware_status_page(ringbuf->ring);
-			intel_unpin_ringbuffer_obj(ringbuf);
-			i915_gem_object_ggtt_unpin(ctx_obj);
-		}
-
-		WARN_ON(ctx->engine[i].pin_count);
-		intel_ringbuffer_free(ringbuf);
-		drm_gem_object_unreference(&ctx_obj->base);
+		intel_lr_context_clean_ring(ctx,
+					    ctx_obj,
+					    ringbuf);
 	}
 }
 
@@ -2562,5 +2642,14 @@ void intel_lr_context_reset(struct drm_device *dev,
 
 		ringbuf->head = 0;
 		ringbuf->tail = 0;
+
+		if (ctx->engine[ring->id].dirty) {
+			__intel_lr_context_unpin(
+					ring,
+					ctx);
+			ctx->engine[ring->id].dirty = false;
+			if (ring->lrc_last_context == ctx)
+				ring->lrc_last_context = NULL;
+		}
 	}
 }
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 0b821b9..fd1b6b4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -91,6 +91,7 @@ void intel_lr_context_reset(struct drm_device *dev,
 			struct intel_context *ctx);
 uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
 				     struct intel_engine_cs *ring);
+void intel_lr_context_complete_check(struct drm_i915_gem_request *req);
 
 /* Execlists */
 int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 49574ff..9bd98db 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -275,6 +275,7 @@ struct intel_engine_cs {
 				      u32 flush_domains);
 	int (*emit_bb_start)(struct drm_i915_gem_request *req,
 			     u64 offset, unsigned dispatch_flags);
+	struct intel_context *lrc_last_context;
 
 	/**
 	 * List of objects currently involved in rendering from the