From patchwork Wed Jan 20 13:40:55 2016
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 8072091
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org
Date: Wed, 20 Jan 2016 13:40:55 +0000
Message-Id: <1453297257-4707-1-git-send-email-tvrtko.ursulin@linux.intel.com>
X-Mailer: git-send-email 1.9.1
Subject: [Intel-gfx] [PATCH 1/3] drm/i915: Make LRC (un)pinning work on context and engine

From: Tvrtko Ursulin

Previously intel_lr_context_(un)pin operated on requests, which conflicted
with their names. Making them take a context and an engine instead matches
the names to what the functions actually do and also makes future fixes
possible.
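For quick reference, the interface change this boils down to (lifted from the
diff below) is:

    /* Before: pin/unpin took the request and dug the context and engine
     * out of it. */
    static int intel_lr_context_pin(struct drm_i915_gem_request *rq);
    void intel_lr_context_unpin(struct drm_i915_gem_request *rq);

    /* After: callers pass the context and the engine explicitly. */
    static int intel_lr_context_pin(struct intel_context *ctx,
                                    struct intel_engine_cs *engine);
    void intel_lr_context_unpin(struct intel_context *ctx,
                                struct intel_engine_cs *engine);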
Signed-off-by: Tvrtko Ursulin
Cc: Chris Wilson
Cc: Nick Hoath
---
 drivers/gpu/drm/i915/i915_gem.c  |  2 +-
 drivers/gpu/drm/i915/intel_lrc.c | 48 ++++++++++++++++++++--------------------
 drivers/gpu/drm/i915/intel_lrc.h |  3 ++-
 3 files changed, 27 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6b0102da859c..a752b00d4ff3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2681,7 +2681,7 @@ void i915_gem_request_free(struct kref *req_ref)
 	if (ctx) {
 		if (i915.enable_execlists) {
 			if (ctx != req->ring->default_context)
-				intel_lr_context_unpin(req);
+				intel_lr_context_unpin(ctx, req->ring);
 		}
 
 		i915_gem_context_unreference(ctx);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index faaf49077fea..48ca51a36948 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -225,7 +225,8 @@ enum {
 #define GEN8_CTX_ID_SHIFT 32
 #define CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT 0x17
 
-static int intel_lr_context_pin(struct drm_i915_gem_request *rq);
+static int intel_lr_context_pin(struct intel_context *ctx,
+				struct intel_engine_cs *engine);
 static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
 		struct drm_i915_gem_object *default_ctx_obj);
 
@@ -599,7 +600,7 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
 	int num_elements = 0;
 
 	if (request->ctx != ring->default_context)
-		intel_lr_context_pin(request);
+		intel_lr_context_pin(request->ctx, ring);
 
 	i915_gem_request_reference(request);
 
@@ -691,7 +692,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 	request->ringbuf = request->ctx->engine[request->ring->id].ringbuf;
 
 	if (request->ctx != request->ring->default_context) {
-		ret = intel_lr_context_pin(request);
+		ret = intel_lr_context_pin(request->ctx, request->ring);
 		if (ret)
 			return ret;
 	}
@@ -1007,7 +1008,7 @@ void intel_execlists_retire_requests(struct intel_engine_cs *ring)
 				ctx->engine[ring->id].state;
 
 		if (ctx_obj && (ctx != ring->default_context))
-			intel_lr_context_unpin(req);
+			intel_lr_context_unpin(ctx, ring);
 		list_del(&req->execlist_link);
 		i915_gem_request_unreference(req);
 	}
@@ -1051,8 +1052,8 @@ int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)
 	return 0;
 }
 
-static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
-				   struct intel_context *ctx)
+static int intel_lr_context_do_pin(struct intel_context *ctx,
+				   struct intel_engine_cs *ring)
 {
 	struct drm_device *dev = ring->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
@@ -1095,41 +1096,40 @@ unpin_ctx_obj:
 	return ret;
 }
 
-static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
+static int intel_lr_context_pin(struct intel_context *ctx,
+				struct intel_engine_cs *engine)
 {
 	int ret = 0;
-	struct intel_engine_cs *ring = rq->ring;
 
-	if (rq->ctx->engine[ring->id].pin_count++ == 0) {
-		ret = intel_lr_context_do_pin(ring, rq->ctx);
+	if (ctx->engine[engine->id].pin_count++ == 0) {
+		ret = intel_lr_context_do_pin(ctx, engine);
 		if (ret)
 			goto reset_pin_count;
 	}
 	return ret;
 
 reset_pin_count:
-	rq->ctx->engine[ring->id].pin_count = 0;
+	ctx->engine[engine->id].pin_count = 0;
 	return ret;
 }
 
-void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
+void intel_lr_context_unpin(struct intel_context *ctx,
+			    struct intel_engine_cs *engine)
 {
-	struct intel_engine_cs *ring = rq->ring;
-	struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
-	struct intel_ringbuffer *ringbuf = rq->ringbuf;
+	struct drm_i915_gem_object *ctx_obj = ctx->engine[engine->id].state;
 
-	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
+	WARN_ON(!mutex_is_locked(&engine->dev->struct_mutex));
 
-	if (!ctx_obj)
+	if (WARN_ON_ONCE(!ctx_obj))
 		return;
 
-	if (--rq->ctx->engine[ring->id].pin_count == 0) {
-		kunmap(kmap_to_page(rq->ctx->engine[ring->id].lrc_reg_state));
-		intel_unpin_ringbuffer_obj(ringbuf);
+	if (--ctx->engine[engine->id].pin_count == 0) {
+		kunmap(kmap_to_page(ctx->engine[engine->id].lrc_reg_state));
+		intel_unpin_ringbuffer_obj(ctx->engine[engine->id].ringbuf);
 		i915_gem_object_ggtt_unpin(ctx_obj);
-		rq->ctx->engine[ring->id].lrc_vma = NULL;
-		rq->ctx->engine[ring->id].lrc_desc = 0;
-		rq->ctx->engine[ring->id].lrc_reg_state = NULL;
+		ctx->engine[engine->id].lrc_vma = NULL;
+		ctx->engine[engine->id].lrc_desc = 0;
+		ctx->engine[engine->id].lrc_reg_state = NULL;
 	}
 }
 
@@ -2032,7 +2032,7 @@ logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
 		goto error;
 
 	/* As this is the default context, always pin it */
-	ret = intel_lr_context_do_pin(ring, ring->default_context);
+	ret = intel_lr_context_do_pin(ring->default_context, ring);
 	if (ret) {
 		DRM_ERROR(
 			"Failed to pin and map ringbuffer %s: %d\n",
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 49af638f6213..e6cda3e225d0 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -101,7 +101,8 @@ void intel_lr_context_free(struct intel_context *ctx);
 uint32_t intel_lr_context_size(struct intel_engine_cs *ring);
 int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 				    struct intel_engine_cs *ring);
-void intel_lr_context_unpin(struct drm_i915_gem_request *req);
+void intel_lr_context_unpin(struct intel_context *ctx,
+			    struct intel_engine_cs *engine);
 void intel_lr_context_reset(struct drm_device *dev,
 			struct intel_context *ctx);
 uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
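
Usage note (an illustrative sketch, not part of the patch): with the new
signatures a caller that pins a non-default context for an engine is expected
to later unpin the same (ctx, engine) pair, since pin_count is tracked
per engine in ctx->engine[engine->id]. The pairing preserved by this patch
looks roughly like:

    /* On submission (execlists_context_queue): take a per-engine pin. */
    if (request->ctx != ring->default_context)
            intel_lr_context_pin(request->ctx, ring);

    /* On retire/free (intel_execlists_retire_requests,
     * i915_gem_request_free): drop the matching per-engine pin. */
    if (ctx != ring->default_context)
            intel_lr_context_unpin(ctx, ring);

The default context is the exception: it is pinned once in logical_ring_init()
via intel_lr_context_do_pin() and is excluded from the refcounted path by the
checks above.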