From patchwork Sun Apr 24 18:10:12 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 8920561
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Sun, 24 Apr 2016 19:10:12 +0100
Message-Id: <1461521419-18086-14-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1461521419-18086-1-git-send-email-chris@chris-wilson.co.uk>
References: <1461521419-18086-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH v3 14/21] drm/i915: Refactor execlists default context pinning

Refactor pinning and unpinning of contexts, such that the default
context for an engine is pinned during initialisation and unpinned
during teardown (pinning of the context handles the reference
counting). Thus we can eliminate the special case handling of the
default context that was required to mask that it was not being
pinned normally.

v2: Rebalance context_queue after rebasing.
v3: Rebase to -nightly (not 40 patches in)

Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Reviewed-by: Tvrtko Ursulin
Reviewed-by: Mika Kuoppala
---
 drivers/gpu/drm/i915/i915_debugfs.c |   5 +-
 drivers/gpu/drm/i915/i915_gem.c     |   2 +-
 drivers/gpu/drm/i915/intel_lrc.c    | 107 ++++++++++++++----------------------
 3 files changed, 43 insertions(+), 71 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index be2a4a0fae13..5bc9789e1f1e 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2095,9 +2095,8 @@ static int i915_dump_lrc(struct seq_file *m, void *unused)
 		return ret;
 
 	list_for_each_entry(ctx, &dev_priv->context_list, link)
-		if (ctx != dev_priv->kernel_context)
-			for_each_engine(engine, dev_priv)
-				i915_dump_lrc_obj(m, ctx, engine);
+		for_each_engine(engine, dev_priv)
+			i915_dump_lrc_obj(m, ctx, engine);
 
 	mutex_unlock(&dev->struct_mutex);
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3b294dcf0add..40e9a0e0f298 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2711,7 +2711,7 @@ void i915_gem_request_free(struct kref *req_ref)
 		i915_gem_request_remove_from_client(req);
 
 	if (ctx) {
-		if (i915.enable_execlists && ctx != req->i915->kernel_context)
+		if (i915.enable_execlists)
 			intel_lr_context_unpin(ctx, req->engine);
 
 		i915_gem_context_unreference(ctx);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 2ed7363f76ea..838abd4b42a3 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -588,9 +588,7 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)
 	struct drm_i915_gem_request *cursor;
 	int num_elements = 0;
 
-	if (request->ctx != request->i915->kernel_context)
-		intel_lr_context_pin(request->ctx, engine);
-
+	intel_lr_context_pin(request->ctx, request->engine);
 	i915_gem_request_reference(request);
 
 	spin_lock_bh(&engine->execlist_lock);
@@ -691,10 +689,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 			return ret;
 	}
 
-	if (request->ctx != request->i915->kernel_context)
-		ret = intel_lr_context_pin(request->ctx, request->engine);
-
-	return ret;
+	return intel_lr_context_pin(request->ctx, request->engine);
 }
 
 static int logical_ring_wait_for_space(struct drm_i915_gem_request *req,
@@ -774,12 +769,8 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	if (engine->last_context != request->ctx) {
 		if (engine->last_context)
 			intel_lr_context_unpin(engine->last_context, engine);
-		if (request->ctx != request->i915->kernel_context) {
-			intel_lr_context_pin(request->ctx, engine);
-			engine->last_context = request->ctx;
-		} else {
-			engine->last_context = NULL;
-		}
+		intel_lr_context_pin(request->ctx, engine);
+		engine->last_context = request->ctx;
 	}
 
 	if (dev_priv->guc.execbuf_client)
@@ -1000,12 +991,7 @@ void intel_execlists_retire_requests(struct intel_engine_cs *engine)
 	spin_unlock_bh(&engine->execlist_lock);
 
 	list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
-		struct intel_context *ctx = req->ctx;
-		struct drm_i915_gem_object *ctx_obj =
-				ctx->engine[engine->id].state;
-
-		if (ctx_obj && (ctx != req->i915->kernel_context))
-			intel_lr_context_unpin(ctx, engine);
+		intel_lr_context_unpin(req->ctx, engine);
 
 		list_del(&req->execlist_link);
 		i915_gem_request_unreference(req);
@@ -1050,23 +1036,26 @@ int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)
 	return 0;
 }
 
-static int intel_lr_context_do_pin(struct intel_context *ctx,
-				   struct intel_engine_cs *engine)
+static int intel_lr_context_pin(struct intel_context *ctx,
+				struct intel_engine_cs *engine)
 {
-	struct drm_device *dev = engine->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_i915_gem_object *ctx_obj = ctx->engine[engine->id].state;
-	struct intel_ringbuffer *ringbuf = ctx->engine[engine->id].ringbuf;
+	struct drm_i915_private *dev_priv = ctx->i915;
+	struct drm_i915_gem_object *ctx_obj;
+	struct intel_ringbuffer *ringbuf;
 	void *vaddr;
 	u32 *lrc_reg_state;
 	int ret;
 
-	WARN_ON(!mutex_is_locked(&engine->dev->struct_mutex));
+	lockdep_assert_held(&ctx->i915->dev->struct_mutex);
 
+	if (ctx->engine[engine->id].pin_count++)
+		return 0;
+
+	ctx_obj = ctx->engine[engine->id].state;
 	ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN,
 			PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
 	if (ret)
-		return ret;
+		goto err;
 
 	vaddr = i915_gem_object_pin_map(ctx_obj);
 	if (IS_ERR(vaddr)) {
@@ -1076,10 +1065,12 @@ static int intel_lr_context_do_pin(struct intel_context *ctx,
 
 	lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
 
+	ringbuf = ctx->engine[engine->id].ringbuf;
 	ret = intel_pin_and_map_ringbuffer_obj(engine->dev, ringbuf);
 	if (ret)
 		goto unpin_map;
 
+	i915_gem_context_reference(ctx);
 	ctx->engine[engine->id].lrc_vma = i915_gem_obj_to_ggtt(ctx_obj);
 	intel_lr_context_descriptor_update(ctx, engine);
 	lrc_reg_state[CTX_RING_BUFFER_START+1] = ringbuf->vma->node.start;
@@ -1090,51 +1081,39 @@ static int intel_lr_context_do_pin(struct intel_context *ctx,
 	if (i915.enable_guc_submission)
 		I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
 
-	return ret;
+	return 0;
 
 unpin_map:
 	i915_gem_object_unpin_map(ctx_obj);
 unpin_ctx_obj:
 	i915_gem_object_ggtt_unpin(ctx_obj);
-
+err:
+	ctx->engine[engine->id].pin_count = 0;
 	return ret;
 }
 
-static int intel_lr_context_pin(struct intel_context *ctx,
-				struct intel_engine_cs *engine)
+void intel_lr_context_unpin(struct intel_context *ctx,
+			    struct intel_engine_cs *engine)
 {
-	int ret = 0;
+	struct drm_i915_gem_object *ctx_obj;
 
-	if (ctx->engine[engine->id].pin_count++ == 0) {
-		ret = intel_lr_context_do_pin(ctx, engine);
-		if (ret)
-			goto reset_pin_count;
+	lockdep_assert_held(&ctx->i915->dev->struct_mutex);
+	GEM_BUG_ON(ctx->engine[engine->id].pin_count == 0);
 
-		i915_gem_context_reference(ctx);
-	}
-	return ret;
+	if (--ctx->engine[engine->id].pin_count)
+		return;
 
-reset_pin_count:
-	ctx->engine[engine->id].pin_count = 0;
-	return ret;
-}
+	intel_unpin_ringbuffer_obj(ctx->engine[engine->id].ringbuf);
 
-void intel_lr_context_unpin(struct intel_context *ctx,
-			    struct intel_engine_cs *engine)
-{
-	struct drm_i915_gem_object *ctx_obj = ctx->engine[engine->id].state;
+	ctx_obj = ctx->engine[engine->id].state;
+	i915_gem_object_unpin_map(ctx_obj);
+	i915_gem_object_ggtt_unpin(ctx_obj);
 
-	WARN_ON(!mutex_is_locked(&ctx->i915->dev->struct_mutex));
-	if (--ctx->engine[engine->id].pin_count == 0) {
-		i915_gem_object_unpin_map(ctx_obj);
-		intel_unpin_ringbuffer_obj(ctx->engine[engine->id].ringbuf);
-		i915_gem_object_ggtt_unpin(ctx_obj);
-		ctx->engine[engine->id].lrc_vma = NULL;
-		ctx->engine[engine->id].lrc_desc = 0;
-		ctx->engine[engine->id].lrc_reg_state = NULL;
+	ctx->engine[engine->id].lrc_vma = NULL;
+	ctx->engine[engine->id].lrc_desc = 0;
+	ctx->engine[engine->id].lrc_reg_state = NULL;
 
-		i915_gem_context_unreference(ctx);
-	}
+	i915_gem_context_unreference(ctx);
 }
 
 static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
@@ -2032,6 +2011,7 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *engine)
 		i915_gem_object_unpin_map(engine->status_page.obj);
 		engine->status_page.obj = NULL;
 	}
+	intel_lr_context_unpin(dev_priv->kernel_context, engine);
 
 	engine->idle_lite_restore_wa = 0;
 	engine->disable_lite_restore_wa = false;
@@ -2135,11 +2115,10 @@ logical_ring_init(struct drm_device *dev, struct intel_engine_cs *engine)
 		goto error;
 
 	/* As this is the default context, always pin it */
-	ret = intel_lr_context_do_pin(dctx, engine);
+	ret = intel_lr_context_pin(dctx, engine);
 	if (ret) {
-		DRM_ERROR(
-			"Failed to pin and map ringbuffer %s: %d\n",
-			engine->name, ret);
+		DRM_ERROR("Failed to pin context for %s: %d\n",
+			  engine->name, ret);
 		goto error;
 	}
 
@@ -2560,12 +2539,6 @@ void intel_lr_context_free(struct intel_context *ctx)
 		if (!ctx_obj)
 			continue;
 
-		if (ctx == ctx->i915->kernel_context) {
-			intel_unpin_ringbuffer_obj(ringbuf);
-			i915_gem_object_ggtt_unpin(ctx_obj);
-			i915_gem_object_unpin_map(ctx_obj);
-		}
-
 		WARN_ON(ctx->engine[i].pin_count);
 		intel_ringbuffer_free(ringbuf);
 		drm_gem_object_unreference(&ctx_obj->base);
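
As an aside for readers new to this code: the scheme the patch converges on is a
plain per-engine pin refcount, where the first pin performs the real setup and
takes the context reference, intermediate pins only bump the count, and the last
unpin releases everything. The standalone C sketch below is illustrative only
(the struct and function names are invented for this example and are not part of
i915); it models the shape of that pattern, including the default context being
pinned once at engine init and unpinned once at teardown.

	/* Illustrative sketch only -- not i915 code.  Models a refcounted
	 * pin/unpin scheme: the first pin does the real setup, the last
	 * unpin tears it down, intermediate calls just adjust pin_count. */
	#include <assert.h>
	#include <stdio.h>

	struct example_ctx {
		int pin_count;	/* how many users currently hold a pin */
		int mapped;	/* stands in for the mapping state owned by the pin */
	};

	static int example_pin(struct example_ctx *ctx)
	{
		if (ctx->pin_count++)	/* already pinned: just bump the count */
			return 0;

		ctx->mapped = 1;	/* first pin: set up the backing state */
		return 0;
	}

	static void example_unpin(struct example_ctx *ctx)
	{
		assert(ctx->pin_count > 0);

		if (--ctx->pin_count)	/* still pinned by someone else */
			return;

		ctx->mapped = 0;	/* last unpin: release the backing state */
	}

	int main(void)
	{
		struct example_ctx ctx = { 0, 0 };

		example_pin(&ctx);	/* e.g. default context pinned at engine init */
		example_pin(&ctx);	/* e.g. pinned again on behalf of a request */
		example_unpin(&ctx);	/* request retired */
		example_unpin(&ctx);	/* engine teardown: state released here */

		printf("pin_count=%d mapped=%d\n", ctx.pin_count, ctx.mapped);
		return 0;
	}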