From patchwork Fri Nov 20 00:52:05 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: yu.dai@intel.com
X-Patchwork-Id: 7662851
From: yu.dai@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Thu, 19 Nov 2015 16:52:05 -0800
Message-Id: <1447980725-3600-1-git-send-email-yu.dai@intel.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1446847421-28788-1-git-send-email-yu.dai@intel.com>
References: <1446847421-28788-1-git-send-email-yu.dai@intel.com>
Subject: [Intel-gfx] [PATCH v1] drm/i915: Defer LRC unpin and release
List-Id: Intel graphics driver community testing & development
From: Alex Dai

We can't free an LRC (or even unpin it) immediately, even when all of its
referenced requests have completed, because the HW still needs a short
period of time to save data to the LRC status page. It is only safe to
free an LRC once the HW has completed a request from a different LRC.

Introduce a new function, intel_lr_context_do_unpin, that does the actual
unpin work. When the driver receives an unpin call (from the retiring of
a request), the LRC pin & ref counts are increased to defer the unpin and
release. If the last LRC is a different one and its pincount reaches
zero, the driver does the actual unpin. One LRC is always kept around
until the ring itself gets cleaned up.

v1: Simplify the update of the last context by reusing the existing
ring->last_context. Note that this is safe because the lrc ring is
cleaned up earlier than i915_gem_context_fini().
Signed-off-by: Alex Dai
---
 drivers/gpu/drm/i915/intel_lrc.c | 59 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 54 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 06180dc..7a3c9cc 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1039,6 +1039,55 @@ unpin_ctx_obj:
 	return ret;
 }
 
+static void intel_lr_context_do_unpin(struct intel_engine_cs *ring,
+		struct intel_context *ctx)
+{
+	struct drm_device *dev = ring->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_object *ctx_obj;
+
+	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
+
+	ctx_obj = ctx->engine[ring->id].state;
+	if (!ctx_obj)
+		return;
+
+	i915_gem_object_ggtt_unpin(ctx_obj);
+	intel_unpin_ringbuffer_obj(ctx->engine[ring->id].ringbuf);
+
+	/* Invalidate GuC TLB. */
+	if (i915.enable_guc_submission)
+		I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
+}
+
+static void set_last_lrc(struct intel_engine_cs *ring,
+		struct intel_context *ctx)
+{
+	struct intel_context *last;
+
+	/* Unpin (and release) of the lrc is deferred. Hold the pin & ref
+	 * counts until we see the retire of the next request. */
+	if (ctx) {
+		ctx->engine[ring->id].pin_count++;
+		i915_gem_context_reference(ctx);
+	}
+
+	last = ring->last_context;
+	ring->last_context = ctx;
+
+	if (last == NULL)
+		return;
+
+	/* Unpin is on hold for the last context. Release its pincount first.
+	 * Then, if the HW has completed a request from another lrc, try to
+	 * do the actual unpin.
+	 */
+	last->engine[ring->id].pin_count--;
+	if (last != ctx && !last->engine[ring->id].pin_count)
+		intel_lr_context_do_unpin(ring, last);
+
+	/* Release the previously held context refcount */
+	i915_gem_context_unreference(last);
+}
+
 static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
 {
 	int ret = 0;
@@ -1062,14 +1111,11 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
 {
 	struct intel_engine_cs *ring = rq->ring;
 	struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
-	struct intel_ringbuffer *ringbuf = rq->ringbuf;
 
 	if (ctx_obj) {
 		WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
-		if (--rq->ctx->engine[ring->id].pin_count == 0) {
-			intel_unpin_ringbuffer_obj(ringbuf);
-			i915_gem_object_ggtt_unpin(ctx_obj);
-		}
+		--rq->ctx->engine[ring->id].pin_count;
+		set_last_lrc(ring, rq->ctx);
 	}
 }
 
@@ -1908,6 +1954,9 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
 	}
 
 	lrc_destroy_wa_ctx_obj(ring);
+
+	/* this will clean up the last lrc */
+	set_last_lrc(ring, NULL);
 }
 
 static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)