From patchwork Mon Nov 23 11:39:12 2015
X-Patchwork-Submitter: John Harrison
X-Patchwork-Id: 7680071
From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Date: Mon, 23 Nov 2015 11:39:12 +0000
Message-Id: <1448278774-31376-18-git-send-email-John.C.Harrison@Intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1448278774-31376-1-git-send-email-John.C.Harrison@Intel.com>
References: <1448278774-31376-1-git-send-email-John.C.Harrison@Intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg.
#1134945 - Pipers Way, Swindon SN3 1RJ
Subject: [Intel-gfx] [PATCH 17/39] drm/i915: Hook scheduler node clean up into retire requests

From: John Harrison

The scheduler keeps its own lock on various DRM objects in order to
guarantee safe access long after the original execbuff IOCTL has
completed. This is especially important when pre-emption is enabled, as
the batch buffer might need to be submitted to the hardware multiple
times. This patch hooks the clean up of these locks into the request
retire function. A request can only be retired after it has completed
on the hardware and is therefore no longer eligible for re-submission,
so there is no point in holding on to the locks beyond that time.
For: VIZ-1587
Signed-off-by: John Harrison
---
 drivers/gpu/drm/i915/i915_gem.c       |  3 +++
 drivers/gpu/drm/i915/i915_scheduler.c | 51 ++++++++++++++++++++++++-----------
 drivers/gpu/drm/i915/i915_scheduler.h |  1 +
 3 files changed, 39 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f625d88..451ae6d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1402,6 +1402,9 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 		fence_signal_locked(&request->fence);
 	}
 
+	if (request->scheduler_qe)
+		i915_gem_scheduler_clean_node(request->scheduler_qe);
+
 	i915_gem_request_unreference(request);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index cd69f53..5dc3497 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -383,6 +383,38 @@ void i915_scheduler_wakeup(struct drm_device *dev)
 	queue_work(dev_priv->wq, &dev_priv->mm.scheduler_work);
 }
 
+void i915_gem_scheduler_clean_node(struct i915_scheduler_queue_entry *node)
+{
+	uint32_t i;
+
+	if (WARN(!I915_SQS_IS_COMPLETE(node), "Incomplete node: %d!\n", node->status))
+		return;
+
+	if (node->params.batch_obj) {
+		/* The batch buffer must be unpinned before it is unreferenced
+		 * otherwise the unpin fails with a missing vma!?
+		 */
+		if (node->params.dispatch_flags & I915_DISPATCH_SECURE)
+			i915_gem_execbuff_release_batch_obj(node->params.batch_obj);
+
+		node->params.batch_obj = NULL;
+	}
+
+	/* Release the locked buffers: */
+	for (i = 0; i < node->num_objs; i++) {
+		drm_gem_object_unreference(
+				&node->saved_objects[i].obj->base);
+	}
+	kfree(node->saved_objects);
+	node->saved_objects = NULL;
+	node->num_objs = 0;
+
+	/* Context too: */
+	if (node->params.ctx) {
+		i915_gem_context_unreference(node->params.ctx);
+		node->params.ctx = NULL;
+	}
+}
+
 static int i915_scheduler_remove(struct intel_engine_cs *ring)
 {
 	struct drm_i915_private *dev_priv = ring->dev->dev_private;
@@ -392,7 +424,7 @@ static int i915_scheduler_remove(struct intel_engine_cs *ring)
 	int flying = 0, queued = 0;
 	int ret = 0;
 	bool do_submit;
-	uint32_t i, min_seqno;
+	uint32_t min_seqno;
 	struct list_head remove;
 
 	if (list_empty(&scheduler->node_queue[ring->id]))
@@ -491,21 +523,8 @@ static int i915_scheduler_remove(struct intel_engine_cs *ring)
 		node = list_first_entry(&remove, typeof(*node), link);
 		list_del(&node->link);
 
-		/* The batch buffer must be unpinned before it is unreferenced
-		 * otherwise the unpin fails with a missing vma!?
-		 */
-		if (node->params.dispatch_flags & I915_DISPATCH_SECURE)
-			i915_gem_execbuff_release_batch_obj(node->params.batch_obj);
-
-		/* Release the locked buffers: */
-		for (i = 0; i < node->num_objs; i++) {
-			drm_gem_object_unreference(
-					&node->saved_objects[i].obj->base);
-		}
-		kfree(node->saved_objects);
-
-		/* Context too: */
-		if (node->params.ctx)
-			i915_gem_context_unreference(node->params.ctx);
+		/* Free up all the DRM object references */
+		i915_gem_scheduler_clean_node(node);
 
 		/* And anything else owned by the node: */
 		node->params.request->scheduler_qe = NULL;

diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index da095f9..8469270 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -87,6 +87,7 @@ bool i915_scheduler_is_enabled(struct drm_device *dev);
 int i915_scheduler_init(struct drm_device *dev);
 int i915_scheduler_closefile(struct drm_device *dev, struct drm_file *file);
+void i915_gem_scheduler_clean_node(struct i915_scheduler_queue_entry *node);
 int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe);
 bool i915_scheduler_notify_request(struct drm_i915_gem_request *req);
 void i915_scheduler_wakeup(struct drm_device *dev);