From patchwork Fri Jul 17 14:33:25 2015
From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Date: Fri, 17 Jul 2015 15:33:25 +0100
Message-Id: <1437143628-6329-17-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
References: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
Subject: [Intel-gfx] [RFC 16/39] drm/i915: Added tracking/locking of batch buffer objects

From: John Harrison <John.C.Harrison@Intel.com>

The scheduler needs to track interdependencies between batch buffers. These
are calculated by analysing the object lists of the buffers and looking for
commonality. The scheduler also needs to keep those buffers locked long after
the initial IOCTL call has returned to user land.
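
The code that actually walks these object lists to find dependencies comes
later in the series; reduced to a sketch against the saved_objects[] and
num_objs fields this patch introduces (the helper name below is illustrative,
not from the patch), the idea is that two batches conflict when their object
lists intersect:

/*
 * Illustrative sketch only (not part of this patch): two queue entries
 * are interdependent if their saved object lists share a GEM object.
 * Assumes the saved_objects[]/num_objs fields introduced below; the
 * function name is hypothetical.
 */
static bool i915_scheduler_objs_overlap(struct i915_scheduler_queue_entry *a,
					struct i915_scheduler_queue_entry *b)
{
	int i, j;

	for (i = 0; i < a->num_objs; i++) {
		if (!a->saved_objects[i].obj)
			continue;

		for (j = 0; j < b->num_objs; j++) {
			/* Same GEM object on both lists => dependency */
			if (a->saved_objects[i].obj == b->saved_objects[j].obj)
				return true;
		}
	}

	return false;
}

A batch whose list overlaps that of an earlier, not-yet-completed batch must
not be reordered ahead of it, which is why the objects (and the context) have
to stay referenced until the scheduler is finished with the queue entry.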
Change-Id: I31e3677ecfc2c9b5a908bda6acc4850432d55f1e
For: VIZ-1587
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 48 ++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_scheduler.c      | 33 ++++++++++++++++++--
 2 files changed, 76 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 75d018d..61a5498 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1498,7 +1498,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	struct i915_execbuffer_params *params = &qe.params;
 	const u32 ctx_id = i915_execbuffer2_get_context_id(*args);
 	u32 dispatch_flags;
-	int ret;
+	int ret, i;
 	bool need_relocs;
 	int fd_fence_complete = -1;
 #ifdef CONFIG_SYNC
@@ -1636,6 +1636,14 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		goto pre_mutex_err;
 	}
 
+	qe.saved_objects = kzalloc(
+			sizeof(*qe.saved_objects) * args->buffer_count,
+			GFP_KERNEL);
+	if (!qe.saved_objects) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
 	/* Look up object handles */
 	ret = eb_lookup_vmas(eb, exec, args, vm, file);
 	if (ret)
@@ -1756,7 +1764,26 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	params->args_DR1 = args->DR1;
 	params->args_DR4 = args->DR4;
 	params->batch_obj = batch_obj;
-	params->ctx = ctx;
+
+	/*
+	 * Save away the list of objects used by this batch buffer for the
+	 * purpose of tracking inter-buffer dependencies.
+	 */
+	for (i = 0; i < args->buffer_count; i++) {
+		/*
+		 * NB: 'drm_gem_object_lookup()' increments the object's
+		 * reference count and so must be matched by a
+		 * 'drm_gem_object_unreference' call.
+		 */
+		qe.saved_objects[i].obj =
+			to_intel_bo(drm_gem_object_lookup(dev, file,
+							  exec[i].handle));
+	}
+	qe.num_objs = i;
+
+	/* Lock and save the context object as well. */
+	i915_gem_context_reference(ctx);
+	params->ctx = ctx;
 
 #ifdef CONFIG_SYNC
 	if (args->flags & I915_EXEC_CREATE_FENCE) {
@@ -1808,6 +1835,23 @@ err:
 	i915_gem_context_unreference(ctx);
 	eb_destroy(eb);
 
+	if (qe.saved_objects) {
+		/* Need to release the objects: */
+		for (i = 0; i < qe.num_objs; i++) {
+			if (!qe.saved_objects[i].obj)
+				continue;
+
+			drm_gem_object_unreference(
+					&qe.saved_objects[i].obj->base);
+		}
+
+		kfree(qe.saved_objects);
+
+		/* Context too */
+		if (params->ctx)
+			i915_gem_context_unreference(params->ctx);
+	}
+
 	/*
 	 * If the request was created but not successfully submitted then it
 	 * must be freed again. If it was submitted then it is being tracked
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index e145829..f5fa968 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -108,7 +108,23 @@ int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe)
 		if (ret)
 			return ret;
 
-		/* Free everything that is owned by the QE structure: */
+		/* Need to release the objects: */
+		for (i = 0; i < qe->num_objs; i++) {
+			if (!qe->saved_objects[i].obj)
+				continue;
+
+			drm_gem_object_unreference(&qe->saved_objects[i].obj->base);
+		}
+
+		kfree(qe->saved_objects);
+		qe->saved_objects = NULL;
+		qe->num_objs = 0;
+
+		/* Free the context object too: */
+		if (qe->params.ctx)
+			i915_gem_context_unreference(qe->params.ctx);
+
+		/* And anything else owned by the QE structure: */
 		kfree(qe->params.cliprects);
 		if (qe->params.dispatch_flags & I915_DISPATCH_SECURE)
 			i915_gem_execbuff_release_batch_obj(qe->params.batch_obj);
@@ -425,7 +441,7 @@ static int i915_scheduler_remove(struct intel_engine_cs *ring)
 	int flying = 0, queued = 0;
 	int ret = 0;
 	bool do_submit;
-	uint32_t min_seqno;
+	uint32_t i, min_seqno;
 	struct list_head remove;
 
 	if (list_empty(&scheduler->node_queue[ring->id]))
@@ -524,7 +540,18 @@
 		if (node->params.dispatch_flags & I915_DISPATCH_SECURE)
 			i915_gem_execbuff_release_batch_obj(node->params.batch_obj);
 
-		/* Free everything that is owned by the node: */
+		/* Release the locked buffers: */
+		for (i = 0; i < node->num_objs; i++) {
+			drm_gem_object_unreference(
+					&node->saved_objects[i].obj->base);
+		}
+		kfree(node->saved_objects);
+
+		/* Context too: */
+		if (node->params.ctx)
+			i915_gem_context_unreference(node->params.ctx);
+
+		/* And anything else owned by the node: */
 		node->params.request->scheduler_qe = NULL;
 		i915_gem_request_unreference(node->params.request);
 		kfree(node->params.cliprects);
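
The unreference loop now exists in three cleanup paths: the execbuffer error
path and the two scheduler paths above. As a sketch only (not code from the
patch), a shared helper could express once the release of everything the
queue entry holds, mirroring the drm_gem_object_lookup() and
i915_gem_context_reference() calls made at execbuffer time:

/*
 * Sketch only, not from the patch: common release of a queue entry's
 * locked objects and context. Caller must hold struct_mutex, which
 * drm_gem_object_unreference() requires.
 */
static void i915_scheduler_release_objects(struct i915_scheduler_queue_entry *qe)
{
	int i;

	for (i = 0; i < qe->num_objs; i++) {
		if (!qe->saved_objects[i].obj)
			continue;

		/* Matches the drm_gem_object_lookup() at execbuffer time */
		drm_gem_object_unreference(&qe->saved_objects[i].obj->base);
	}

	kfree(qe->saved_objects);
	qe->saved_objects = NULL;
	qe->num_objs = 0;

	if (qe->params.ctx) {
		/* Matches the i915_gem_context_reference() at execbuffer time */
		i915_gem_context_unreference(qe->params.ctx);
		qe->params.ctx = NULL;
	}
}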