From patchwork Fri Jul 17 14:33:18 2015
X-Patchwork-Submitter: John Harrison
X-Patchwork-Id: 6816331
From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Date: Fri, 17 Jul 2015 15:33:18 +0100
Message-Id: <1437143628-6329-10-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
References: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ
Subject: [Intel-gfx] [RFC 09/39] drm/i915: Added scheduler hook into i915_gem_complete_requests_ring()
List-Id: Intel graphics driver community testing & development

From: John Harrison

The GPU scheduler can cause requests to complete out of order, for example
when one request pre-empts others that had already been submitted. This
means a simple seqno comparison is no longer necessarily valid. Instead,
the scheduler's own state must be checked to determine whether a request
has really completed.
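The out-of-order case above can be sketched with a standalone model (this
is illustrative only, not the driver code; `mock_request`, `request_complete`
and the `tracked`/`state` fields are stand-ins for the scheduler's
queue-entry state that the patch below actually consults):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical scheduler states, modelled on the I915_SQS_* idea. */
enum sqs_state { SQS_QUEUED, SQS_FLYING, SQS_COMPLETE };

struct mock_request {
	uint32_t seqno;
	bool tracked;         /* does the scheduler know about this request? */
	enum sqs_state state; /* scheduler's view; only valid when tracked */
};

/* Equivalent of i915_seqno_passed(): seqno comparison with wrap handling. */
static bool seqno_passed(uint32_t hw_seqno, uint32_t req_seqno)
{
	return (int32_t)(hw_seqno - req_seqno) >= 0;
}

/*
 * The decision the patch introduces: when the scheduler tracks the
 * request, trust its completion state; otherwise fall back to the
 * plain seqno test.
 */
static bool request_complete(const struct mock_request *req, uint32_t hw_seqno)
{
	if (req->tracked)
		return req->state == SQS_COMPLETE;
	return seqno_passed(hw_seqno, req->seqno);
}
```

Note how a pre-empted request can have a seqno the hardware has already
passed while the scheduler still reports it as in flight; only the
scheduler's state gives the right answer in that case.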
Change-Id: I149250a8f9382586514ca324aba1c53063b83e19
For: VIZ-1587
Signed-off-by: John Harrison
---
 drivers/gpu/drm/i915/i915_drv.h       |  2 ++
 drivers/gpu/drm/i915/i915_gem.c       | 13 +++++++++++--
 drivers/gpu/drm/i915/i915_scheduler.c | 31 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h |  2 ++
 4 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7d2a494..58f53ec 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2238,6 +2238,8 @@ struct drm_i915_gem_request {
 	/** process identifier submitting this request */
 	struct pid *pid;
 
+	struct i915_scheduler_queue_entry *scheduler_qe;
+
 	/**
 	 * The ELSP only accepts two elements at a time, so we queue
 	 * context/tail pairs on a given queue (ring->execlist_queue) until the
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 56405cd..e3c4032 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2772,6 +2772,7 @@ void i915_gem_request_notify(struct intel_engine_cs *ring)
 {
 	struct drm_i915_gem_request *req, *req_next;
 	unsigned long flags;
+	bool complete;
 	u32 seqno;
 	LIST_HEAD(free_list);
 
@@ -2785,8 +2786,13 @@ void i915_gem_request_notify(struct intel_engine_cs *ring)
 	spin_lock_irqsave(&ring->fence_lock, flags);
 	list_for_each_entry_safe(req, req_next, &ring->fence_signal_list, signal_list) {
 		if (!req->cancelled) {
-			if (!i915_seqno_passed(seqno, req->seqno))
-				continue;
+			if (i915_scheduler_is_request_tracked(req, &complete, NULL)) {
+				if (!complete)
+					continue;
+			} else {
+				if (!i915_seqno_passed(seqno, req->seqno))
+					continue;
+			}
 
 			fence_signal_locked(&req->fence);
 			trace_i915_gem_request_complete(req);
@@ -2811,6 +2817,9 @@ void i915_gem_request_notify(struct intel_engine_cs *ring)
 
 		i915_gem_request_unreference(req);
 	}
+
+	/* Necessary? Or does the fence_signal() call do an implicit wakeup? */
+	wake_up_all(&ring->irq_queue);
 }
 
 static void i915_fence_timeline_value_str(struct fence *fence, char *str, int size)
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 71d8df7..0d1cbe3 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -119,6 +119,9 @@ int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe)
 	node->stamp = stamp;
 	i915_gem_request_reference(node->params.request);
 
+	BUG_ON(node->params.request->scheduler_qe);
+	node->params.request->scheduler_qe = node;
+
 	/* Need to determine the number of incomplete entries in the list as
 	 * that will be the maximum size of the dependency list.
 	 *
@@ -363,6 +366,13 @@ static void i915_scheduler_seqno_complete(struct intel_engine_cs *ring, uint32_t
 		got_changes = true;
 	}
 
+	/*
+	 * Avoid issues with requests not being signalled because their
+	 * interrupt has already passed.
+	 */
+	if (got_changes)
+		i915_gem_request_notify(ring);
+
 	/* Should submit new work here if flight list is empty but the DRM
 	 * mutex lock might not be available if a '__wait_request()' call is
 	 * blocking the system.
	 */
@@ -504,6 +514,7 @@ int i915_scheduler_remove(struct intel_engine_cs *ring)
 		i915_gem_execbuff_release_batch_obj(node->params.batch_obj);
 
 		/* Free everything that is owned by the node: */
+		node->params.request->scheduler_qe = NULL;
 		i915_gem_request_unreference(node->params.request);
 		kfree(node->params.cliprects);
 		kfree(node->dep_list);
@@ -774,3 +785,23 @@ static int i915_scheduler_remove_dependent(struct i915_scheduler *scheduler,
 
 	return 0;
 }
+
+bool i915_scheduler_is_request_tracked(struct drm_i915_gem_request *req,
+				       bool *completed, bool *busy)
+{
+	struct drm_i915_private *dev_priv = req->ring->dev->dev_private;
+	struct i915_scheduler *scheduler = dev_priv->scheduler;
+
+	if (!scheduler)
+		return false;
+
+	if (req->scheduler_qe == NULL)
+		return false;
+
+	if (completed)
+		*completed = I915_SQS_IS_COMPLETE(req->scheduler_qe);
+	if (busy)
+		*busy = I915_SQS_IS_QUEUED(req->scheduler_qe);
+
+	return true;
+}
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index 0c5fc7f..6b2585a 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -87,5 +87,7 @@ enum {
 int i915_scheduler_init(struct drm_device *dev);
 int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe);
 int i915_scheduler_handle_irq(struct intel_engine_cs *ring);
+bool i915_scheduler_is_request_tracked(struct drm_i915_gem_request *req,
+				       bool *completed, bool *busy);
 
 #endif /* _I915_SCHEDULER_H_ */