From patchwork Fri Jul 17 14:33:30 2015
From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Date: Fri, 17 Jul 2015 15:33:30 +0100
Message-Id: <1437143628-6329-22-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
References: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ
Subject: [Intel-gfx] [RFC 21/39] drm/i915: Added scheduler flush calls to ring throttle and idle functions

From: John Harrison <John.C.Harrison@Intel.com>

When requesting that all GPU work be completed, it is now necessary to get
the scheduler involved in order to flush out work that has been queued but
not yet submitted.

Change-Id: I95dcc2a2ee5c1a844748621c333994ddd6cf6a66
For: VIZ-1587
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c       | 17 ++++++++++++-
 drivers/gpu/drm/i915/i915_scheduler.c | 45 +++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h |  1 +
 3 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index dd9ebbe..6142e68 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3710,6 +3710,10 @@ int i915_gpu_idle(struct drm_device *dev)
 	/* Flush everything onto the inactive list.
 	 */
 	for_each_ring(ring, dev_priv, i) {
+		ret = i915_scheduler_flush(ring, true);
+		if (ret < 0)
+			return ret;
+
 		if (!i915.enable_execlists) {
 			struct drm_i915_gem_request *req;
 
@@ -4679,7 +4683,8 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	unsigned long recent_enough = jiffies - DRM_I915_THROTTLE_JIFFIES;
 	struct drm_i915_gem_request *request, *target = NULL;
 	unsigned reset_counter;
-	int ret;
+	int i, ret;
+	struct intel_engine_cs *ring;
 
 	ret = i915_gem_wait_for_error(&dev_priv->gpu_error);
 	if (ret)
@@ -4689,6 +4694,16 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	if (ret)
 		return ret;
 
+	for_each_ring(ring, dev_priv, i) {
+		/* Need a mechanism to flush out scheduler entries that were
+		 * submitted more than 'recent_enough' time ago as well! In the
+		 * meantime, just flush everything out to ensure that entries
+		 * can not sit around indefinitely. */
+		ret = i915_scheduler_flush(ring, false);
+		if (ret < 0)
+			return ret;
+	}
+
 	spin_lock(&file_priv->mm.lock);
 	list_for_each_entry(request, &file_priv->mm.request_list, client_list) {
 		if (time_after_eq(request->emitted_jiffies, recent_enough))
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 811cbe4..73c9ba6 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -653,6 +653,51 @@ int i915_scheduler_flush_request(struct drm_i915_gem_request *req,
 	return flush_count;
 }
 
+int i915_scheduler_flush(struct intel_engine_cs *ring, bool is_locked)
+{
+	struct i915_scheduler_queue_entry *node;
+	struct drm_i915_private *dev_priv;
+	struct i915_scheduler *scheduler;
+	unsigned long flags;
+	bool found;
+	int ret;
+	uint32_t count = 0;
+
+	if (!ring)
+		return -EINVAL;
+
+	dev_priv = ring->dev->dev_private;
+	scheduler = dev_priv->scheduler;
+
+	if (!scheduler)
+		return 0;
+
+	BUG_ON(is_locked && (scheduler->flags[ring->id] & i915_sf_submitting));
+
+	do {
+		found = false;
+		spin_lock_irqsave(&scheduler->lock, flags);
+		list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
+			if (!I915_SQS_IS_QUEUED(node))
+				continue;
+
+			found = true;
+			break;
+		}
+		spin_unlock_irqrestore(&scheduler->lock, flags);
+
+		if (found) {
+			ret = i915_scheduler_submit(ring, is_locked);
+			if (ret < 0)
+				return ret;
+
+			count += ret;
+		}
+	} while (found);
+
+	return count;
+}
+
 static void i915_scheduler_priority_bump_clear(struct i915_scheduler *scheduler)
 {
 	struct i915_scheduler_queue_entry *node;
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index fcf2640..5e094d5 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -92,6 +92,7 @@ void i915_gem_scheduler_clean_node(struct i915_scheduler_queue_entry *nod
 int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe);
 int i915_scheduler_handle_irq(struct intel_engine_cs *ring);
 void i915_gem_scheduler_work_handler(struct work_struct *work);
+int i915_scheduler_flush(struct intel_engine_cs *ring, bool is_locked);
 int i915_scheduler_flush_request(struct drm_i915_gem_request *req,
 				 bool is_locked);
 bool i915_scheduler_is_request_tracked(struct drm_i915_gem_request *req,
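
The heart of i915_scheduler_flush() above is a drain loop: scan the ring's
node queue for an entry still in the queued state while holding the
scheduler lock, drop the lock, submit, and rescan until a pass finds
nothing queued. The lock is dropped before submitting because submission
may sleep and may itself change the queue, which is also why each pass
restarts the scan from the top. Below is a minimal userspace model of that
pattern, not driver code: the names (node, submit_queued, flush) are
illustrative stand-ins, and a pthread mutex plays the role of the
scheduler spinlock.

/* Standalone model of the drain loop in i915_scheduler_flush(). */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

enum node_state { NODE_QUEUED, NODE_SUBMITTED };

struct node {
	enum node_state state;
};

#define NUM_NODES 4

static struct node queue[NUM_NODES] = {
	{ NODE_QUEUED }, { NODE_QUEUED }, { NODE_QUEUED }, { NODE_SUBMITTED },
};
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for i915_scheduler_submit(): push every queued node to the
 * "hardware" and return how many were submitted. */
static int submit_queued(void)
{
	int submitted = 0;

	pthread_mutex_lock(&queue_lock);
	for (int i = 0; i < NUM_NODES; i++) {
		if (queue[i].state == NODE_QUEUED) {
			queue[i].state = NODE_SUBMITTED;
			submitted++;
		}
	}
	pthread_mutex_unlock(&queue_lock);

	return submitted;
}

/* The drain loop: look for a queued node under the lock, drop the lock
 * before submitting, and repeat until a scan comes up empty. */
static int flush(void)
{
	bool found;
	int count = 0;

	do {
		found = false;

		pthread_mutex_lock(&queue_lock);
		for (int i = 0; i < NUM_NODES; i++) {
			if (queue[i].state == NODE_QUEUED) {
				found = true;
				break;
			}
		}
		pthread_mutex_unlock(&queue_lock);

		if (found)
			count += submit_queued();
	} while (found);

	return count;
}

int main(void)
{
	/* Prints "flushed 3 entries": three nodes start out queued. */
	printf("flushed %d entries\n", flush());
	return 0;
}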
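
The comment added in i915_gem_ring_throttle() flags that flushing
everything is a stopgap; ideally only entries older than 'recent_enough'
would be pushed out, leaving recent work in the scheduler where it can
still be reordered. The following is a hypothetical sketch of that
time-bounded selection, not part of the patch or the driver: jiffies and
time_before() are simplified stand-ins for the kernel versions, and
struct entry / flush_older_than() are made-up names.

/* Hypothetical sketch of a time-bounded flush keyed on 'recent_enough'. */
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's jiffies clock and time_before(),
 * which compares wrapping tick counts via signed subtraction. */
static unsigned long jiffies = 1000;
#define time_before(a, b) ((long)((a) - (b)) < 0)

struct entry {
	unsigned long queued_jiffies;	/* when the work was queued */
	bool submitted;
};

#define NUM_ENTRIES 4

/* Submit only entries queued before the cutoff; anything newer stays in
 * the scheduler so it can still be reordered. */
static int flush_older_than(struct entry *entries, int n,
			    unsigned long recent_enough)
{
	int count = 0;

	for (int i = 0; i < n; i++) {
		if (entries[i].submitted)
			continue;
		if (time_before(entries[i].queued_jiffies, recent_enough)) {
			entries[i].submitted = true;
			count++;
		}
	}

	return count;
}

int main(void)
{
	struct entry entries[NUM_ENTRIES] = {
		{ 100, false }, { 500, false }, { 900, false }, { 990, false },
	};
	/* Anything queued more than 20 ticks ago counts as old, so the
	 * first three entries are flushed and the last one is kept. */
	unsigned long recent_enough = jiffies - 20;

	printf("flushed %d old entries\n",
	       flush_older_than(entries, NUM_ENTRIES, recent_enough));
	return 0;
}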