From patchwork Fri Jul 17 14:33:39 2015
X-Patchwork-Submitter: John Harrison
X-Patchwork-Id: 6816601
From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Date: Fri, 17 Jul 2015 15:33:39 +0100
Message-Id: <1437143628-6329-31-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
References: <1437143628-6329-1-git-send-email-John.C.Harrison@Intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ
Subject: [Intel-gfx] [RFC 30/39] drm/i915: Added scheduler queue throttling by DRM file handle

From: John Harrison <John.C.Harrison@Intel.com>

The scheduler decouples the submission of batch buffers to the driver
from their subsequent submission to the hardware. This means that an
application which continuously submits buffers as fast as it can could
potentially flood the driver. To prevent this, the driver now tracks how
many buffers are in progress per DRM file handle (queued in software or
executing in hardware) and limits this to a given (tunable) number. If
that limit is exceeded, the execbuffer call returns EAGAIN, which keeps
the scheduler's queue from growing arbitrarily large.
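For illustration only (not part of this patch): a userspace submitter could
treat the new EAGAIN as a back-pressure signal and simply retry. The helper
name and the 1 ms back-off below are hypothetical; only the ioctl and the
errno values are taken from the patch and the existing i915 uAPI.

/*
 * Hypothetical userspace sketch: retry an execbuffer submission when the
 * per-file scheduler queue is full and the ioctl fails with EAGAIN.
 */
#include <errno.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>	/* kernel uAPI: DRM_IOCTL_I915_GEM_EXECBUFFER2 */

static int submit_with_backoff(int drm_fd, struct drm_i915_gem_execbuffer2 *execbuf)
{
	for (;;) {
		if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, execbuf) == 0)
			return 0;

		if (errno == EINTR)
			continue;		/* interrupted, just retry */

		if (errno == EAGAIN) {
			/* Per-file queue is full: back off briefly, then retry. */
			usleep(1000);
			continue;
		}

		return -errno;			/* any other error is fatal */
	}
}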
Change-Id: I83258240aec7c810db08c006a3062d46aa91363f
For: VIZ-1587
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h            |  2 ++
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  8 +++++++
 drivers/gpu/drm/i915/i915_scheduler.c      | 34 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h      |  2 ++
 4 files changed, 46 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b568432..e230632 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -334,6 +334,8 @@ struct drm_i915_file_private {
 	} rps;
 
 	struct intel_engine_cs *bsd_ring;
+
+	u32 scheduler_queue_length;
 };
 
 enum intel_dpll_id {
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index f90a2c8..c2a69d8 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1935,6 +1935,10 @@ i915_gem_execbuffer(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}
 
+	/* Throttle batch requests per device file */
+	if (i915_scheduler_file_queue_is_full(file))
+		return -EAGAIN;
+
 	/* Copy in the exec list from userland */
 	exec_list = drm_malloc_ab(sizeof(*exec_list), args->buffer_count);
 	exec2_list = drm_malloc_ab(sizeof(*exec2_list), args->buffer_count);
@@ -2018,6 +2022,10 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}
 
+	/* Throttle batch requests per device file */
+	if (i915_scheduler_file_queue_is_full(file))
+		return -EAGAIN;
+
 	exec2_list = kmalloc(sizeof(*exec2_list)*args->buffer_count,
 			     GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
 	if (exec2_list == NULL)
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 408bedc..f0c99ad 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -40,6 +40,8 @@ static void i915_scheduler_priority_bump_clear(struct i915_scheduler *sch
 static int i915_scheduler_priority_bump(struct i915_scheduler *scheduler,
 					struct i915_scheduler_queue_entry *target,
 					uint32_t bump);
+static void i915_scheduler_file_queue_inc(struct drm_file *file);
+static void i915_scheduler_file_queue_dec(struct drm_file *file);
 
 bool i915_scheduler_is_enabled(struct drm_device *dev)
 {
@@ -75,6 +77,7 @@ int i915_scheduler_init(struct drm_device *dev)
 	scheduler->priority_level_max = ~0U;
 	scheduler->priority_level_preempt = 900;
 	scheduler->min_flying = 2;
+	scheduler->file_queue_max = 64;
 
 	dev_priv->scheduler = scheduler;
 
@@ -249,6 +252,8 @@ int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe)
 
 	list_add_tail(&node->link, &scheduler->node_queue[ring->id]);
 
+	i915_scheduler_file_queue_inc(node->params.file);
+
 	if (i915.scheduler_override & i915_so_submit_on_queue)
 		not_flying = true;
 	else
@@ -630,6 +635,12 @@ static int i915_scheduler_remove(struct intel_engine_cs *ring)
 		/* Strip the dependency info while the mutex is still locked */
 		i915_scheduler_remove_dependent(scheduler, node);
 
+		/* Likewise clean up the file descriptor before it might disappear. */
+		if (node->params.file) {
+			i915_scheduler_file_queue_dec(node->params.file);
+			node->params.file = NULL;
+		}
+
 		continue;
 	}
 
@@ -1330,3 +1341,26 @@ int i915_scheduler_closefile(struct drm_device *dev, struct drm_file *file)
 
 	return 0;
 }
+
+bool i915_scheduler_file_queue_is_full(struct drm_file *file)
+{
+	struct drm_i915_file_private *file_priv = file->driver_priv;
+	struct drm_i915_private *dev_priv = file_priv->dev_priv;
+	struct i915_scheduler *scheduler = dev_priv->scheduler;
+
+	return file_priv->scheduler_queue_length >= scheduler->file_queue_max;
+}
+
+static void i915_scheduler_file_queue_inc(struct drm_file *file)
+{
+	struct drm_i915_file_private *file_priv = file->driver_priv;
+
+	file_priv->scheduler_queue_length++;
+}
+
+static void i915_scheduler_file_queue_dec(struct drm_file *file)
+{
+	struct drm_i915_file_private *file_priv = file->driver_priv;
+
+	file_priv->scheduler_queue_length--;
+}
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index 3f94512..301c567 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -87,6 +87,7 @@ struct i915_scheduler {
 	uint32_t priority_level_max;
 	uint32_t priority_level_preempt;
 	uint32_t min_flying;
+	uint32_t file_queue_max;
 };
 
 /* Flag bits for i915_scheduler::flags */
@@ -120,5 +121,6 @@ int i915_scheduler_flush_request(struct drm_i915_gem_request *req,
 					bool is_locked);
 bool i915_scheduler_is_request_tracked(struct drm_i915_gem_request *req,
 					bool *completed, bool *busy);
+bool i915_scheduler_file_queue_is_full(struct drm_file *file);
 
 #endif /* _I915_SCHEDULER_H_ */