From patchwork Mon Apr 9 11:14:11 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10331119
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Date: Mon, 9 Apr 2018 12:14:11 +0100
Message-Id: <20180409111413.6352-17-chris@chris-wilson.co.uk>
In-Reply-To: <20180409111413.6352-1-chris@chris-wilson.co.uk>
References: <20180409111413.6352-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 16/18] drm/i915/preemption: Select timeout when scheduling

The choice of preemption timeout is determined by the context from which
we trigger the preemption, so allow the caller to specify the desired
timeout.

Effectively, the only other choice would be to use the shortest timeout
along the dependency chain. However, given that we would have already
triggered preemption for the dependency chain, we can assume that no
preemption along that chain is more important than the current request,
ergo we need only consider the current timeout. Realising this, we can
then pass control of the preemption timeout to the caller for greater
flexibility.
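For illustration, a minimal self-contained sketch of the calling
convention this establishes: a zero timeout means "reschedule and wait
for the next arbitration point", while a non-zero timeout forces
preemption within that many nanoseconds (mirroring the selftest's
10 * 1000 /* 10us */). Only the (rq, prio, timeout) semantics come from
the patch; fake_request and schedule_with_timeout are invented for this
example and are not driver code.

/*
 * Sketch only: models why the caller, not the dependency chain, picks
 * the preemption timeout. The real interface is the three-argument
 * engine->schedule(rq, prio, timeout) introduced by this patch.
 */
#include <stdio.h>

struct fake_request {
	int prio;
	unsigned int timeout_ns;	/* 0 == wait for an arbitration point */
};

/*
 * The caller passes the timeout appropriate to *its* context; boosted
 * dependencies need no separate timeout, since a preemption already
 * triggered for the chain cannot be more urgent than the request that
 * triggered it.
 */
static void schedule_with_timeout(struct fake_request *rq,
				  int prio, unsigned int timeout_ns)
{
	if (prio > rq->prio)
		rq->prio = prio;
	rq->timeout_ns = timeout_ns;
	printf("scheduled: prio=%d, timeout=%u ns\n",
	       rq->prio, rq->timeout_ns);
}

int main(void)
{
	struct fake_request rq = { .prio = 0, .timeout_ns = 0 };

	/* Bump priority but rely on the next arbitration point. */
	schedule_with_timeout(&rq, 1, 0);

	/* Bump again and force preemption within 10us (10 * 1000 ns). */
	schedule_with_timeout(&rq, 2, 10 * 1000);

	return 0;
}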
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c            |   2 +-
 drivers/gpu/drm/i915/i915_request.c        |   2 +-
 drivers/gpu/drm/i915/intel_lrc.c           |   5 +-
 drivers/gpu/drm/i915/intel_ringbuffer.h    |   6 +-
 drivers/gpu/drm/i915/selftests/intel_lrc.c | 106 ++++++++++++++++++++-
 5 files changed, 114 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index cec52bbd1b41..e8c1a077e223 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -576,7 +576,7 @@ static void __fence_set_priority(struct dma_fence *fence, int prio)
 
 	rcu_read_lock();
 	if (engine->schedule)
-		engine->schedule(rq, prio);
+		engine->schedule(rq, prio, 0);
 	rcu_read_unlock();
 }
 
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 629f3e860592..ddffffb829ef 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1062,7 +1062,7 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
 	 */
 	rcu_read_lock();
 	if (engine->schedule)
-		engine->schedule(request, request->ctx->priority);
+		engine->schedule(request, request->ctx->priority, 0);
 	rcu_read_unlock();
 
 	local_bh_disable();
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 4f1e985b3cdb..877340afb63d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1260,7 +1260,8 @@ pt_lock_engine(struct i915_priotree *pt, struct intel_engine_cs *locked)
 	return engine;
 }
 
-static void execlists_schedule(struct i915_request *request, int prio)
+static void execlists_schedule(struct i915_request *request,
+			       int prio, unsigned int timeout)
 {
 	struct intel_engine_cs *engine;
 	struct i915_dependency *dep, *p;
@@ -1356,7 +1357,7 @@ static void execlists_schedule(struct i915_request *request, int prio)
 
 		if (prio > engine->execlists.queue_priority &&
 		    i915_sw_fence_done(&pt_to_request(pt)->submit))
-			__submit_queue(engine, prio, 0);
+			__submit_queue(engine, prio, timeout);
 	}
 
 	spin_unlock_irq(&engine->timeline->lock);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 25147a877518..04d25d1d4c2f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -463,13 +463,15 @@ struct intel_engine_cs {
 	 */
 	void		(*submit_request)(struct i915_request *rq);
 
-	/* Call when the priority on a request has changed and it and its
+	/*
+	 * Call when the priority on a request has changed and it and its
 	 * dependencies may need rescheduling. Note the request itself may
 	 * not be ready to run!
 	 *
 	 * Called under the struct_mutex.
 	 */
-	void		(*schedule)(struct i915_request *request, int priority);
+	void		(*schedule)(struct i915_request *request,
+				    int priority, unsigned int timeout);
 
 	/*
 	 * Cancel all requests on the hardware, or queued for execution.
diff --git a/drivers/gpu/drm/i915/selftests/intel_lrc.c b/drivers/gpu/drm/i915/selftests/intel_lrc.c
index 72d2770e5d71..5e4d6d07fff5 100644
--- a/drivers/gpu/drm/i915/selftests/intel_lrc.c
+++ b/drivers/gpu/drm/i915/selftests/intel_lrc.c
@@ -458,7 +458,7 @@ static int live_late_preempt(void *arg)
 			goto err_wedged;
 		}
 
-		engine->schedule(rq, I915_PRIORITY_MAX);
+		engine->schedule(rq, I915_PRIORITY_MAX, 0);
 
 		if (!wait_for_spinner(&spin_hi, rq)) {
 			pr_err("High priority context failed to preempt the low priority context\n");
@@ -677,6 +677,109 @@ static int live_preempt_reset(void *arg)
 	return err;
 }
 
+static int live_late_preempt_timeout(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_gem_context *ctx_hi, *ctx_lo;
+	struct spinner spin_hi, spin_lo;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	int err = -ENOMEM;
+
+	if (!HAS_LOGICAL_RING_PREEMPTION(i915))
+		return 0;
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	if (spinner_init(&spin_hi, i915))
+		goto err_unlock;
+
+	if (spinner_init(&spin_lo, i915))
+		goto err_spin_hi;
+
+	ctx_hi = kernel_context(i915);
+	if (!ctx_hi)
+		goto err_spin_lo;
+
+	ctx_lo = kernel_context(i915);
+	if (!ctx_lo)
+		goto err_ctx_hi;
+
+	for_each_engine(engine, i915, id) {
+		struct i915_request *rq;
+
+		rq = spinner_create_request(&spin_lo, ctx_lo, engine, MI_NOOP);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto err_ctx_lo;
+		}
+
+		i915_request_add(rq);
+		if (!wait_for_spinner(&spin_lo, rq)) {
+			pr_err("First context failed to start\n");
+			goto err_wedged;
+		}
+
+		rq = spinner_create_request(&spin_hi, ctx_hi, engine, MI_NOOP);
+		if (IS_ERR(rq)) {
+			spinner_end(&spin_lo);
+			err = PTR_ERR(rq);
+			goto err_ctx_lo;
+		}
+
+		i915_request_add(rq);
+		if (wait_for_spinner(&spin_hi, rq)) {
+			pr_err("Second context overtook first?\n");
+			goto err_wedged;
+		}
+
+		GEM_TRACE("%s rescheduling (no timeout)\n", engine->name);
+		engine->schedule(rq, 1, 0);
+
+		if (wait_for_spinner(&spin_hi, rq)) {
+			pr_err("High priority context overtook first without an arbitration point?\n");
+			goto err_wedged;
+		}
+
+		GEM_TRACE("%s rescheduling (with timeout)\n", engine->name);
+		engine->schedule(rq, 2, 10 * 1000 /* 10us */);
+
+		if (!wait_for_spinner(&spin_hi, rq)) {
+			pr_err("High priority context failed to force itself in front of the low priority context\n");
+			GEM_TRACE_DUMP();
+			goto err_wedged;
+		}
+
+		spinner_end(&spin_hi);
+		spinner_end(&spin_lo);
+		if (flush_test(i915, I915_WAIT_LOCKED)) {
+			err = -EIO;
+			goto err_ctx_lo;
+		}
+	}
+
+	err = 0;
+err_ctx_lo:
+	kernel_context_close(ctx_lo);
+err_ctx_hi:
+	kernel_context_close(ctx_hi);
+err_spin_lo:
+	spinner_fini(&spin_lo);
+err_spin_hi:
+	spinner_fini(&spin_hi);
+err_unlock:
+	flush_test(i915, I915_WAIT_LOCKED);
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+
+err_wedged:
+	spinner_end(&spin_hi);
+	spinner_end(&spin_lo);
+	i915_gem_set_wedged(i915);
+	err = -EIO;
+	goto err_ctx_lo;
+}
+
 int intel_execlists_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
@@ -685,6 +788,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_late_preempt),
 		SUBTEST(live_preempt_timeout),
 		SUBTEST(live_preempt_reset),
+		SUBTEST(live_late_preempt_timeout),
 	};
 	return i915_subtests(tests, i915);
 }