From patchwork Sat Mar 28 10:43:46 2020
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11463479
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Sat, 28 Mar 2020 10:43:46 +0000
Message-Id: <20200328104346.28988-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200328091628.20381-1-chris@chris-wilson.co.uk>
References: <20200328091628.20381-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH] drm/i915/selftests: Exercise lite-restore on top of a semaphore

Exercise issuing a lite-restore (a continuation of the same active
context with a new request) while the HW is blocked on a semaphore.
We expect the HW to ACK immediately after the lite-restore from the
next failed semaphore poll.

Signed-off-by: Chris Wilson
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 181 +++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 6f06ba750a0a..ff3db405efe6 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -350,6 +350,186 @@ static int live_unlite_preempt(void *arg)
 	return live_unlite_restore(arg, I915_USER_PRIORITY(I915_PRIORITY_MAX));
 }
 
+static struct i915_request *
+create_lite_semaphore(struct intel_context *ce, void *slot)
+{
+	const u32 offset =
+		i915_ggtt_offset(ce->engine->status_page.vma) +
+		offset_in_page(slot);
+	struct i915_request *rq;
+	u32 *cs;
+	int err;
+
+	rq = intel_context_create_request(ce);
+	if (IS_ERR(rq))
+		return rq;
+
+	if (rq->engine->emit_init_breadcrumb) {
+		err = rq->engine->emit_init_breadcrumb(rq);
+		if (err)
+			goto err;
+	}
+
+	cs = intel_ring_begin(rq, 10);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		goto err;
+	}
+
+	*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+	*cs++ = offset;
+	*cs++ = 0;
+	*cs++ = 1;
+
+	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+	*cs++ = MI_ARB_CHECK;
+
+	*cs++ = MI_SEMAPHORE_WAIT |
+		MI_SEMAPHORE_GLOBAL_GTT |
+		MI_SEMAPHORE_POLL |
+		MI_SEMAPHORE_SAD_EQ_SDD;
+	*cs++ = 0;
+	*cs++ = offset;
+	*cs++ = 0;
+
+	intel_ring_advance(rq, cs);
+
+	err = 0;
+err:
+	i915_request_get(rq);
+	i915_request_add(rq);
+	if (err) {
+		i915_request_put(rq);
+		return ERR_PTR(err);
+	}
+
+	return rq;
+}
+
+static inline bool
+ring_is_paused(const struct intel_engine_cs *engine)
+{
+	return engine->status_page.addr[I915_GEM_HWS_PREEMPT];
+}
+
+static int live_lite_semaphore(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	int err = -ENOMEM;
+
+	/*
+	 * Exercise issuing a lite-restore (a continuation of the same
+	 * active context with a new request) while the HW is blocked
+	 * on a semaphore. We expect the HW to ACK immediately after the
+	 * lite-restore from the next failed semaphore poll.
+	 */
+
+	err = 0;
+	for_each_engine(engine, gt, id) {
+		struct intel_context *ce;
+		struct i915_request *rq;
+		struct igt_live_test t;
+		unsigned long saved;
+		u32 *slot;
+
+		if (!intel_engine_has_semaphores(engine))
+			continue;
+
+		if (!intel_engine_can_store_dword(engine))
+			continue;
+
+		if (igt_live_test_begin(&t, gt->i915, __func__, engine->name)) {
+			err = -EIO;
+			break;
+		}
+		engine_heartbeat_disable(engine, &saved);
+
+		slot = memset32(engine->status_page.addr + 1000, 0, 4);
+
+		ce = intel_context_create(engine);
+		if (IS_ERR(ce)) {
+			err = PTR_ERR(ce);
+			goto err;
+		}
+
+		rq = create_lite_semaphore(ce, slot);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto err_ce;
+		}
+
+		if (wait_for_submit(engine, rq, HZ / 2)) {
+			GEM_TRACE_ERR("%s: failed to submit request\n",
+				      engine->name);
+			err = -ETIME;
+			goto err_rq;
+		}
+
+		if (wait_for(READ_ONCE(*slot), 50)) {
+			GEM_TRACE_ERR("%s: failed to start semaphore\n",
+				      engine->name);
+			err = -ETIME;
+			goto err_rq;
+		}
+
+		GEM_BUG_ON(engine->execlists.pending[0]);
+
+		/* Switch from the inner semaphore to the preempt-to-busy one */
+		ring_set_paused(engine, 1);
+		WRITE_ONCE(*slot, 0);
+
+		if (i915_request_wait(rq, 0, HZ / 2) < 0) {
+			GEM_TRACE_ERR("%s: failed to complete request\n",
+				      engine->name);
+			err = -ETIME;
+			goto err_rq;
+		}
+
+		i915_request_put(rq);
+
+		rq = intel_context_create_request(ce);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto err_ce;
+		}
+
+		/*
+		 * The ring_is_paused() should only be cleared on the HW ACK
+		 * following the preemption request (see process_csb()). We
+		 * depend on the HW processing that ACK even if it is currently
+		 * inside a semaphore.
+		 */
+		GEM_BUG_ON(!ring_is_paused(engine));
+		GEM_BUG_ON(engine->execlists.pending[0]);
+		GEM_BUG_ON(execlists_active(&engine->execlists)->context != ce);
+
+		i915_request_get(rq);
+		i915_request_add(rq);
+
+		if (i915_request_wait(rq, 0, HZ / 2) < 0) {
+			GEM_TRACE_ERR("%s: failed to complete lite-restore\n",
+				      engine->name);
+			err = -ETIME;
+			goto err_rq;
+		}
+
+err_rq:
+		i915_request_put(rq);
+err_ce:
+		intel_context_put(ce);
+err:
+		engine_heartbeat_enable(engine, saved);
+		if (igt_live_test_end(&t))
+			err = -EIO;
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
 static int live_pin_rewind(void *arg)
 {
 	struct intel_gt *gt = arg;
@@ -3954,6 +4134,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_sanitycheck),
 		SUBTEST(live_unlite_switch),
 		SUBTEST(live_unlite_preempt),
+		SUBTEST(live_lite_semaphore),
 		SUBTEST(live_pin_rewind),
 		SUBTEST(live_hold_reset),
 		SUBTEST(live_error_interrupt),