From patchwork Wed Jan 30 09:54:57 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10788117
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Cc: igt-dev@lists.freedesktop.org
Date: Wed, 30 Jan 2019 09:54:57 +0000
Message-Id: <20190130095500.23596-5-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190130095500.23596-1-chris@chris-wilson.co.uk>
References: <20190130095500.23596-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH i-g-t 5/8] i915/gem_exec_schedule: Verify that using HW semaphores doesn't block

We may use HW semaphores to schedule nearly-ready work so that it is
already spinning on the GPU, waiting for completion on another engine.
However, we do not want that spinning task to block any real work
should it be scheduled.
Signed-off-by: Chris Wilson
---
 tests/i915/gem_exec_schedule.c | 87 ++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 0462ce84f..e947dde41 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -47,6 +47,10 @@
 
 #define MAX_CONTEXTS 1024
 
+#define LOCAL_I915_EXEC_BSD_SHIFT (13)
+#define LOCAL_I915_EXEC_BSD_MASK (3 << LOCAL_I915_EXEC_BSD_SHIFT)
+#define ENGINE_MASK (I915_EXEC_RING_MASK | LOCAL_I915_EXEC_BSD_MASK)
+
 IGT_TEST_DESCRIPTION("Check that we can control the order of execution");
 
 static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
@@ -305,6 +309,86 @@ static void smoketest(int fd, unsigned ring, unsigned timeout)
 	munmap(ptr, 4096);
 }
 
+static uint32_t __batch_create(int i915, uint32_t offset)
+{
+	const uint32_t bbe = MI_BATCH_BUFFER_END;
+	uint32_t handle;
+
+	handle = gem_create(i915, ALIGN(offset + 4, 4096));
+	gem_write(i915, handle, offset, &bbe, sizeof(bbe));
+
+	return handle;
+}
+
+static uint32_t batch_create(int i915)
+{
+	return __batch_create(i915, 0);
+}
+
+static void semaphore(int i915)
+{
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = batch_create(i915),
+	};
+	igt_spin_t *spin = NULL;
+	unsigned int engine;
+	uint32_t scratch;
+
+	igt_require(gem_scheduler_has_preemption(i915));
+
+	/*
+	 * Given the use of semaphores to govern parallel submission
+	 * of nearly-ready work to HW, we still want to run actually
+	 * ready work immediately. Without semaphores, the dependent
+	 * work wouldn't be submitted so our ready work will run.
+	 */
+
+	scratch = gem_create(i915, 4096);
+	for_each_physical_engine(i915, engine) {
+		if (!spin) {
+			spin = igt_spin_batch_new(i915,
+						  .dependency = scratch,
+						  .engine = engine);
+		} else {
+			typeof(spin->execbuf.flags) saved = spin->execbuf.flags;
+
+			spin->execbuf.flags &= ~ENGINE_MASK;
+			spin->execbuf.flags |= engine;
+
+			gem_execbuf(i915, &spin->execbuf);
+
+			spin->execbuf.flags = saved;
+		}
+	}
+	igt_require(spin);
+	gem_close(i915, scratch);
+
+	/*
+	 * On all dependent engines, the request may be executing (busywaiting
+	 * on a HW semaphore) but it should not prevent any real work from
+	 * taking precedence.
+	 */
+	scratch = gem_context_create(i915);
+	for_each_physical_engine(i915, engine) {
+		struct drm_i915_gem_execbuffer2 execbuf = {
+			.buffers_ptr = to_user_pointer(&obj),
+			.buffer_count = 1,
+			.flags = engine,
+			.rsvd1 = scratch,
+		};
+
+		if (engine == (spin->execbuf.flags & ENGINE_MASK))
+			continue;
+
+		gem_execbuf(i915, &execbuf);
+	}
+	gem_context_destroy(i915, scratch);
+	gem_sync(i915, obj.handle); /* to hang unless we can preempt */
+	gem_close(i915, obj.handle);
+
+	igt_spin_batch_free(i915, spin);
+}
+
 static void reorder(int fd, unsigned ring, unsigned flags)
 #define EQUAL 1
 {
@@ -1236,6 +1320,9 @@ igt_main
 			igt_require(gem_scheduler_has_ctx_priority(fd));
 	}
 
+	igt_subtest("semaphore")
+		semaphore(fd);
+
 	igt_subtest("smoketest-all")
 		smoketest(fd, ALL_ENGINES, 30);
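
For reference, the heart of the new subtest is the engine-hopping idiom in the
first loop: the same spinner execbuf is resubmitted on every other physical
engine by swapping only the engine selector bits. Below is a stand-alone sketch
of that pattern, not part of the patch; the helper name submit_spin_on_engine()
is invented purely for illustration, and it assumes the usual IGT headers.

/*
 * Sketch of the engine-hopping idiom used in semaphore(): reuse one
 * spinner's execbuf on another engine by replacing only the engine
 * selector bits, then restore the original flags. Helper name and
 * placement are illustrative only.
 */
#include "igt.h"

#define LOCAL_I915_EXEC_BSD_SHIFT (13)
#define LOCAL_I915_EXEC_BSD_MASK (3 << LOCAL_I915_EXEC_BSD_SHIFT)
#define ENGINE_MASK (I915_EXEC_RING_MASK | LOCAL_I915_EXEC_BSD_MASK)

static void submit_spin_on_engine(int i915, igt_spin_t *spin,
				  unsigned int engine)
{
	/* Remember the flags the spinner was originally submitted with. */
	typeof(spin->execbuf.flags) saved = spin->execbuf.flags;

	/* Replace only the engine selector; every other flag is kept. */
	spin->execbuf.flags &= ~ENGINE_MASK;
	spin->execbuf.flags |= engine;

	/* Resubmit the identical batch, now queued on the requested engine. */
	gem_execbuf(i915, &spin->execbuf);

	/* Restore the original flags for later users of the spinner. */
	spin->execbuf.flags = saved;
}

Only the first submission carries the scratch dependency, so the resubmissions
on the other engines queue up behind it (potentially busywaiting on a HW
semaphore); the second loop in semaphore() then checks that genuinely ready
work on a fresh context still runs on those engines.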