From patchwork Tue Jun 19 10:49:18 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10474029
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Cc: igt-dev@lists.freedesktop.org
Date: Tue, 19 Jun 2018 11:49:18 +0100
Message-Id: <20180619104920.17528-4-chris@chris-wilson.co.uk>
In-Reply-To: <20180619104920.17528-1-chris@chris-wilson.co.uk>
References: <20180619104920.17528-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH i-g-t 4/6] igt/gem_exec_nop: Drip feed nops

Wait until the previous nop batch is running before submitting the
next. This prevents the kernel from batching up sequential requests
into a full ring, more strenuously exercising the "lite-restore"
execution path.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/gem_exec_nop.c | 146 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 142 insertions(+), 4 deletions(-)

diff --git a/tests/gem_exec_nop.c b/tests/gem_exec_nop.c
index 50f0a3aad..0523b1c02 100644
--- a/tests/gem_exec_nop.c
+++ b/tests/gem_exec_nop.c
@@ -104,6 +104,129 @@ static double nop_on_ring(int fd, uint32_t handle, unsigned ring_id,
 	return elapsed(&start, &now);
 }
 
+static void poll_ring(int fd, unsigned ring, const char *name, int timeout)
+{
+	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const uint32_t MI_ARB_CHK = 0x5 << 23;
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 obj;
+	struct drm_i915_gem_relocation_entry reloc[4], *r;
+	uint32_t *bbe[2], *state, *batch;
+	unsigned engines[16], nengine, flags;
+	struct timespec tv = {};
+	unsigned long cycles;
+	uint64_t elapsed;
+
+	flags = I915_EXEC_NO_RELOC;
+	if (gen == 4 || gen == 5)
+		flags |= I915_EXEC_SECURE;
+
+	nengine = 0;
+	if (ring == ALL_ENGINES) {
+		for_each_physical_engine(fd, ring) {
+			if (!gem_can_store_dword(fd, ring))
+				continue;
+
+			engines[nengine++] = ring;
+		}
+	} else {
+		gem_require_ring(fd, ring);
+		igt_require(gem_can_store_dword(fd, ring));
+		engines[nengine++] = ring;
+	}
+	igt_require(nengine);
+
+	memset(&obj, 0, sizeof(obj));
+	obj.handle = gem_create(fd, 4096);
+	obj.relocs_ptr = to_user_pointer(reloc);
+	obj.relocation_count = ARRAY_SIZE(reloc);
+
+	r = memset(reloc, 0, sizeof(reloc));
+	batch = gem_mmap__wc(fd, obj.handle, 0, 4096, PROT_WRITE);
+
+	for (unsigned int start_offset = 0;
+	     start_offset <= 128;
+	     start_offset += 128) {
+		uint32_t *b = batch + start_offset / sizeof(*batch);
+
+		r->target_handle = obj.handle;
+		r->offset = (b - batch + 1) * sizeof(uint32_t);
+		r->delta = 4092;
+		r->read_domains = I915_GEM_DOMAIN_RENDER;
+
+		*b = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
+		if (gen >= 8) {
+			*++b = r->delta;
+			*++b = 0;
+		} else if (gen >= 4) {
+			r->offset += sizeof(uint32_t);
+			*++b = 0;
+			*++b = r->delta;
+		} else {
+			*b -= 1;
+			*++b = r->delta;
+		}
+		*++b = start_offset != 0;
+		r++;
+
+		b = batch + (start_offset + 64) / sizeof(*batch);
+		bbe[start_offset != 0] = b;
+		*b++ = MI_ARB_CHK;
+
+		r->target_handle = obj.handle;
+		r->offset = (b - batch + 1) * sizeof(uint32_t);
+		r->read_domains = I915_GEM_DOMAIN_COMMAND;
+		r->delta = start_offset + 64;
+		if (gen >= 8) {
+			*b++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
+			*b++ = r->delta;
+			*b++ = 0;
+		} else if (gen >= 6) {
+			*b++ = MI_BATCH_BUFFER_START | 1 << 8;
+			*b++ = r->delta;
+		} else {
+			*b++ = MI_BATCH_BUFFER_START | 2 << 6;
+			if (gen < 4)
+				r->delta |= 1;
+			*b++ = r->delta;
+		}
+		r++;
+	}
+	igt_assert(r == reloc + ARRAY_SIZE(reloc));
+	state = batch + 1023;
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(&obj);
+	execbuf.buffer_count = 1;
+	execbuf.flags = engines[0];
+
+	cycles = 0;
+	do {
+		unsigned int idx = ++cycles & 1;
+
+		*bbe[idx] = MI_ARB_CHK;
+		execbuf.batch_start_offset =
+			(bbe[idx] - batch) * sizeof(*batch) - 64;
+
+		execbuf.flags = engines[cycles % nengine] | flags;
+		gem_execbuf(fd, &execbuf);
+
+		*bbe[!idx] = MI_BATCH_BUFFER_END;
+		__sync_synchronize();
+
+		while (READ_ONCE(*state) != idx)
+			;
+	} while ((elapsed = igt_nsec_elapsed(&tv)) >> 30 < timeout);
+	*bbe[cycles & 1] = MI_BATCH_BUFFER_END;
+	gem_sync(fd, obj.handle);
+
+	igt_info("%s completed %ld cycles: %.3f us\n",
+		 name, cycles, elapsed*1e-3/cycles);
+
+	munmap(batch, 4096);
+	gem_close(fd, obj.handle);
+}
+
 static void single(int fd, uint32_t handle,
 		   unsigned ring_id, const char *ring_name)
 {
@@ -675,10 +798,25 @@ igt_main
 		}
 	}
 
-	igt_subtest("headless") {
-		/* Requires master for changing display modes */
-		igt_device_set_master(device);
-		headless(device, handle);
+	igt_subtest_group {
+		igt_fixture {
+			igt_device_set_master(device);
+		}
+
+		for (e = intel_execution_engines; e->name; e++) {
+			/* Requires master for STORE_DWORD on gen4/5 */
+			igt_subtest_f("poll-%s", e->name)
+				poll_ring(device,
+					  e->exec_id | e->flags, e->name, 20);
+		}
+
+		igt_subtest("poll-sequential")
+			poll_ring(device, ALL_ENGINES, "Sequential", 20);
+
+		igt_subtest("headless") {
+			/* Requires master for changing display modes */
+			headless(device, handle);
+		}
 	}
 
 	igt_fixture {
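
The pacing that poll_ring() relies on is: arm one of the two small batches
so that it stores its own index to the last dword of the object and then
spins in an MI_ARB_CHK / MI_BATCH_BUFFER_START loop, submit it, flip the
previously running batch's spin point to MI_BATCH_BUFFER_END so it retires,
and busy-wait on that dword until the new batch reports it is executing
before submitting again. Below is a CPU-only sketch of that drip-feed
handshake, with a worker thread standing in for the engine; it is purely
illustrative, and the thread, variable names and iteration count are not
part of the patch or of the IGT API.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned submitted; /* stands in for the two ping-pong batches */
static _Atomic unsigned state;     /* stands in for the dword at offset 4092 */

/*
 * The "engine": as soon as it notices a new submission it advertises that
 * the batch is running by storing the batch's index, much as each batch in
 * poll_ring() begins with an MI_STORE_DWORD_IMM of its own index.
 */
static void *engine(void *arg)
{
	unsigned last = 0;

	(void)arg;
	for (;;) {
		unsigned idx = atomic_load(&submitted);

		if (idx == ~0u) /* the final MI_BATCH_BUFFER_END */
			break;

		if (idx != last) {
			atomic_store(&state, idx);
			last = idx;
		}
	}

	return NULL;
}

int main(void)
{
	pthread_t thread;
	unsigned long cycles = 0;

	pthread_create(&thread, NULL, engine, NULL);

	while (cycles < (1 << 20)) {
		/* alternate between two "batches"; 0 is reserved for idle */
		unsigned idx = (++cycles & 1) + 1;

		atomic_store(&submitted, idx);     /* gem_execbuf() */
		while (atomic_load(&state) != idx) /* READ_ONCE(*state) != idx */
			; /* drip feed: wait for it to start before the next */
	}

	atomic_store(&submitted, ~0u);
	pthread_join(thread, NULL);
	printf("completed %lu cycles\n", cycles);

	return 0;
}

On the real driver the same wait guarantees that every execbuf arrives while
the previous request is still on the engine, which is the lite-restore case
the new poll-* subtests are meant to hit.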