From patchwork Wed Apr 11 10:13:56 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10335283
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Wed, 11 Apr 2018 11:13:56 +0100
Message-Id: <20180411101356.27159-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.17.0
Subject: [Intel-gfx] [PATCH igt] igt/gem_exec_schedule: Exercise "deep" preemption
Cc: igt-dev@lists.freedesktop.org, Mika Kuoppala

In investigating the issue with having to force preemption within the
executing ELSP[], we want to trigger preemption between all elements of
that array. To that end, we issue a series of requests with different
priorities to fill the in-flight ELSP[] and then demand preemption into
the middle of that series.

One can think of even more complicated reordering requirements of
ELSP[], trying to switch between every possible combination of
permutations. Rather than check all 2 billion combinations, be content
with a few.

v2: Add a different pattern for queued requests. Not only do we need to
inject a request into the middle of a single context with a queue of
different priority contexts, but we also want a queue of different
contexts, as they have different patterns of ELSP[] behaviour.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Michał Winiarski
Reviewed-by: Michał Winiarski
---
 tests/gem_exec_schedule.c | 188 ++++++++++++++++++++++++++++++++++----
 1 file changed, 169 insertions(+), 19 deletions(-)

diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
index d2f040ab..6ff15b6e 100644
--- a/tests/gem_exec_schedule.c
+++ b/tests/gem_exec_schedule.c
@@ -373,13 +373,78 @@ static void preempt(int fd, unsigned ring, unsigned flags)
 	gem_close(fd, result);
 }
 
-static void preempt_other(int fd, unsigned ring)
+#define CHAIN 0x1
+#define CONTEXTS 0x2
+
+static igt_spin_t *__noise(int fd, uint32_t ctx, int prio, igt_spin_t *spin)
+{
+	unsigned other;
+
+	gem_context_set_priority(fd, ctx, prio);
+
+	for_each_physical_engine(fd, other) {
+		if (spin == NULL) {
+			spin = __igt_spin_batch_new(fd, ctx, other, 0);
+		} else {
+			struct drm_i915_gem_exec_object2 obj = {
+				.handle = spin->handle,
+			};
+			struct drm_i915_gem_execbuffer2 eb = {
+				.buffer_count = 1,
+				.buffers_ptr = to_user_pointer(&obj),
+				.rsvd1 = ctx,
+				.flags = other,
+			};
+			gem_execbuf(fd, &eb);
+		}
+	}
+
+	return spin;
+}
+
+static void __preempt_other(int fd,
+			    uint32_t *ctx,
+			    unsigned int target, unsigned int primary,
+			    unsigned flags)
 {
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t *ptr = gem_mmap__gtt(fd, result, 4096, PROT_READ);
-	igt_spin_t *spin[MAX_ENGINES];
-	unsigned int other;
-	unsigned int n, i;
+	unsigned int n, i, other;
+
+	n = 0;
+	store_dword(fd, ctx[LO], primary,
+		    result, (n + 1)*sizeof(uint32_t), n + 1,
+		    0, I915_GEM_DOMAIN_RENDER);
+	n++;
+
+	if (flags & CHAIN) {
+		for_each_physical_engine(fd, other) {
+			store_dword(fd, ctx[LO], other,
+				    result, (n + 1)*sizeof(uint32_t), n + 1,
+				    0, I915_GEM_DOMAIN_RENDER);
+			n++;
+		}
+	}
+
+	store_dword(fd, ctx[HI], target,
+		    result, (n + 1)*sizeof(uint32_t), n + 1,
+		    0, I915_GEM_DOMAIN_RENDER);
+
+	igt_debugfs_dump(fd, "i915_engine_info");
+	gem_set_domain(fd, result, I915_GEM_DOMAIN_GTT, 0);
+
+	n++;
+	for (i = 0; i <= n; i++)
+		igt_assert_eq_u32(ptr[i], i);
+
+	munmap(ptr, 4096);
+	gem_close(fd, result);
+}
+
+static void preempt_other(int fd, unsigned ring, unsigned int flags)
+{
+	unsigned int primary;
+	igt_spin_t *spin = NULL;
 	uint32_t ctx[3];
 
 	/* On each engine, insert
@@ -396,36 +461,97 @@ static void preempt_other(int fd, unsigned ring)
 	gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
 
 	ctx[NOISE] = gem_context_create(fd);
+	spin = __noise(fd, ctx[NOISE], 0, NULL);
 
 	ctx[HI] = gem_context_create(fd);
 	gem_context_set_priority(fd, ctx[HI], MAX_PRIO);
 
+	for_each_physical_engine(fd, primary) {
+		igt_debug("Primary engine: %s\n", e__->name);
+		__preempt_other(fd, ctx, ring, primary, flags);
+
+	}
+
+	igt_assert(gem_bo_busy(fd, spin->handle));
+	igt_spin_batch_free(fd, spin);
+
+	gem_context_destroy(fd, ctx[LO]);
+	gem_context_destroy(fd, ctx[NOISE]);
+	gem_context_destroy(fd, ctx[HI]);
+}
+
+static void __preempt_queue(int fd,
+			    unsigned target, unsigned primary,
+			    unsigned depth, unsigned flags)
+{
+	uint32_t result = gem_create(fd, 4096);
+	uint32_t *ptr = gem_mmap__gtt(fd, result, 4096, PROT_READ);
+	igt_spin_t *above = NULL, *below = NULL;
+	unsigned int other, n, i;
+	int prio = MAX_PRIO;
+	uint32_t ctx[3] = {
+		gem_context_create(fd),
+		gem_context_create(fd),
+		gem_context_create(fd),
+	};
+
+	for (n = 0; n < depth; n++) {
+		if (flags & CONTEXTS) {
+			gem_context_destroy(fd, ctx[NOISE]);
+			ctx[NOISE] = gem_context_create(fd);
+		}
+		above = __noise(fd, ctx[NOISE], prio--, above);
+	}
+
+	gem_context_set_priority(fd, ctx[HI], prio--);
+
+	for (; n < MAX_ELSP_QLEN; n++) {
+		if (flags & CONTEXTS) {
+			gem_context_destroy(fd, ctx[NOISE]);
+			ctx[NOISE] = gem_context_create(fd);
+		}
+		below = __noise(fd, ctx[NOISE], prio--, below);
+	}
+
+	gem_context_set_priority(fd, ctx[LO], prio--);
+
 	n = 0;
-	for_each_physical_engine(fd, other) {
-		igt_assert(n < ARRAY_SIZE(spin));
+	store_dword(fd, ctx[LO], primary,
+		    result, (n + 1)*sizeof(uint32_t), n + 1,
+		    0, I915_GEM_DOMAIN_RENDER);
+	n++;
-		spin[n] = __igt_spin_batch_new(fd, ctx[NOISE], other, 0);
-		store_dword(fd, ctx[LO], other,
-			    result, (n + 1)*sizeof(uint32_t), n + 1,
-			    0, I915_GEM_DOMAIN_RENDER);
-		n++;
+	if (flags & CHAIN) {
+		for_each_physical_engine(fd, other) {
+			store_dword(fd, ctx[LO], other,
+				    result, (n + 1)*sizeof(uint32_t), n + 1,
+				    0, I915_GEM_DOMAIN_RENDER);
+			n++;
+		}
 	}
-	store_dword(fd, ctx[HI], ring,
+
+	store_dword(fd, ctx[HI], target,
 		    result, (n + 1)*sizeof(uint32_t), n + 1,
 		    0, I915_GEM_DOMAIN_RENDER);
 
 	igt_debugfs_dump(fd, "i915_engine_info");
-	gem_set_domain(fd, result, I915_GEM_DOMAIN_GTT, 0);
 
-	for (i = 0; i < n; i++) {
-		igt_assert(gem_bo_busy(fd, spin[i]->handle));
-		igt_spin_batch_free(fd, spin[i]);
+	if (above) {
+		igt_assert(gem_bo_busy(fd, above->handle));
+		igt_spin_batch_free(fd, above);
 	}
 
+	gem_set_domain(fd, result, I915_GEM_DOMAIN_GTT, 0);
+
 	n++;
 	for (i = 0; i <= n; i++)
 		igt_assert_eq_u32(ptr[i], i);
 
+	if (below) {
+		igt_assert(gem_bo_busy(fd, below->handle));
+		igt_spin_batch_free(fd, below);
+	}
+
 	gem_context_destroy(fd, ctx[LO]);
 	gem_context_destroy(fd, ctx[NOISE]);
 	gem_context_destroy(fd, ctx[HI]);
@@ -434,6 +560,16 @@ static void preempt_other(int fd, unsigned ring)
 	gem_close(fd, result);
 }
 
+static void preempt_queue(int fd, unsigned ring, unsigned int flags)
+{
+	unsigned other;
+
+	for_each_physical_engine(fd, other) {
+		for (unsigned depth = 0; depth <= MAX_ELSP_QLEN; depth++)
+			__preempt_queue(fd, ring, other, depth, flags);
+	}
+}
+
 static void preempt_self(int fd, unsigned ring)
 {
 	uint32_t result = gem_create(fd, 4096);
@@ -981,12 +1117,26 @@ igt_main
 		igt_subtest_f("preempt-contexts-%s", e->name)
 			preempt(fd, e->exec_id | e->flags, NEW_CTX);
 
-		igt_subtest_f("preempt-other-%s", e->name)
-			preempt_other(fd, e->exec_id | e->flags);
-
 		igt_subtest_f("preempt-self-%s", e->name)
 			preempt_self(fd, e->exec_id | e->flags);
 
+		igt_subtest_f("preempt-other-%s", e->name)
+			preempt_other(fd, e->exec_id | e->flags, 0);
+
+		igt_subtest_f("preempt-other-chain-%s", e->name)
+			preempt_other(fd, e->exec_id | e->flags, CHAIN);
+
+		igt_subtest_f("preempt-queue-%s", e->name)
+			preempt_queue(fd, e->exec_id | e->flags, 0);
+
+		igt_subtest_f("preempt-queue-chain-%s", e->name)
+			preempt_queue(fd, e->exec_id | e->flags, CHAIN);
+
+		igt_subtest_f("preempt-queue-contexts-%s", e->name)
+			preempt_queue(fd, e->exec_id | e->flags, CONTEXTS);
+
+		igt_subtest_f("preempt-queue-contexts-chain-%s", e->name)
+			preempt_queue(fd, e->exec_id | e->flags, CONTEXTS | CHAIN);
+
 	igt_subtest_group {
 		igt_hang_t hang;