From patchwork Thu Dec  7 22:48:22 2017
X-Patchwork-Submitter: Antonio Argenziano
X-Patchwork-Id: 10101229
From: Antonio Argenziano
To: intel-gfx@lists.freedesktop.org
Date: Thu,  7 Dec 2017 14:48:22 -0800
Message-Id: <20171207224823.30052-4-antonio.argenziano@intel.com>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171207224823.30052-1-antonio.argenziano@intel.com>
References: <20171207224823.30052-1-antonio.argenziano@intel.com>
Subject: [Intel-gfx] [PATCH i-g-t 4/5] igt_dummyload: Add preemptible parameter to recursive batch
List-Id: Intel graphics driver community testing & development

This patch adds a parameter that allows making the spinning batch
preemptible, by conditionally adding an arbitration point to the
spinning loop.

From RFC:
- Implicitly initialize struct members to zero.
  (Chris)

Cc: Chris Wilson
Signed-off-by: Antonio Argenziano
---
 lib/igt_dummyload.c       |  6 +++---
 lib/igt_dummyload.h       |  3 ++-
 lib/igt_gt.c              |  5 +++--
 tests/drv_missed_irq.c    |  2 +-
 tests/gem_busy.c          | 18 ++++++++++++------
 tests/gem_exec_fence.c    | 20 +++++++++++++-------
 tests/gem_exec_latency.c  |  3 ++-
 tests/gem_exec_nop.c      |  3 ++-
 tests/gem_exec_reloc.c    | 10 +++++++---
 tests/gem_exec_schedule.c | 12 ++++++++----
 tests/gem_exec_suspend.c  |  4 +++-
 tests/gem_shrink.c        |  7 +++++--
 tests/gem_spin_batch.c    |  3 ++-
 tests/gem_wait.c          |  3 ++-
 tests/kms_busy.c          | 12 ++++++++----
 tests/kms_cursor_legacy.c |  7 +++++--
 tests/perf_pmu.c          | 46 ++++++++++++++++++++++++++++------------------
 tests/pm_rps.c            |  5 ++++-
 18 files changed, 110 insertions(+), 59 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 0bb02e5b..483344cf 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -121,8 +121,8 @@ void emit_recursive_batch(igt_spin_t *spin,
 	spin->batch = batch;
 	spin->handle = obj[BATCH].handle;
 
-	/* Allow ourselves to be preempted */
-	*batch++ = MI_ARB_CHK;
+	if (opts.preemptible)
+		*batch++ = MI_ARB_CHK; /* Allow ourselves to be preempted */
 
 	/* Pad with a few nops so that we do not completely hog the system.
 	 *
@@ -169,7 +169,7 @@ void emit_recursive_batch(igt_spin_t *spin,
 		gem_execbuf(fd, &execbuf);
 	}
 
-	spin->spinning_offset = obj->offset;
+	spin->gtt_offset = obj[BATCH].offset;
 }
 
 igt_spin_t *
diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
index 2f3f2ebf..b8edefbf 100644
--- a/lib/igt_dummyload.h
+++ b/lib/igt_dummyload.h
@@ -35,13 +35,14 @@ typedef struct igt_spin {
 	timer_t timer;
 	struct igt_list link;
 	uint32_t *batch;
-	uint64_t spinning_offset;
+	uint64_t gtt_offset;
 } igt_spin_t;
 
 typedef struct igt_spin_opt {
 	uint32_t ctx;
 	unsigned engine;
 	uint32_t dep;
+	bool preemptible;
 } igt_spin_opt_t;
 
 void emit_recursive_batch(igt_spin_t *spin, int fd, igt_spin_opt_t opts);
diff --git a/lib/igt_gt.c b/lib/igt_gt.c
index a9a69ccd..614fd83b 100644
--- a/lib/igt_gt.c
+++ b/lib/igt_gt.c
@@ -295,10 +295,11 @@ igt_hang_t igt_hang_ctx(int fd, igt_hang_opt_t opts)
 
 	emit_recursive_batch(&spin, fd, (igt_spin_opt_t){
 					.ctx = opts.ctx,
-					.engine = opts.ring});
+					.engine = opts.ring,
+					.preemptible = false});
 
 	if (opts.offset)
-		*opts.offset = spin.spinning_offset;
+		*opts.offset = spin.gtt_offset;
 
 	return (igt_hang_t){ spin.handle, opts.ctx, ban, opts.flags };
 }
diff --git a/tests/drv_missed_irq.c b/tests/drv_missed_irq.c
index fb899dad..ac44dce4 100644
--- a/tests/drv_missed_irq.c
+++ b/tests/drv_missed_irq.c
@@ -34,7 +34,7 @@ IGT_TEST_DESCRIPTION("Inject missed interrupts and make sure they are caught");
 static void trigger_missed_interrupt(int fd, unsigned ring)
 {
 	igt_spin_t *spin = __igt_spin_batch_new(fd,
-				(igt_spin_opt_t){.engine = ring});
+				(igt_spin_opt_t){.engine = ring, .preemptible = true});
 
 	igt_fork(child, 1) {
 		/* We are now a low priority child on the *same* CPU as the
diff --git a/tests/gem_busy.c b/tests/gem_busy.c
index 39f01c67..0943dbff 100644
--- a/tests/gem_busy.c
+++ b/tests/gem_busy.c
@@ -115,7 +115,8 @@ static void semaphore(int fd, unsigned ring, uint32_t flags)
 	handle[BUSY] = gem_create(fd, 4096);
 	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
				.engine = ring,
-				.dep = handle[BUSY]});
+				.dep = handle[BUSY],
+				.preemptible = true});
 
 	/* Queue a batch after the busy, it should block and remain "busy" */
 	igt_assert(exec_noop(fd, handle, ring | flags, false));
@@ -462,7 +463,8 @@ static void close_race(int fd)
 
 	for (i = 0; i < nhandles; i++) {
 		spin[i] = igt_spin_batch_new(fd, (igt_spin_opt_t){
-					.engine = engines[rand() % nengine]});
+					.engine = engines[rand() % nengine],
+					.preemptible = true});
 		handles[i] = spin[i]->handle;
 	}
 
@@ -470,7 +472,8 @@
 	for (i = 0; i < nhandles; i++) {
 		igt_spin_batch_free(fd, spin[i]);
 		spin[i] = igt_spin_batch_new(fd, (igt_spin_opt_t){
-					.engine = engines[rand() % nengine]});
+					.engine = engines[rand() % nengine],
+					.preemptible = true});
 		handles[i] = spin[i]->handle;
 		__sync_synchronize();
 	}
@@ -512,8 +515,9 @@ static bool has_semaphores(int fd)
 
 static bool has_extended_busy_ioctl(int fd)
 {
-	igt_spin_t *spin = igt_spin_batch_new(fd,
-			(igt_spin_opt_t){.engine = I915_EXEC_RENDER});
+	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
+			.engine = I915_EXEC_RENDER,
+			.preemptible = true});
 	uint32_t read, write;
 
 	__gem_busy(fd, spin->handle, &read, &write);
@@ -524,7 +528,9 @@ static bool has_extended_busy_ioctl(int fd)
 
 static void basic(int fd, unsigned ring, unsigned flags)
 {
-	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){.engine = ring});
+	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
+			.engine = ring,
+			.preemptible = true});
 	struct timespec tv;
 	int timeout;
 	bool busy;
diff --git a/tests/gem_exec_fence.c b/tests/gem_exec_fence.c
index 003a3c78..d78c3384 100644
--- a/tests/gem_exec_fence.c
+++ b/tests/gem_exec_fence.c
@@ -440,7 +440,8 @@ static void test_parallel(int fd, unsigned int master)
 	 */
 	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
				.engine = master,
-				.dep = c.handle});
+				.dep = c.handle,
+				.preemptible = true});
 	resubmit(fd, spin->handle, master, 16);
 
 	/* Now queue the master request and its secondaries */
@@ -963,7 +964,8 @@ static void test_syncobj_unused_fence(int fd)
 	struct local_gem_exec_fence fence = {
 		.handle = syncobj_create(fd),
 	};
-	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){/* All 0s */});
+	igt_spin_t *spin = igt_spin_batch_new(fd,
+				(igt_spin_opt_t){.preemptible = true});
 
 	/* sanity check our syncobj_to_sync_file interface */
 	igt_assert_eq(__syncobj_to_sync_file(fd, 0), -ENOENT);
@@ -1055,7 +1057,8 @@ static void test_syncobj_signal(int fd)
 	struct local_gem_exec_fence fence = {
 		.handle = syncobj_create(fd),
 	};
-	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){/* All 0s */});
+	igt_spin_t *spin = igt_spin_batch_new(fd,
+				(igt_spin_opt_t){.preemptible = true});
 
 	/* Check that the syncobj is signaled only when our request/fence is */
 
@@ -1105,7 +1108,7 @@ static void test_syncobj_wait(int fd)
 
 	gem_quiescent_gpu(fd);
 
-	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){/* All 0s */});
+	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){.preemptible = true});
 
 	memset(&execbuf, 0, sizeof(execbuf));
 	execbuf.buffers_ptr = to_user_pointer(&obj);
@@ -1175,7 +1178,8 @@ static void test_syncobj_export(int fd)
 		.handle = syncobj_create(fd),
 	};
 	int export[2];
-	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){/* All 0s */});
+	igt_spin_t *spin = igt_spin_batch_new(fd,
+				(igt_spin_opt_t){.preemptible = true});
 
 	/* Check that if we export the syncobj prior to use it picks up
 	 * the later fence. This allows a syncobj to establish a channel
@@ -1233,7 +1237,8 @@ static void test_syncobj_repeat(int fd)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct local_gem_exec_fence *fence;
 	int export;
-	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){/* All 0s */});
+	igt_spin_t *spin = igt_spin_batch_new(fd,
+				(igt_spin_opt_t){.preemptible = true});
 
 	/* Check that we can wait on the same fence multiple times */
 	fence = calloc(nfences, sizeof(*fence));
@@ -1288,7 +1293,8 @@ static void test_syncobj_import(int fd)
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_execbuffer2 execbuf;
-	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){/* All 0s */});
+	igt_spin_t *spin = igt_spin_batch_new(fd,
+				(igt_spin_opt_t){.preemptible = true});
 	uint32_t sync = syncobj_create(fd);
 	int fence;
 
diff --git a/tests/gem_exec_latency.c b/tests/gem_exec_latency.c
index d0a07bb2..686ec50e 100644
--- a/tests/gem_exec_latency.c
+++ b/tests/gem_exec_latency.c
@@ -346,7 +346,8 @@ static void latency_from_ring(int fd,
 	if (flags & PREEMPT)
 		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
					.ctx = ctx[0],
-					.engine = ring});
+					.engine = ring,
+					.preemptible = true});
 
 	if (flags & CORK) {
 		plug(fd, &c);
diff --git a/tests/gem_exec_nop.c b/tests/gem_exec_nop.c
index edd589e4..d9606a67 100644
--- a/tests/gem_exec_nop.c
+++ b/tests/gem_exec_nop.c
@@ -622,7 +622,8 @@ static void preempt(int fd, uint32_t handle,
 	igt_spin_t *spin = __igt_spin_batch_new(fd, (igt_spin_opt_t){
					.ctx = ctx[0],
-					.engine = ring_id});
+					.engine = ring_id,
+					.preemptible = true});
 
 	for (int loop = 0; loop < 1024; loop++)
 		gem_execbuf(fd, &execbuf);
diff --git a/tests/gem_exec_reloc.c b/tests/gem_exec_reloc.c
index 1e424efc..d71d21c1 100644
--- a/tests/gem_exec_reloc.c
+++ b/tests/gem_exec_reloc.c
@@ -390,7 +390,8 @@ static void basic_reloc(int fd, unsigned before, unsigned after, unsigned flags)
 	if (flags & ACTIVE) {
 		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
				.engine = I915_EXEC_DEFAULT,
-				.dep = obj.handle});
+				.dep = obj.handle,
+				.preemptible = true});
 		if (!(flags & HANG))
 			igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/100);
 		igt_assert(gem_bo_busy(fd, obj.handle));
@@ -458,7 +459,8 @@ static void basic_reloc(int fd, unsigned before, unsigned after, unsigned flags)
 	if (flags & ACTIVE) {
 		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
				.engine = I915_EXEC_DEFAULT,
-				.dep = obj.handle});
+				.dep = obj.handle,
+				.preemptible = true});
 		if (!(flags & HANG))
 			igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/100);
 		igt_assert(gem_bo_busy(fd, obj.handle));
@@ -585,7 +587,9 @@ static void basic_range(int fd, unsigned flags)
 	execbuf.buffer_count = n + 1;
 
 	if (flags & ACTIVE) {
-		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){.dep = obj[n].handle});
+		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
+				.dep = obj[n].handle,
+				.preemptible = true});
 		if (!(flags & HANG))
 			igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/100);
 		igt_assert(gem_bo_busy(fd, obj[n].handle));
diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
index 33836403..4ba863f3 100644
--- a/tests/gem_exec_schedule.c
+++ b/tests/gem_exec_schedule.c
@@ -149,7 +149,8 @@ static void unplug_show_queue(int fd, struct cork *c, unsigned int engine)
 		uint32_t ctx = create_highest_priority(fd);
 		spin[n] = __igt_spin_batch_new(fd, (igt_spin_opt_t){
						.ctx = ctx,
-						.engine = engine});
+						.engine = engine,
+						.preemptible = true});
 		gem_context_destroy(fd, ctx);
 	}
 
@@ -380,7 +381,8 @@ static void preempt(int fd, unsigned ring, unsigned flags)
 		}
 		spin[n] = __igt_spin_batch_new(fd, (igt_spin_opt_t){
						.ctx = ctx[LO],
-						.engine = ring});
+						.engine = ring,
+						.preemptible = true});
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
 
 		store_dword(fd, ctx[HI], ring, result, 0, n + 1, 0, I915_GEM_DOMAIN_RENDER);
@@ -431,7 +433,8 @@ static void preempt_other(int fd, unsigned ring)
 	for_each_engine(fd, other) {
 		spin[n] = __igt_spin_batch_new(fd, (igt_spin_opt_t){
						.ctx = ctx[NOISE],
-						.engine = other});
+						.engine = other,
+						.preemptible = true});
 		store_dword(fd, ctx[LO], other, result,
			    (n + 1)*sizeof(uint32_t), n + 1,
			    0, I915_GEM_DOMAIN_RENDER);
@@ -486,7 +489,8 @@ static void preempt_self(int fd, unsigned ring)
 	for_each_engine(fd, other) {
 		spin[n] = __igt_spin_batch_new(fd, (igt_spin_opt_t){
						.ctx = ctx[NOISE],
-						.engine = other});
+						.engine = other,
+						.preemptible = true});
 		store_dword(fd, ctx[HI], other, result,
			    (n + 1)*sizeof(uint32_t), n + 1,
			    0, I915_GEM_DOMAIN_RENDER);
diff --git a/tests/gem_exec_suspend.c b/tests/gem_exec_suspend.c
index bea19665..e88df9d1 100644
--- a/tests/gem_exec_suspend.c
+++ b/tests/gem_exec_suspend.c
@@ -201,7 +201,9 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 	}
 
 	if (flags & HANG)
-		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){.engine = engine});
+		spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
+				.engine = engine,
+				.preemptible = true});
 
 	switch (mode(flags)) {
 	case NOSLEEP:
diff --git a/tests/gem_shrink.c b/tests/gem_shrink.c
index b1b35c6c..a9be52c1 100644
--- a/tests/gem_shrink.c
+++ b/tests/gem_shrink.c
@@ -311,10 +311,13 @@ static void reclaim(unsigned engine, int timeout)
 		} while (!*shared);
 	}
 
-	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){.engine = engine});
+	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
+			.engine = engine,
+			.preemptible = true});
 	igt_until_timeout(timeout) {
 		igt_spin_t *next = __igt_spin_batch_new(fd, (igt_spin_opt_t){
-					.engine = engine});
+					.engine = engine,
+					.preemptible = true});
 
 		igt_spin_batch_set_timeout(spin, timeout_100ms);
 		gem_sync(fd, spin->handle);
diff --git a/tests/gem_spin_batch.c b/tests/gem_spin_batch.c
index 9bd808da..4588e6d0 100644
--- a/tests/gem_spin_batch.c
+++ b/tests/gem_spin_batch.c
@@ -44,7 +44,8 @@ static void spin(int fd, unsigned int engine, unsigned int timeout_sec)
 	spin = igt_spin_batch_new(fd, (igt_spin_opt_t){.engine = engine});
 	while ((elapsed = igt_nsec_elapsed(&tv)) >> 30 < timeout_sec) {
 		igt_spin_t *next = __igt_spin_batch_new(fd,
					(igt_spin_opt_t){
-					.engine = engine});
+					.engine = engine,
+					.preemptible = true});
 
 		igt_spin_batch_set_timeout(spin, timeout_100ms - igt_nsec_elapsed(&itv));
diff --git a/tests/gem_wait.c b/tests/gem_wait.c
index e7818caf..f746de9c 100644
--- a/tests/gem_wait.c
+++ b/tests/gem_wait.c
@@ -112,7 +112,8 @@ static void basic(int fd, unsigned engine, unsigned flags)
 	struct cork cork = plug(fd, flags);
 	igt_spin_t *spin = igt_spin_batch_new(fd, (igt_spin_opt_t){
					.engine = engine,
-					.dep = cork.handle});
+					.dep = cork.handle,
+					.preemptible = true});
 	struct drm_i915_gem_wait wait = { flags & WRITE ? cork.handle : spin->handle };
diff --git a/tests/kms_busy.c b/tests/kms_busy.c
index bb1c69b7..fe79319e 100644
--- a/tests/kms_busy.c
+++ b/tests/kms_busy.c
@@ -91,8 +91,10 @@ static void flip_to_fb(igt_display_t *dpy, int pipe,
 	struct timespec tv = { 1, 0 };
 	struct drm_event_vblank ev;
 
-	igt_spin_t *t = igt_spin_batch_new(dpy->drm_fd,
-			(igt_spin_opt_t){.engine = ring, .dep = fb->gem_handle});
+	igt_spin_t *t = igt_spin_batch_new(dpy->drm_fd, (igt_spin_opt_t){
+			.engine = ring,
+			.dep = fb->gem_handle,
+			.preemptible = true});
 
 	if (modeset) {
 		/*
@@ -210,7 +212,8 @@ static void test_atomic_commit_hang(igt_display_t *dpy, igt_plane_t *primary,
 	igt_spin_t *t = igt_spin_batch_new(dpy->drm_fd, (igt_spin_opt_t){
					.engine = ring,
-					.dep = busy_fb->gem_handle});
+					.dep = busy_fb->gem_handle,
+					.preemptible = true});
 	struct pollfd pfd = { .fd = dpy->drm_fd, .events = POLLIN };
 	unsigned flags = 0;
 	struct drm_event_vblank ev;
@@ -299,7 +302,8 @@ static void test_pageflip_modeset_hang(igt_display_t *dpy,
 
 	t = igt_spin_batch_new(dpy->drm_fd, (igt_spin_opt_t){
				.engine = ring,
-				.dep = fb.gem_handle});
+				.dep = fb.gem_handle,
+				.preemptible = true});
 
 	do_or_die(drmModePageFlip(dpy->drm_fd, dpy->pipes[pipe].crtc_id, fb.fb_id,
				  DRM_MODE_PAGE_FLIP_EVENT, &fb));
diff --git a/tests/kms_cursor_legacy.c b/tests/kms_cursor_legacy.c
index 33853697..356bb082 100644
--- a/tests/kms_cursor_legacy.c
+++ b/tests/kms_cursor_legacy.c
@@ -533,7 +533,9 @@ static void basic_flip_cursor(igt_display_t *display,
 	spin = NULL;
 	if (flags & BASIC_BUSY)
 		spin = igt_spin_batch_new(display->drm_fd,
-				(igt_spin_opt_t){.dep = fb_info.gem_handle});
+				(igt_spin_opt_t){
					.dep = fb_info.gem_handle,
+					.preemptible = true});
 
 	/* Start with a synchronous query to align with the vblank */
 	vblank_start = get_vblank(display->drm_fd, pipe, DRM_VBLANK_NEXTONMISS);
@@ -1300,7 +1302,8 @@ static void flip_vs_cursor_busy_crc(igt_display_t *display, bool atomic)
 	static const int max_crcs = 8;
 
 	spin = igt_spin_batch_new(display->drm_fd, (igt_spin_opt_t){
-				.dep = fb_info[1].gem_handle});
+				.dep = fb_info[1].gem_handle,
+				.preemptible = true});
 
 	vblank_start = get_vblank(display->drm_fd, pipe, DRM_VBLANK_NEXTONMISS);
diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
index 19562497..ea7d6686 100644
--- a/tests/perf_pmu.c
+++ b/tests/perf_pmu.c
@@ -141,8 +141,9 @@ single(int gem_fd, const struct intel_execution_engine2 *e, bool busy)
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
 	if (busy) {
-		spin = igt_spin_batch_new(gem_fd,
-				(igt_spin_opt_t){.engine = e2ring(gem_fd, e)});
+		spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+				.engine = e2ring(gem_fd, e),
+				.preemptible = true});
 		igt_spin_batch_set_timeout(spin, batch_duration_ns);
 	} else {
 		usleep(batch_duration_ns / 1000);
@@ -204,8 +205,9 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 	igt_assert_eq(i, num_engines);
 
-	spin = igt_spin_batch_new(gem_fd,
-			(igt_spin_opt_t){.engine = e2ring(gem_fd, e)});
+	spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+			.engine = e2ring(gem_fd, e),
+			.preemptible = true});
 	igt_spin_batch_set_timeout(spin, batch_duration_ns);
 	gem_sync(gem_fd, spin->handle);
 
@@ -251,7 +253,9 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 			idle_idx = i;
 		} else {
 			spin[i] = igt_spin_batch_new(gem_fd,
-					(igt_spin_opt_t){.engine = e2ring(gem_fd, e_)});
+					(igt_spin_opt_t){
+						.engine = e2ring(gem_fd, e_),
+						.preemptible = true});
 			igt_spin_batch_set_timeout(spin[i], batch_duration_ns);
 		}
 
@@ -299,8 +303,9 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines)
 		fd[i] = open_group(I915_PMU_ENGINE_BUSY(e->class, e->instance),
				   fd[0]);
 
-		spin[i] = igt_spin_batch_new(gem_fd,
-				(igt_spin_opt_t){.engine = e2ring(gem_fd, e)});
+		spin[i] = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+				.engine = e2ring(gem_fd, e),
+				.preemptible = true});
 		igt_spin_batch_set_timeout(spin[i], batch_duration_ns);
 
 		i++;
@@ -331,8 +336,9 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, bool busy)
 	open_group(I915_PMU_ENGINE_WAIT(e->class, e->instance), fd);
 
 	if (busy) {
-		spin = igt_spin_batch_new(gem_fd,
-				(igt_spin_opt_t){.engine = e2ring(gem_fd, e)});
+		spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+				.engine = e2ring(gem_fd, e),
+				.preemptible = true});
 		igt_spin_batch_set_timeout(spin, batch_duration_ns);
 	} else {
 		usleep(batch_duration_ns / 1000);
@@ -651,8 +657,9 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	fd[1] = open_pmu(config);
 
-	spin = igt_spin_batch_new(gem_fd,
-			(igt_spin_opt_t){.engine = e2ring(gem_fd, e)});
+	spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+			.engine = e2ring(gem_fd, e),
+			.preemptible = true});
 	igt_spin_batch_set_timeout(spin, 2 * batch_duration_ns);
 
 	slept = measured_usleep(batch_duration_ns / 1000);
@@ -757,8 +764,9 @@ static void cpu_hotplug(int gem_fd)
 	fd = perf_i915_open(I915_PMU_ENGINE_BUSY(I915_ENGINE_CLASS_RENDER, 0));
 	igt_assert(fd >= 0);
 
-	spin = igt_spin_batch_new(gem_fd,
-			(igt_spin_opt_t){.engine = I915_EXEC_RENDER});
+	spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+			.engine = I915_EXEC_RENDER,
+			.preemptible = true});
 
 	igt_nsec_elapsed(&start);
 
@@ -871,7 +879,7 @@ test_interrupts(int gem_fd)
 	gem_quiescent_gpu(gem_fd);
 
 	fd = open_pmu(I915_PMU_INTERRUPTS);
-	spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){/* All 0s */});
+	spin = igt_spin_batch_new(gem_fd,
+			(igt_spin_opt_t){.preemptible = true});
 
 	obj.handle = gem_create(gem_fd, sz);
 	gem_write(gem_fd, obj.handle, sz - sizeof(bbe), &bbe, sizeof(bbe));
@@ -953,8 +961,9 @@ test_frequency(int gem_fd)
 	pmu_read_multi(fd, 2, start);
 
-	spin = igt_spin_batch_new(gem_fd,
-			(igt_spin_opt_t){.engine = I915_EXEC_RENDER});
+	spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+			.engine = I915_EXEC_RENDER,
+			.preemptible = true});
 	igt_spin_batch_set_timeout(spin, duration_ns);
 	gem_sync(gem_fd, spin->handle);
 
@@ -979,8 +988,9 @@ test_frequency(int gem_fd)
 	pmu_read_multi(fd, 2, start);
 
-	spin = igt_spin_batch_new(gem_fd,
-			(igt_spin_opt_t){.engine = I915_EXEC_RENDER});
+	spin = igt_spin_batch_new(gem_fd, (igt_spin_opt_t){
+			.engine = I915_EXEC_RENDER,
+			.preemptible = true});
 	igt_spin_batch_set_timeout(spin, duration_ns);
 	gem_sync(gem_fd, spin->handle);
 
diff --git a/tests/pm_rps.c b/tests/pm_rps.c
index d4d2f6f0..aa674627 100644
--- a/tests/pm_rps.c
+++ b/tests/pm_rps.c
@@ -588,7 +588,10 @@ static void boost_freq(int fd, int *boost_freqs)
 	engine = I915_EXEC_RENDER;
 	if (intel_gen(lh.devid) >= 6)
 		engine = I915_EXEC_BLT;
-	load = igt_spin_batch_new(fd, (igt_spin_opt_t){.engine = engine});
+	load = igt_spin_batch_new(fd, (igt_spin_opt_t){
+			.engine = engine,
+			.preemptible = true});
+
 	/* Waiting will grant us a boost to maximum */
 	gem_wait(fd, load->handle, &timeout);