From patchwork Thu Apr 11 12:28:31 2019
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 10895857
From: Tvrtko Ursulin
To: igt-dev@lists.freedesktop.org
Cc: Intel-gfx@lists.freedesktop.org
Date: Thu, 11 Apr 2019 13:28:31 +0100
Message-Id: <20190411122831.1787-1-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20190408161533.2421-6-andi.shyti@intel.com>
References: <20190408161533.2421-6-andi.shyti@intel.com>
Subject: [Intel-gfx] [RFT i-g-t 5/6] lib: igt_dummyload: use for_each_context_engine()

From: Andi Shyti

With the new getparam/setparam API, engines are mapped to the context.
Use for_each_context_engine() to loop through the context's engines.

Suggested-by: Tvrtko Ursulin
Signed-off-by: Andi Shyti
Reviewed-by: Tvrtko Ursulin
---

Just some debug to get more data from CI.
---
 lib/igt_dummyload.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 47f6b92b424b..e7f3f480dc26 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -39,6 +39,7 @@
 #include "ioctl_wrappers.h"
 #include "sw_sync.h"
 #include "igt_vgem.h"
+#include "i915/gem_engine_topology.h"
 #include "i915/gem_mman.h"
 
 /**
@@ -86,7 +87,7 @@ emit_recursive_batch(igt_spin_t *spin,
 	struct drm_i915_gem_relocation_entry relocs[2], *r;
 	struct drm_i915_gem_execbuffer2 *execbuf;
 	struct drm_i915_gem_exec_object2 *obj;
-	unsigned int engines[16];
+	unsigned int flags[GEM_MAX_ENGINES];
 	unsigned int nengine;
 	int fence_fd = -1;
 	uint32_t *batch, *batch_start;
@@ -94,17 +95,33 @@ emit_recursive_batch(igt_spin_t *spin,
 
 	nengine = 0;
 	if (opts->engine == ALL_ENGINES) {
-		unsigned int engine;
+		struct intel_execution_engine2 *engine;
 
-		for_each_physical_engine(fd, engine) {
+		for_each_context_engine(fd, opts->ctx, engine) {
 			if (opts->flags & IGT_SPIN_POLL_RUN &&
-			    !gem_can_store_dword(fd, engine))
+			    !gem_class_can_store_dword(fd, engine->class))
 				continue;
 
-			engines[nengine++] = engine;
+			igt_debug("%u=%llx (%u:%u)\n",
+				  nengine,
+				  engine->flags, engine->class, engine->instance);
+			flags[nengine++] = engine->flags;
 		}
 	} else {
-		engines[nengine++] = opts->engine;
+		struct intel_execution_engine2 *e;
+		int class;
+
+		if (!gem_ctx_get_engine(fd, opts->engine, opts->ctx, e)) {
+			class = e->class;
+		} else {
+			gem_require_ring(fd, opts->engine);
+			class = gem_eb_to_class(opts->engine);
+		}
+
+		if (opts->flags & IGT_SPIN_POLL_RUN)
+			igt_require(gem_class_can_store_dword(fd, class));
+
+		flags[nengine++] = opts->engine;
 	}
 	igt_require(nengine);
@@ -234,8 +251,9 @@ emit_recursive_batch(igt_spin_t *spin,
 
 	for (i = 0; i < nengine; i++) {
 		execbuf->flags &= ~ENGINE_MASK;
-		execbuf->flags |= engines[i];
+		execbuf->flags |= flags[i];
 
+		igt_debug("eb %u = %llx\n", i, flags[i]);
 		gem_execbuf_wr(fd, execbuf);
 
 		if (opts->flags & IGT_SPIN_FENCE_OUT) {
@@ -308,12 +326,6 @@ igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts)
 
 	igt_require_gem(fd);
 
-	if (opts->engine != ALL_ENGINES) {
-		gem_require_ring(fd, opts->engine);
-		if (opts->flags & IGT_SPIN_POLL_RUN)
-			igt_require(gem_can_store_dword(fd, opts->engine));
-	}
-
 	spin = spin_batch_create(fd, opts);
 
 	igt_assert(gem_bo_busy(fd, spin->handle));

From patchwork Thu Apr 11 12:26:54 2019
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 10895855
From: Tvrtko Ursulin
To: igt-dev@lists.freedesktop.org
Cc: Intel-gfx@lists.freedesktop.org
Date: Thu, 11 Apr 2019 13:26:54 +0100
Message-Id: <20190411122654.1680-1-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20190408161533.2421-6-andi.shyti@intel.com>
References: <20190408161533.2421-6-andi.shyti@intel.com>
Subject: [Intel-gfx] [RFT i-g-t 6/6] test: perf_pmu: use the gem_engine_topology library

From: Andi Shyti

Replace the legacy for_each_engine* defines with the ones implemented
in the gem_engine_topology library.

Where possible, use gem_class_can_store_dword(), which checks the
engine class instead of the execbuf flags.

Now that __for_each_engine_class_instance and
for_each_engine_class_instance are unused, remove them.
Suggested-by: Tvrtko Ursulin
Signed-off-by: Andi Shyti
Cc: Tvrtko Ursulin
Reviewed-by: Tvrtko Ursulin
---
 lib/igt_gt.h     |   7 ---
 tests/perf_pmu.c | 143 +++++++++++++++++++++++++++++------------------
 2 files changed, 88 insertions(+), 62 deletions(-)

diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index af4cc38a1ef7..c2ca07e03738 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -119,11 +119,4 @@ void gem_require_engine(int gem_fd,
 	igt_require(gem_has_engine(gem_fd, class, instance));
 }
 
-#define __for_each_engine_class_instance(e__) \
-	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++)
-
-#define for_each_engine_class_instance(fd__, e__) \
-	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++) \
-		for_if (gem_has_engine((fd__), (e__)->class, (e__)->instance))
-
 #endif /* IGT_GT_H */
diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
index 4f552bc2ae28..a889b552236d 100644
--- a/tests/perf_pmu.c
+++ b/tests/perf_pmu.c
@@ -72,7 +72,7 @@ static int open_group(uint64_t config, int group)
 }
 
 static void
-init(int gem_fd, const struct intel_execution_engine2 *e, uint8_t sample)
+init(int gem_fd, struct intel_execution_engine2 *e, uint8_t sample)
 {
 	int fd, err = 0;
 	bool exists;
@@ -158,11 +158,6 @@ static unsigned int measured_usleep(unsigned int usec)
 	return igt_nsec_elapsed(&ts);
 }
 
-static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
-{
-	return gem_class_instance_to_eb_flags(gem_fd, e->class, e->instance);
-}
-
 #define TEST_BUSY (1)
 #define FLAG_SYNC (2)
 #define TEST_TRAILING_IDLE (4)
@@ -170,14 +165,15 @@ static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
 #define FLAG_LONG (16)
 #define FLAG_HANG (32)
 
-static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
+				struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
 		.ctx = ctx,
-		.engine = flags,
+		.engine = e->flags,
 	};
 
-	if (gem_can_store_dword(fd, flags))
+	if (gem_class_can_store_dword(fd, e->class))
 		opts.flags |= IGT_SPIN_POLL_RUN;
 
 	return __igt_spin_batch_factory(fd, &opts);
@@ -209,20 +205,34 @@ static unsigned long __spin_wait(int fd, igt_spin_t *spin)
 	return igt_nsec_elapsed(&start);
 }
 
-static igt_spin_t * __spin_sync(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * __spin_sync(int fd, uint32_t ctx,
+				struct intel_execution_engine2 *e)
 {
-	igt_spin_t *spin = __spin_poll(fd, ctx, flags);
+	igt_spin_t *spin = __spin_poll(fd, ctx, e);
 
 	__spin_wait(fd, spin);
 
 	return spin;
 }
 
-static igt_spin_t * spin_sync(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * spin_sync(int fd, uint32_t ctx,
+			      struct intel_execution_engine2 *e)
 {
 	igt_require_gem(fd);
 
-	return __spin_sync(fd, ctx, flags);
+	return __spin_sync(fd, ctx, e);
+}
+
+static igt_spin_t * spin_sync_flags(int fd, uint32_t ctx, unsigned int flags)
+{
+	struct intel_execution_engine2 e = { };
+
+	e.class = gem_eb_to_class(flags);
+	e.instance = (flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK)) ==
+		     (I915_EXEC_BSD | I915_EXEC_BSD_RING2) ? 1 : 0;
+	e.flags = flags;
+
+	return spin_sync(fd, ctx, &e);
 }
 
 static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
@@ -257,7 +267,7 @@ static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
 }
 
 static void
-single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+single(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
 {
 	unsigned long slept;
 	igt_spin_t *spin;
@@ -267,7 +277,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+		spin = spin_sync(gem_fd, 0, e);
 	else
 		spin = NULL;
@@ -303,7 +313,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 }
 
 static void
-busy_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_start(int gem_fd, struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, ts[2];
@@ -316,7 +326,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	sleep(2);
 
-	spin = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = __spin_sync(gem_fd, 0, e);
 
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
@@ -338,7 +348,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
  * will depend on the CI systems running it a lot to detect issues.
  */
 static void
-busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_double_start(int gem_fd, struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, val2, ts[2];
@@ -347,6 +357,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	int fd;
 
 	ctx = gem_context_create(gem_fd);
+	intel_init_engine_list(gem_fd, ctx);
 
 	/*
 	 * Defeat the busy stats delayed disable, we need to guarantee we are
@@ -359,11 +370,11 @@
 	 * re-submission in execlists mode. Make sure busyness is correctly
 	 * reported with the engine busy, and after the engine went idle.
 	 */
-	spin[0] = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin[0] = __spin_sync(gem_fd, 0, e);
 	usleep(500e3);
 	spin[1] = __igt_spin_batch_new(gem_fd,
 				       .ctx = ctx,
-				       .engine = e2ring(gem_fd, e));
+				       .engine = e->flags);
 
 	/*
 	 * Open PMU as fast as possible after the second spin batch in attempt
@@ -421,10 +432,10 @@ static void log_busy(unsigned int num_engines, uint64_t *val)
 }
 
 static void
-busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
 	       const unsigned int num_engines, unsigned int flags)
 {
-	const struct intel_execution_engine2 *e_;
+	struct intel_execution_engine2 *e_;
 	uint64_t tval[2][num_engines];
 	unsigned int busy_idx = 0, i;
 	uint64_t val[num_engines];
@@ -434,8 +445,8 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 	i = 0;
 	fd[0] = -1;
-	for_each_engine_class_instance(gem_fd, e_) {
-		if (e == e_)
+	__for_each_physical_engine(gem_fd, e_) {
+		if (e->class == e_->class && e->instance == e_->instance)
 			busy_idx = i;
 
 		fd[i++] = open_group(I915_PMU_ENGINE_BUSY(e_->class,
@@ -445,7 +456,7 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	igt_assert_eq(i, num_engines);
 
-	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = spin_sync(gem_fd, 0, e);
 	pmu_read_multi(fd[0], num_engines, tval[0]);
 	slept = measured_usleep(batch_duration_ns / 1000);
 	if (flags & TEST_TRAILING_IDLE)
@@ -472,23 +483,23 @@
 
 static void
 __submit_spin_batch(int gem_fd, igt_spin_t *spin,
-		    const struct intel_execution_engine2 *e,
+		    struct intel_execution_engine2 *e,
 		    int offset)
 {
 	struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
 
 	eb.flags &= ~(0x3f | I915_EXEC_BSD_MASK);
-	eb.flags |= e2ring(gem_fd, e) | I915_EXEC_NO_RELOC;
+	eb.flags |= e->flags | I915_EXEC_NO_RELOC;
 	eb.batch_start_offset += offset;
 
 	gem_execbuf(gem_fd, &eb);
 }
 
 static void
-most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+most_busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
 		    const unsigned int num_engines, unsigned int flags)
 {
-	const struct intel_execution_engine2 *e_;
+	struct intel_execution_engine2 *e_;
 	uint64_t tval[2][num_engines];
 	uint64_t val[num_engines];
 	int fd[num_engines];
@@ -497,13 +508,13 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 	unsigned int idle_idx, i;
 
 	i = 0;
-	for_each_engine_class_instance(gem_fd, e_) {
-		if (e == e_)
+	__for_each_physical_engine(gem_fd, e_) {
+		if (e->class == e_->class && e->instance == e_->instance)
 			idle_idx = i;
 		else if (spin)
 			__submit_spin_batch(gem_fd, spin, e_, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e_));
+			spin = __spin_poll(gem_fd, 0, e_);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e_->class, e_->instance);
 	}
@@ -545,7 +556,7 @@ static void
 all_busy_check_all(int gem_fd, const unsigned int num_engines,
 		   unsigned int flags)
 {
-	const struct intel_execution_engine2 *e;
+	struct intel_execution_engine2 *e;
 	uint64_t tval[2][num_engines];
 	uint64_t val[num_engines];
 	int fd[num_engines];
@@ -554,11 +565,11 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
 	unsigned int i;
 
 	i = 0;
-	for_each_engine_class_instance(gem_fd, e) {
+	__for_each_physical_engine(gem_fd, e) {
 		if (spin)
 			__submit_spin_batch(gem_fd, spin, e, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e));
+			spin = __spin_poll(gem_fd, 0, e);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	}
@@ -592,7 +603,7 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
 }
 
 static void
-no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+no_sema(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
 {
 	igt_spin_t *spin;
 	uint64_t val[2][2];
@@ -602,7 +613,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	open_group(I915_PMU_ENGINE_WAIT(e->class, e->instance), fd);
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+		spin = spin_sync(gem_fd, 0, e);
 	else
 		spin = NULL;
@@ -631,7 +642,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 #define MI_SEMAPHORE_SAD_GTE_SDD (1<<12)
 
 static void
-sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
+sema_wait(int gem_fd, struct intel_execution_engine2 *e,
 	  unsigned int flags)
 {
 	struct drm_i915_gem_relocation_entry reloc[2] = {};
@@ -689,7 +700,7 @@ sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
 	eb.buffer_count = 2;
 	eb.buffers_ptr = to_user_pointer(obj);
-	eb.flags = e2ring(gem_fd, e);
+	eb.flags = e->flags;
 
 	/**
 	 * Start the semaphore wait PMU and after some known time let the above
@@ -792,7 +803,7 @@ static int wait_vblank(int fd, union drm_wait_vblank *vbl)
 }
 
 static void
-event_wait(int gem_fd, const struct intel_execution_engine2 *e)
+event_wait(int gem_fd, struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_exec_object2 obj = { };
 	struct drm_i915_gem_execbuffer2 eb = { };
@@ -845,7 +856,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
-	eb.flags = e2ring(gem_fd, e) | I915_EXEC_SECURE;
+	eb.flags = e->flags | I915_EXEC_SECURE;
 
 	for_each_pipe_with_valid_output(&data.display, p, output) {
 		struct igt_helper_process waiter = { };
@@ -917,7 +928,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 }
 
 static void
-multi_client(int gem_fd, const struct intel_execution_engine2 *e)
+multi_client(int gem_fd, struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	unsigned long slept[2];
@@ -936,7 +947,7 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	fd[1] = open_pmu(config);
 
-	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = spin_sync(gem_fd, 0, e);
 
 	val[0] = val[1] = __pmu_read_single(fd[0], &ts[0]);
 	slept[1] = measured_usleep(batch_duration_ns / 1000);
@@ -1039,6 +1050,7 @@ static void cpu_hotplug(int gem_fd)
 	igt_spin_t *spin[2];
 	uint64_t ts[2];
 	uint64_t val;
+	uint32_t ctx;
 	int link[2];
 	int fd, ret;
 	int cur = 0;
@@ -1046,14 +1058,18 @@ static void cpu_hotplug(int gem_fd)
 
 	igt_require(cpu0_hotplug_support());
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(I915_ENGINE_CLASS_RENDER, 0));
 
 	/*
 	 * Create two spinners so test can ensure shorter gaps in engine
 	 * busyness as it is terminating one and re-starting the other.
 	 */
-	spin[0] = igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
-	spin[1] = __igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
+	spin[0] = igt_spin_batch_new(gem_fd,
+				     .engine = I915_EXEC_RENDER, .ctx = ctx);
+	spin[1] = __igt_spin_batch_new(gem_fd,
+				       .engine = I915_EXEC_RENDER, .ctx = ctx);
 
 	val = __pmu_read_single(fd, &ts[0]);
@@ -1137,6 +1153,7 @@ static void cpu_hotplug(int gem_fd)
 		igt_spin_batch_free(gem_fd, spin[cur]);
 		spin[cur] = __igt_spin_batch_new(gem_fd,
+						 .ctx = ctx,
 						 .engine = I915_EXEC_RENDER);
 		cur ^= 1;
 	}
@@ -1150,6 +1167,7 @@ static void cpu_hotplug(int gem_fd)
 	igt_waitchildren();
 	close(fd);
 	close(link[0]);
+	gem_context_destroy(gem_fd, ctx);
 
 	/* Skip if child signals a problem with offlining a CPU. */
 	igt_skip_on(buf == 's');
@@ -1165,17 +1183,21 @@ test_interrupts(int gem_fd)
 	igt_spin_t *spin[target];
 	struct pollfd pfd;
 	uint64_t idle, busy;
+	uint32_t ctx;
 	int fence_fd;
 	int fd;
 
 	gem_quiescent_gpu(gem_fd);
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_INTERRUPTS);
 
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++) {
 		spin[i] = __igt_spin_batch_new(gem_fd,
 					       .engine = I915_EXEC_RENDER,
+					       .ctx = ctx,
 					       .flags = IGT_SPIN_FENCE_OUT);
 		if (i == 0) {
 			fence_fd = spin[i]->out_fence;
@@ -1217,6 +1239,7 @@ test_interrupts(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	gem_context_destroy(gem_fd, ctx);
 
 	igt_assert_lte(target, busy);
 }
@@ -1229,15 +1252,19 @@ test_interrupts_sync(int gem_fd)
 	igt_spin_t *spin[target];
 	struct pollfd pfd;
 	uint64_t idle, busy;
+	uint32_t ctx;
 	int fd;
 
 	gem_quiescent_gpu(gem_fd);
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_INTERRUPTS);
 
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++)
 		spin[i] = __igt_spin_batch_new(gem_fd,
+					       .ctx = ctx,
 					       .flags = IGT_SPIN_FENCE_OUT);
 
 	/* Wait for idle state. */
@@ -1262,6 +1289,7 @@ test_interrupts_sync(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	gem_context_destroy(gem_fd, ctx);
 
 	igt_assert_lte(target, busy);
 }
@@ -1274,6 +1302,9 @@ test_frequency(int gem_fd)
 	double min[2], max[2];
 	igt_spin_t *spin;
 	int fd, sysfs;
+	uint32_t ctx;
+
+	ctx = gem_context_create(gem_fd);
 
 	sysfs = igt_sysfs_open(gem_fd);
 	igt_require(sysfs >= 0);
@@ -1301,7 +1332,7 @@ test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_boost_freq_mhz") == min_freq);
 
 	gem_quiescent_gpu(gem_fd); /* Idle to be sure the change takes effect */
-	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
+	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
 
 	slept = pmu_read_multi(fd, 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1327,7 +1358,7 @@ test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz") == max_freq);
 
 	gem_quiescent_gpu(gem_fd);
-	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
+	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
 
 	slept = pmu_read_multi(fd, 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1348,6 +1379,8 @@ test_frequency(int gem_fd)
 		 min_freq, igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz"));
 	close(fd);
 
+	gem_context_destroy(gem_fd, ctx);
+
 	igt_info("Min frequency: requested %.1f, actual %.1f\n",
 		 min[0], min[1]);
 	igt_info("Max frequency: requested %.1f, actual %.1f\n",
@@ -1448,7 +1481,7 @@ test_rc6(int gem_fd, unsigned int flags)
 }
 
 static void
-test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
+test_enable_race(int gem_fd, struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	struct igt_helper_process engine_load = { };
@@ -1465,7 +1498,7 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
-	eb.flags = e2ring(gem_fd, e);
+	eb.flags = e->flags;
 
 	/*
 	 * This test is probabilistic so run in a few times to increase the
@@ -1520,7 +1553,7 @@ static void __rearm_spin_batch(igt_spin_t *spin)
 	__assert_within(x, ref, tolerance, tolerance)
 
 static void
-accuracy(int gem_fd, const struct intel_execution_engine2 *e,
+accuracy(int gem_fd, struct intel_execution_engine2 *e,
 	 unsigned long target_busy_pct,
 	 unsigned long target_iters)
 {
@@ -1570,7 +1603,7 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e,
 		igt_spin_t *spin;
 
 		/* Allocate our spin batch and idle it. */
-		spin = igt_spin_batch_new(gem_fd, .engine = e2ring(gem_fd, e));
+		spin = igt_spin_batch_new(gem_fd, .engine = e->flags);
 		igt_spin_batch_end(spin);
 		gem_sync(gem_fd, spin->handle);
@@ -1674,7 +1707,7 @@ igt_main
 		I915_PMU_LAST - __I915_PMU_OTHER(0) + 1;
 	unsigned int num_engines = 0;
 	int fd = -1;
-	const struct intel_execution_engine2 *e;
+	struct intel_execution_engine2 *e;
 	unsigned int i;
 
 	igt_fixture {
@@ -1683,7 +1716,7 @@ igt_main
 		igt_require_gem(fd);
 		igt_require(i915_type_id() > 0);
 
-		for_each_engine_class_instance(fd, e)
+		__for_each_physical_engine(fd, e)
 			num_engines++;
 	}
@@ -1693,7 +1726,7 @@ igt_main
 	igt_subtest("invalid-init")
 		invalid_init();
 
-	__for_each_engine_class_instance(e) {
+	__for_each_physical_engine(fd, e) {
 		const unsigned int pct[] = { 2, 50, 98 };
 
 		/**
@@ -1897,7 +1930,7 @@ igt_main
 		gem_quiescent_gpu(fd);
 	}
 
-	__for_each_engine_class_instance(e) {
+	__for_each_physical_engine(render_fd, e) {
 		igt_subtest_group {
 			igt_fixture {
 				gem_require_engine(render_fd,