
[RFT,i-g-t,6/6] test: perf_pmu: use the gem_engine_topology library

Message ID 20190411122654.1680-1-tvrtko.ursulin@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Tvrtko Ursulin April 11, 2019, 12:26 p.m. UTC
From: Andi Shyti <andi.shyti@intel.com>

Replace the legacy for_each_engine* defines with the ones
implemented in the gem_engine_topology library.

Where possible, use gem_class_can_store_dword(), which checks
the engine class instead of the execbuf flags.

Now that __for_each_engine_class_instance and
for_each_engine_class_instance are unused, remove them.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 lib/igt_gt.h     |   7 ---
 tests/perf_pmu.c | 143 +++++++++++++++++++++++++++++------------------
 2 files changed, 88 insertions(+), 62 deletions(-)

Comments

Chris Wilson April 11, 2019, 12:32 p.m. UTC | #1
Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
> From: Andi Shyti <andi.shyti@intel.com>
> 
> Replace the legacy for_each_engine* defines with the ones
> implemented in the gem_engine_topology library.
> 
> Where possible, use gem_class_can_store_dword(), which checks
> the engine class instead of the execbuf flags.
> 
> Now that __for_each_engine_class_instance and
> for_each_engine_class_instance are unused, remove them.
> 
> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Signed-off-by: Andi Shyti <andi.shyti@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

I see a lot of new gem_context_create(), but not gem_require_contexts().
-Chris
Tvrtko Ursulin April 11, 2019, 12:53 p.m. UTC | #2
On 11/04/2019 13:32, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
>> From: Andi Shyti <andi.shyti@intel.com>
>>
>> Replace the legacy for_each_engine* defines with the ones
>> implemented in the gem_engine_topology library.
>>
>> Where possible, use gem_class_can_store_dword(), which checks
>> the engine class instead of the execbuf flags.
>>
>> Now that __for_each_engine_class_instance and
>> for_each_engine_class_instance are unused, remove them.
>>
>> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Signed-off-by: Andi Shyti <andi.shyti@intel.com>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> I see a lot of new gem_context_create(), but not gem_require_contexts().

The problem is that __for_each_physical_engine modifies the default
context, so tests under that iterator can't keep using I915_EXEC_RENDER.

It was perhaps a knee-jerk reaction to make them create their own 
context as a workaround.

Should we have a helper after all to get eb flags for a given class:instance?

unsigned int
gem_get_ctx_eb_flags(int fd, uint32_t ctx, uint16_t class, uint16_t instance)
{
	if (ctx_has_engine_map(fd, ctx)) /* pseudocode */
		return find_class_instance_in_map(fd, ctx, class, instance)->flags;
	else
		return gem_class_instance_to_eb_flags(fd, class, instance);
}
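
(so a test could just do eb.flags = gem_get_ctx_eb_flags(fd, ctx,
e->class, e->instance) and not care whether the context has a map)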

?

Regards,

Tvrtko
Andi Shyti April 11, 2019, 1:01 p.m. UTC | #3
On Thu, Apr 11, 2019 at 01:32:03PM +0100, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
> > From: Andi Shyti <andi.shyti@intel.com>
> > 
> > Replace the legacy for_each_engine* defines with the ones
> > implemented in the gem_engine_topology library.
> > 
> > Where possible, use gem_class_can_store_dword(), which checks
> > the engine class instead of the execbuf flags.
> > 
> > Now that __for_each_engine_class_instance and
> > for_each_engine_class_instance are unused, remove them.
> > 
> > Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > Signed-off-by: Andi Shyti <andi.shyti@intel.com>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> I see a lot of new gem_context_create(), but not gem_require_contexts().

gem_require_contexts() should go on top of engine_topology.
I will add it there.

Thanks,
Andi

> -Chris
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Chris Wilson April 11, 2019, 1:40 p.m. UTC | #4
Quoting Andi Shyti (2019-04-11 14:01:01)
> On Thu, Apr 11, 2019 at 01:32:03PM +0100, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
> > > From: Andi Shyti <andi.shyti@intel.com>
> > > 
> > > Replace the legacy for_each_engine* defines with the ones
> > > implemented in the gem_engine_topology library.
> > > 
> > > Where possible, use gem_class_can_store_dword(), which checks
> > > the engine class instead of the execbuf flags.
> > > 
> > > Now that __for_each_engine_class_instance and
> > > for_each_engine_class_instance are unused, remove them.
> > > 
> > > Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > Signed-off-by: Andi Shyti <andi.shyti@intel.com>
> > > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > > Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > 
> > I see a lot of new gem_context_create(), but not gem_require_contexts().
> 
> gem_require_contexts() should go on top of engine_topology.

engine_topology shouldn't require contexts per se, as we should be able
to quite happily modify the engine map for Pineview. To what purpose,
we do not ask!

Just be careful you don't break legacy tests.
-Chris
Chris Wilson April 11, 2019, 1:50 p.m. UTC | #5
Quoting Tvrtko Ursulin (2019-04-11 13:53:24)
> 
> On 11/04/2019 13:32, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
> >> From: Andi Shyti <andi.shyti@intel.com>
> >>
> >> Replace the legacy for_each_engine* defines with the ones
> >> implemented in the gem_engine_topology library.
> >>
> >> Where possible, use gem_class_can_store_dword(), which checks
> >> the engine class instead of the execbuf flags.
> >>
> >> Now that __for_each_engine_class_instance and
> >> for_each_engine_class_instance are unused, remove them.
> >>
> >> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >> Signed-off-by: Andi Shyti <andi.shyti@intel.com>
> >> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> >> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > 
> > I see a lot of new gem_context_create(), but not gem_require_contexts().
> 
> The problem is that __for_each_physical_engine modifies the default
> context, so tests under that iterator can't keep using I915_EXEC_RENDER.
> 
> It was perhaps a knee-jerk reaction to make them create their own 
> context as a workaround.

From the looks of the tests affected, they all should support contexts. I
would be more concerned if the basic busyness checking tests grew a new
dependency.

Operating on the default context should be fine, though. It may require
an operation to reset it in case a previous subtest failed, which is a
downside.
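
The reset could be as simple as this (untested sketch; it assumes setting
I915_CONTEXT_PARAM_ENGINES with a zero size restores the default map):

static void reset_default_engine_map(int fd)
{
	struct drm_i915_gem_context_param p = {
		.ctx_id = 0, /* default context */
		.param = I915_CONTEXT_PARAM_ENGINES,
		.size = 0, /* zero size reverts to the default engines */
	};

	gem_context_set_param(fd, &p);
}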
 
> Should we have a helper after all to get eb flags for a given class:instance?

I think we should resist/delay as much as possible. In 3 months' time
we can sweep away any evidence of the old ways, and only the explicit
legacy ABI tests need concern themselves with BSD | BSD2 ;)
-Chris
Tvrtko Ursulin April 11, 2019, 2:37 p.m. UTC | #6
On 11/04/2019 14:50, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-04-11 13:53:24)
>>
>> On 11/04/2019 13:32, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
>>>> From: Andi Shyti <andi.shyti@intel.com>
>>>>
>>>> Replace the legacy for_each_engine* defines with the ones
>>>> implemented in the gem_engine_topology library.
>>>>
>>>> Where possible, use gem_class_can_store_dword(), which checks
>>>> the engine class instead of the execbuf flags.
>>>>
>>>> Now that __for_each_engine_class_instance and
>>>> for_each_engine_class_instance are unused, remove them.
>>>>
>>>> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>> Signed-off-by: Andi Shyti <andi.shyti@intel.com>
>>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>>>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> I see a lot of new gem_context_create(), but not gem_require_contexts().
>>
>> The problem is that __for_each_physical_engine modifies the default
>> context, so tests under that iterator can't keep using I915_EXEC_RENDER.
>>
>> It was perhaps a knee-jerk reaction to make them create their own
>> context as a workaround.
> 
>  From the looks of the tests affected, they all should support contexts. I
> would be more concerned if the basic busyness checking tests grew a new
> dependency.

The basic busyness ones have not, but the ones I fixed up by making
them create a context have. Which is fixable with the alternative below.

Failures are only with gem_wait/basic-*-all, so I suspect something's 
off in igt_dummyload.c. Will see what happens with the debugs added.

> Operating on the default context should be fine, though. It may require
> an operation to reset it in case a previous subtest failed, which is a
> downside.
>   
>> Should we have a helper after all to get eb flags for a given class:instance?
> 
> I think we should resist/delay as much as possible. In 3 months' time
> we can sweep away any evidence of the old ways, and only the explicit
> legacy ABI tests need concern themselves with BSD | BSD2 ;)

The perf_pmu tests to which I added context_create open the PMU for
rcs:0 and submit batches to I915_EXEC_RENDER.

Well, an alternative to the helper could be to change them to submit to
I915_EXEC_DEFAULT and rely on the context map always having rcs:0 at
index 0. Which could even be passable. If the promise is broken, the
tests will fail and let us know.
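
I.e. something like this (sketch, relying on that index promise):

	eb.flags &= ~(I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK);
	eb.flags |= I915_EXEC_DEFAULT; /* index 0 of the map == rcs:0 */
	gem_execbuf(gem_fd, &eb);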

Okay with this alternative hack?

Regards,

Tvrtko
Andi Shyti April 11, 2019, 2:55 p.m. UTC | #7
On Thu, Apr 11, 2019 at 02:40:45PM +0100, Chris Wilson wrote:
> Quoting Andi Shyti (2019-04-11 14:01:01)
> > On Thu, Apr 11, 2019 at 01:32:03PM +0100, Chris Wilson wrote:
> > > Quoting Tvrtko Ursulin (2019-04-11 13:26:54)
> > > > From: Andi Shyti <andi.shyti@intel.com>
> > > > 
> > > > Replace the legacy for_each_engine* defines with the ones
> > > > implemented in the gem_engine_topology library.
> > > > 
> > > > Where possible, use gem_class_can_store_dword(), which checks
> > > > the engine class instead of the execbuf flags.
> > > > 
> > > > Now that __for_each_engine_class_instance and
> > > > for_each_engine_class_instance are unused, remove them.
> > > > 
> > > > Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > > Signed-off-by: Andi Shyti <andi.shyti@intel.com>
> > > > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > > > Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > 
> > > I see a lot of new gem_context_create(), but not gem_require_contexts().
> > 
> > gem_require_contexts() should go on top of engine_topology.
> 
> engine_topology shouldn't require contexts per se, as we should be able
> to quite happily modify the engine map for Pineview. To what purpose,
> we do not ask!
> 
> Just be careful you don't break legacy tests.

I created a helper to avoid initializing the whole "thing".
Tvrtko, it indeed calls "init_engine_list" at some point.

The helper looks like this:

+int64_t gem_map_all_engines(int fd)
+{
+       DEFINE_CONTEXT_PARAM(engines, param, 0, GEM_MAX_ENGINES);
+       struct intel_engine_data engine_data = { };

/* here is where I would add gem_require_contexts() */

+       param.ctx_id = gem_context_create(fd);
+
+       if (gem_topology_get_param(fd, &param) > 0 && !param.size) {
+               query_engine_list(fd, &engine_data);
+               ctx_map_engines(fd, &engine_data, &param);
+       }
+
+       return param.ctx_id;
+}

and of course there is an unmap_engines that just deletes the
context.
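
Hypothetical usage from a legacy test would then be something like this
(sketch only; the unmap name is assumed here):

	uint32_t ctx = gem_map_all_engines(fd);
	igt_spin_t *spin = igt_spin_batch_new(fd, .ctx = ctx,
					      .engine = I915_EXEC_DEFAULT);

	/* ... exercise the engines ... */

	igt_spin_batch_free(fd, spin);
	gem_unmap_all_engines(fd, ctx);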

Maybe such a helper is not required in the new world, but it can
be useful for legacy tests.

Andi

Patch

diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index af4cc38a1ef7..c2ca07e03738 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -119,11 +119,4 @@  void gem_require_engine(int gem_fd,
 	igt_require(gem_has_engine(gem_fd, class, instance));
 }
 
-#define __for_each_engine_class_instance(e__) \
-	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++)
-
-#define for_each_engine_class_instance(fd__, e__) \
-	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++) \
-		for_if (gem_has_engine((fd__), (e__)->class, (e__)->instance))
-
 #endif /* IGT_GT_H */
diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
index 4f552bc2ae28..a889b552236d 100644
--- a/tests/perf_pmu.c
+++ b/tests/perf_pmu.c
@@ -72,7 +72,7 @@  static int open_group(uint64_t config, int group)
 }
 
 static void
-init(int gem_fd, const struct intel_execution_engine2 *e, uint8_t sample)
+init(int gem_fd, struct intel_execution_engine2 *e, uint8_t sample)
 {
 	int fd, err = 0;
 	bool exists;
@@ -158,11 +158,6 @@  static unsigned int measured_usleep(unsigned int usec)
 	return igt_nsec_elapsed(&ts);
 }
 
-static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
-{
-	return gem_class_instance_to_eb_flags(gem_fd, e->class, e->instance);
-}
-
 #define TEST_BUSY (1)
 #define FLAG_SYNC (2)
 #define TEST_TRAILING_IDLE (4)
@@ -170,14 +165,15 @@  static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
 #define FLAG_LONG (16)
 #define FLAG_HANG (32)
 
-static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
+				struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
 		.ctx = ctx,
-		.engine = flags,
+		.engine = e->flags,
 	};
 
-	if (gem_can_store_dword(fd, flags))
+	if (gem_class_can_store_dword(fd, e->class))
 		opts.flags |= IGT_SPIN_POLL_RUN;
 
 	return __igt_spin_batch_factory(fd, &opts);
@@ -209,20 +205,34 @@  static unsigned long __spin_wait(int fd, igt_spin_t *spin)
 	return igt_nsec_elapsed(&start);
 }
 
-static igt_spin_t * __spin_sync(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * __spin_sync(int fd, uint32_t ctx,
+				struct intel_execution_engine2 *e)
 {
-	igt_spin_t *spin = __spin_poll(fd, ctx, flags);
+	igt_spin_t *spin = __spin_poll(fd, ctx, e);
 
 	__spin_wait(fd, spin);
 
 	return spin;
 }
 
-static igt_spin_t * spin_sync(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * spin_sync(int fd, uint32_t ctx,
+			      struct intel_execution_engine2 *e)
 {
 	igt_require_gem(fd);
 
-	return __spin_sync(fd, ctx, flags);
+	return __spin_sync(fd, ctx, e);
+}
+
+static igt_spin_t * spin_sync_flags(int fd, uint32_t ctx, unsigned int flags)
+{
+	struct intel_execution_engine2 e = { };
+
+	e.class = gem_eb_to_class(flags);
+	e.instance = (flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK)) ==
+		     (I915_EXEC_BSD | I915_EXEC_BSD_RING2) ? 1 : 0;
+	e.flags = flags;
+
+	return spin_sync(fd, ctx, &e);
 }
 
 static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
@@ -257,7 +267,7 @@  static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
 }
 
 static void
-single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+single(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
 {
 	unsigned long slept;
 	igt_spin_t *spin;
@@ -267,7 +277,7 @@  single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+		spin = spin_sync(gem_fd, 0, e);
 	else
 		spin = NULL;
 
@@ -303,7 +313,7 @@  single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 }
 
 static void
-busy_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_start(int gem_fd, struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, ts[2];
@@ -316,7 +326,7 @@  busy_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	sleep(2);
 
-	spin = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = __spin_sync(gem_fd, 0, e);
 
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
@@ -338,7 +348,7 @@  busy_start(int gem_fd, const struct intel_execution_engine2 *e)
  * will depend on the CI systems running it a lot to detect issues.
  */
 static void
-busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_double_start(int gem_fd, struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, val2, ts[2];
@@ -347,6 +357,7 @@  busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	int fd;
 
 	ctx = gem_context_create(gem_fd);
+	intel_init_engine_list(gem_fd, ctx);
 
 	/*
 	 * Defeat the busy stats delayed disable, we need to guarantee we are
@@ -359,11 +370,11 @@  busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 * re-submission in execlists mode. Make sure busyness is correctly
 	 * reported with the engine busy, and after the engine went idle.
 	 */
-	spin[0] = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin[0] = __spin_sync(gem_fd, 0, e);
 	usleep(500e3);
 	spin[1] = __igt_spin_batch_new(gem_fd,
 				       .ctx = ctx,
-				       .engine = e2ring(gem_fd, e));
+				       .engine = e->flags);
 
 	/*
 	 * Open PMU as fast as possible after the second spin batch in attempt
@@ -421,10 +432,10 @@  static void log_busy(unsigned int num_engines, uint64_t *val)
 }
 
 static void
-busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
 	       const unsigned int num_engines, unsigned int flags)
 {
-	const struct intel_execution_engine2 *e_;
+	struct intel_execution_engine2 *e_;
 	uint64_t tval[2][num_engines];
 	unsigned int busy_idx = 0, i;
 	uint64_t val[num_engines];
@@ -434,8 +445,8 @@  busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	i = 0;
 	fd[0] = -1;
-	for_each_engine_class_instance(gem_fd, e_) {
-		if (e == e_)
+	__for_each_physical_engine(gem_fd, e_) {
+		if (e->class == e_->class && e->instance == e_->instance)
 			busy_idx = i;
 
 		fd[i++] = open_group(I915_PMU_ENGINE_BUSY(e_->class,
@@ -445,7 +456,7 @@  busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	igt_assert_eq(i, num_engines);
 
-	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = spin_sync(gem_fd, 0, e);
 	pmu_read_multi(fd[0], num_engines, tval[0]);
 	slept = measured_usleep(batch_duration_ns / 1000);
 	if (flags & TEST_TRAILING_IDLE)
@@ -472,23 +483,23 @@  busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 static void
 __submit_spin_batch(int gem_fd, igt_spin_t *spin,
-		    const struct intel_execution_engine2 *e,
+		    struct intel_execution_engine2 *e,
 		    int offset)
 {
 	struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
 
 	eb.flags &= ~(0x3f | I915_EXEC_BSD_MASK);
-	eb.flags |= e2ring(gem_fd, e) | I915_EXEC_NO_RELOC;
+	eb.flags |= e->flags | I915_EXEC_NO_RELOC;
 	eb.batch_start_offset += offset;
 
 	gem_execbuf(gem_fd, &eb);
 }
 
 static void
-most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+most_busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
 		    const unsigned int num_engines, unsigned int flags)
 {
-	const struct intel_execution_engine2 *e_;
+	struct intel_execution_engine2 *e_;
 	uint64_t tval[2][num_engines];
 	uint64_t val[num_engines];
 	int fd[num_engines];
@@ -497,13 +508,13 @@  most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 	unsigned int idle_idx, i;
 
 	i = 0;
-	for_each_engine_class_instance(gem_fd, e_) {
-		if (e == e_)
+	__for_each_physical_engine(gem_fd, e_) {
+		if (e->class == e_->class && e->instance == e_->instance)
 			idle_idx = i;
 		else if (spin)
 			__submit_spin_batch(gem_fd, spin, e_, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e_));
+			spin = __spin_poll(gem_fd, 0, e_);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e_->class, e_->instance);
 	}
@@ -545,7 +556,7 @@  static void
 all_busy_check_all(int gem_fd, const unsigned int num_engines,
 		   unsigned int flags)
 {
-	const struct intel_execution_engine2 *e;
+	struct intel_execution_engine2 *e;
 	uint64_t tval[2][num_engines];
 	uint64_t val[num_engines];
 	int fd[num_engines];
@@ -554,11 +565,11 @@  all_busy_check_all(int gem_fd, const unsigned int num_engines,
 	unsigned int i;
 
 	i = 0;
-	for_each_engine_class_instance(gem_fd, e) {
+	__for_each_physical_engine(gem_fd, e) {
 		if (spin)
 			__submit_spin_batch(gem_fd, spin, e, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e));
+			spin = __spin_poll(gem_fd, 0, e);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	}
@@ -592,7 +603,7 @@  all_busy_check_all(int gem_fd, const unsigned int num_engines,
 }
 
 static void
-no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+no_sema(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
 {
 	igt_spin_t *spin;
 	uint64_t val[2][2];
@@ -602,7 +613,7 @@  no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	open_group(I915_PMU_ENGINE_WAIT(e->class, e->instance), fd);
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+		spin = spin_sync(gem_fd, 0, e);
 	else
 		spin = NULL;
 
@@ -631,7 +642,7 @@  no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 #define   MI_SEMAPHORE_SAD_GTE_SDD	(1<<12)
 
 static void
-sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
+sema_wait(int gem_fd, struct intel_execution_engine2 *e,
 	  unsigned int flags)
 {
 	struct drm_i915_gem_relocation_entry reloc[2] = {};
@@ -689,7 +700,7 @@  sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
 
 	eb.buffer_count = 2;
 	eb.buffers_ptr = to_user_pointer(obj);
-	eb.flags = e2ring(gem_fd, e);
+	eb.flags = e->flags;
 
 	/**
 	 * Start the semaphore wait PMU and after some known time let the above
@@ -792,7 +803,7 @@  static int wait_vblank(int fd, union drm_wait_vblank *vbl)
 }
 
 static void
-event_wait(int gem_fd, const struct intel_execution_engine2 *e)
+event_wait(int gem_fd, struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_exec_object2 obj = { };
 	struct drm_i915_gem_execbuffer2 eb = { };
@@ -845,7 +856,7 @@  event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
-	eb.flags = e2ring(gem_fd, e) | I915_EXEC_SECURE;
+	eb.flags = e->flags | I915_EXEC_SECURE;
 
 	for_each_pipe_with_valid_output(&data.display, p, output) {
 		struct igt_helper_process waiter = { };
@@ -917,7 +928,7 @@  event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 }
 
 static void
-multi_client(int gem_fd, const struct intel_execution_engine2 *e)
+multi_client(int gem_fd, struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	unsigned long slept[2];
@@ -936,7 +947,7 @@  multi_client(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	fd[1] = open_pmu(config);
 
-	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = spin_sync(gem_fd, 0, e);
 
 	val[0] = val[1] = __pmu_read_single(fd[0], &ts[0]);
 	slept[1] = measured_usleep(batch_duration_ns / 1000);
@@ -1039,6 +1050,7 @@  static void cpu_hotplug(int gem_fd)
 	igt_spin_t *spin[2];
 	uint64_t ts[2];
 	uint64_t val;
+	uint32_t ctx;
 	int link[2];
 	int fd, ret;
 	int cur = 0;
@@ -1046,14 +1058,18 @@  static void cpu_hotplug(int gem_fd)
 
 	igt_require(cpu0_hotplug_support());
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(I915_ENGINE_CLASS_RENDER, 0));
 
 	/*
 	 * Create two spinners so test can ensure shorter gaps in engine
 	 * busyness as it is terminating one and re-starting the other.
 	 */
-	spin[0] = igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
-	spin[1] = __igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
+	spin[0] = igt_spin_batch_new(gem_fd,
+				     .engine = I915_EXEC_RENDER, .ctx = ctx);
+	spin[1] = __igt_spin_batch_new(gem_fd,
+				       .engine = I915_EXEC_RENDER, .ctx = ctx);
 
 	val = __pmu_read_single(fd, &ts[0]);
 
@@ -1137,6 +1153,7 @@  static void cpu_hotplug(int gem_fd)
 
 		igt_spin_batch_free(gem_fd, spin[cur]);
 		spin[cur] = __igt_spin_batch_new(gem_fd,
+						 .ctx = ctx,
 						 .engine = I915_EXEC_RENDER);
 		cur ^= 1;
 	}
@@ -1150,6 +1167,7 @@  static void cpu_hotplug(int gem_fd)
 	igt_waitchildren();
 	close(fd);
 	close(link[0]);
+	gem_context_destroy(gem_fd, ctx);
 
 	/* Skip if child signals a problem with offlining a CPU. */
 	igt_skip_on(buf == 's');
@@ -1165,17 +1183,21 @@  test_interrupts(int gem_fd)
 	igt_spin_t *spin[target];
 	struct pollfd pfd;
 	uint64_t idle, busy;
+	uint32_t ctx;
 	int fence_fd;
 	int fd;
 
 	gem_quiescent_gpu(gem_fd);
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_INTERRUPTS);
 
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++) {
 		spin[i] = __igt_spin_batch_new(gem_fd,
 					       .engine = I915_EXEC_RENDER,
+					       .ctx = ctx,
 					       .flags = IGT_SPIN_FENCE_OUT);
 		if (i == 0) {
 			fence_fd = spin[i]->out_fence;
@@ -1217,6 +1239,7 @@  test_interrupts(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	gem_context_destroy(gem_fd, ctx);
 
 	igt_assert_lte(target, busy);
 }
@@ -1229,15 +1252,19 @@  test_interrupts_sync(int gem_fd)
 	igt_spin_t *spin[target];
 	struct pollfd pfd;
 	uint64_t idle, busy;
+	uint32_t ctx;
 	int fd;
 
 	gem_quiescent_gpu(gem_fd);
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_INTERRUPTS);
 
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++)
 		spin[i] = __igt_spin_batch_new(gem_fd,
+					       .ctx = ctx,
 					       .flags = IGT_SPIN_FENCE_OUT);
 
 	/* Wait for idle state. */
@@ -1262,6 +1289,7 @@  test_interrupts_sync(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	gem_context_destroy(gem_fd, ctx);
 
 	igt_assert_lte(target, busy);
 }
@@ -1274,6 +1302,9 @@  test_frequency(int gem_fd)
 	double min[2], max[2];
 	igt_spin_t *spin;
 	int fd, sysfs;
+	uint32_t ctx;
+
+	ctx = gem_context_create(gem_fd);
 
 	sysfs = igt_sysfs_open(gem_fd);
 	igt_require(sysfs >= 0);
@@ -1301,7 +1332,7 @@  test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_boost_freq_mhz") == min_freq);
 
 	gem_quiescent_gpu(gem_fd); /* Idle to be sure the change takes effect */
-	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
+	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
 
 	slept = pmu_read_multi(fd, 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1327,7 +1358,7 @@  test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz") == max_freq);
 
 	gem_quiescent_gpu(gem_fd);
-	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
+	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
 
 	slept = pmu_read_multi(fd, 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1348,6 +1379,8 @@  test_frequency(int gem_fd)
 			 min_freq, igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz"));
 	close(fd);
 
+	gem_context_destroy(gem_fd, ctx);
+
 	igt_info("Min frequency: requested %.1f, actual %.1f\n",
 		 min[0], min[1]);
 	igt_info("Max frequency: requested %.1f, actual %.1f\n",
@@ -1448,7 +1481,7 @@  test_rc6(int gem_fd, unsigned int flags)
 }
 
 static void
-test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
+test_enable_race(int gem_fd, struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	struct igt_helper_process engine_load = { };
@@ -1465,7 +1498,7 @@  test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
 
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
-	eb.flags = e2ring(gem_fd, e);
+	eb.flags = e->flags;
 
 	/*
 	 * This test is probabilistic so run in a few times to increase the
@@ -1520,7 +1553,7 @@  static void __rearm_spin_batch(igt_spin_t *spin)
 	__assert_within(x, ref, tolerance, tolerance)
 
 static void
-accuracy(int gem_fd, const struct intel_execution_engine2 *e,
+accuracy(int gem_fd, struct intel_execution_engine2 *e,
 	 unsigned long target_busy_pct,
 	 unsigned long target_iters)
 {
@@ -1570,7 +1603,7 @@  accuracy(int gem_fd, const struct intel_execution_engine2 *e,
 		igt_spin_t *spin;
 
 		/* Allocate our spin batch and idle it. */
-		spin = igt_spin_batch_new(gem_fd, .engine = e2ring(gem_fd, e));
+		spin = igt_spin_batch_new(gem_fd, .engine = e->flags);
 		igt_spin_batch_end(spin);
 		gem_sync(gem_fd, spin->handle);
 
@@ -1674,7 +1707,7 @@  igt_main
 				I915_PMU_LAST - __I915_PMU_OTHER(0) + 1;
 	unsigned int num_engines = 0;
 	int fd = -1;
-	const struct intel_execution_engine2 *e;
+	struct intel_execution_engine2 *e;
 	unsigned int i;
 
 	igt_fixture {
@@ -1683,7 +1716,7 @@  igt_main
 		igt_require_gem(fd);
 		igt_require(i915_type_id() > 0);
 
-		for_each_engine_class_instance(fd, e)
+		__for_each_physical_engine(fd, e)
 			num_engines++;
 	}
 
@@ -1693,7 +1726,7 @@  igt_main
 	igt_subtest("invalid-init")
 		invalid_init();
 
-	__for_each_engine_class_instance(e) {
+	__for_each_physical_engine(fd, e) {
 		const unsigned int pct[] = { 2, 50, 98 };
 
 		/**
@@ -1897,7 +1930,7 @@  igt_main
 			gem_quiescent_gpu(fd);
 		}
 
-		__for_each_engine_class_instance(e) {
+		__for_each_physical_engine(render_fd, e) {
 			igt_subtest_group {
 				igt_fixture {
 					gem_require_engine(render_fd,