Message ID | 20180619135556.17993-1-chris@chris-wilson.co.uk (mailing list archive)
---|---
State | New, archived
On 19/06/18 06:55, Chris Wilson wrote:
> When using the pollable spinner, we often want to use it as a means of
> ensuring the task is running on the GPU before switching to something
> else. In which case we don't want to add extra delay inside the spinner,
> but the current 1000 NOPs add on the order of 5us, which is often larger
> than the target latency.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Antonio Argenziano <antonio.argenziano@intel.com>

> ---
>  lib/igt_dummyload.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
> index d32b421c6..b090b8004 100644
> --- a/lib/igt_dummyload.c
> +++ b/lib/igt_dummyload.c
> @@ -78,6 +78,7 @@ fill_reloc(struct drm_i915_gem_relocation_entry *reloc,
>  #define OUT_FENCE (1 << 0)
>  #define POLL_RUN (1 << 1)
>  #define NO_PREEMPTION (1 << 2)
> +#define SPIN_FAST (1 << 3)
>
>  static int
>  emit_recursive_batch(igt_spin_t *spin, int fd, uint32_t ctx, unsigned engine,
> @@ -212,7 +213,8 @@ emit_recursive_batch(igt_spin_t *spin, int fd, uint32_t ctx, unsigned engine,
>  	 * between function calls, that appears enough to keep SNB out of
>  	 * trouble. See https://bugs.freedesktop.org/show_bug.cgi?id=102262
>  	 */
> -	batch += 1000;
> +	if (!(flags & SPIN_FAST))
> +		batch += 1000;
>
>  	/* recurse */
>  	r = &relocs[obj[BATCH].relocation_count++];
> @@ -369,7 +371,7 @@ igt_spin_batch_new_fence(int fd, uint32_t ctx, unsigned engine)
>  igt_spin_t *
>  __igt_spin_batch_new_poll(int fd, uint32_t ctx, unsigned engine)
>  {
> -	return ___igt_spin_batch_new(fd, ctx, engine, 0, POLL_RUN);
> +	return ___igt_spin_batch_new(fd, ctx, engine, 0, POLL_RUN | SPIN_FAST);
>  }
>
>  igt_spin_t *
diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index d32b421c6..b090b8004 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -78,6 +78,7 @@ fill_reloc(struct drm_i915_gem_relocation_entry *reloc,
 #define OUT_FENCE (1 << 0)
 #define POLL_RUN (1 << 1)
 #define NO_PREEMPTION (1 << 2)
+#define SPIN_FAST (1 << 3)

 static int
 emit_recursive_batch(igt_spin_t *spin, int fd, uint32_t ctx, unsigned engine,
@@ -212,7 +213,8 @@ emit_recursive_batch(igt_spin_t *spin, int fd, uint32_t ctx, unsigned engine,
 	 * between function calls, that appears enough to keep SNB out of
 	 * trouble. See https://bugs.freedesktop.org/show_bug.cgi?id=102262
 	 */
-	batch += 1000;
+	if (!(flags & SPIN_FAST))
+		batch += 1000;

 	/* recurse */
 	r = &relocs[obj[BATCH].relocation_count++];
@@ -369,7 +371,7 @@ igt_spin_batch_new_fence(int fd, uint32_t ctx, unsigned engine)
 igt_spin_t *
 __igt_spin_batch_new_poll(int fd, uint32_t ctx, unsigned engine)
 {
-	return ___igt_spin_batch_new(fd, ctx, engine, 0, POLL_RUN);
+	return ___igt_spin_batch_new(fd, ctx, engine, 0, POLL_RUN | SPIN_FAST);
 }

 igt_spin_t *
When using the pollable spinner, we often want to use it as a means of
ensuring the task is running on the GPU before switching to something
else. In which case we don't want to add extra delay inside the spinner,
but the current 1000 NOPs add on the order of 5us, which is often larger
than the target latency.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_dummyload.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)