Message ID | 20180815111351.11711-1-chris@chris-wilson.co.uk |
---|---|
State | New, archived |
Series | RFT: Do we detect WA_TAIL? |
Quoting Chris Wilson (2018-08-15 12:13:51)

The answer is no. So either our wait-for-ack is the ultimate panacea, or
we just don't have the right pattern to trigger the bug.
-Chris
Quoting Chris Wilson (2018-08-15 13:23:12)
> Quoting Chris Wilson (2018-08-15 12:13:51)
>
> The answer is no. So either our wait-for-ack is the ultimate panacea, or
> we just don't have the right pattern to trigger the bug.

Fwiw, the small batch sizes for gem_concurrent_blit were good at hitting
it on bdw/skl.
-Chris
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3f90c74038ef..fff2fbb6bac5 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -161,7 +161,7 @@
 /* Typical size of the average request (2 pipecontrols and a MI_BB) */
 #define EXECLISTS_REQUEST_SIZE 64 /* bytes */
 
-#define WA_TAIL_DWORDS 2
+#define WA_TAIL_DWORDS 0
 #define WA_TAIL_BYTES (sizeof(u32) * WA_TAIL_DWORDS)
 
 static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
@@ -2195,8 +2195,10 @@ static int gen8_emit_flush_render(struct i915_request *request,
 static void gen8_emit_wa_tail(struct i915_request *request, u32 *cs)
 {
 	/* Ensure there's always at least one preemption point per-request. */
-	*cs++ = MI_ARB_CHECK;
-	*cs++ = MI_NOOP;
+	if (WA_TAIL_DWORDS) {
+		*cs++ = MI_ARB_CHECK;
+		*cs++ = MI_NOOP;
+	}
 	request->wa_tail = intel_ring_offset(request, cs);
 }
 
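For reference, here is a sketch of how gen8_emit_wa_tail reads once the hunk above is applied; this is only a reconstruction from the diff, not a separate change. With WA_TAIL_DWORDS forced to 0, the if (WA_TAIL_DWORDS) block is constant-false and compiled out, so no MI_ARB_CHECK/MI_NOOP tail is emitted and wa_tail simply records the current ring offset; WA_TAIL_BYTES likewise evaluates to 0.

```c
/* Reconstruction of the patched code path, for illustration only. */
#define WA_TAIL_DWORDS 0
#define WA_TAIL_BYTES (sizeof(u32) * WA_TAIL_DWORDS)	/* 0 bytes with this RFT */

static void gen8_emit_wa_tail(struct i915_request *request, u32 *cs)
{
	/* Ensure there's always at least one preemption point per-request. */
	if (WA_TAIL_DWORDS) {	/* constant 0 here, so this block is compiled out */
		*cs++ = MI_ARB_CHECK;
		*cs++ = MI_NOOP;
	}
	request->wa_tail = intel_ring_offset(request, cs);
}
```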