Message ID | 20181207090213.14352-1-chris@chris-wilson.co.uk (mailing list archive)
State      | New, archived
Series     | [1/3] drm/i915: Push EMIT_INVALIDATE at request start to backends
On 07/12/2018 09:02, Chris Wilson wrote:
> Move the common engine->emit_flush(EMIT_INVALIDATE) back to the backends
> (where it was once previously) as we seek to specialise it in future
> patches.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_request.c     | 5 -----
>  drivers/gpu/drm/i915/intel_lrc.c        | 9 ++++++---
>  drivers/gpu/drm/i915/intel_ringbuffer.c | 6 ++++--
>  3 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index ca95ab2f4cfa..8ab8e8e6a086 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -719,11 +719,6 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
>  	 */
>  	rq->head = rq->ring->emit;
>
> -	/* Unconditionally invalidate GPU caches and TLBs. */
> -	ret = engine->emit_flush(rq, EMIT_INVALIDATE);
> -	if (ret)
> -		goto err_unwind;
> -
>  	ret = engine->request_alloc(rq);
>  	if (ret)
>  		goto err_unwind;
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 27d3a780611a..b1f5db3442eb 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1253,17 +1253,20 @@ static int execlists_request_alloc(struct i915_request *request)
>
>  	GEM_BUG_ON(!request->hw_context->pin_count);
>
> -	/* Flush enough space to reduce the likelihood of waiting after
> +	/*
> +	 * Flush enough space to reduce the likelihood of waiting after
>  	 * we start building the request - in which case we will just
>  	 * have to repeat work.
>  	 */
>  	request->reserved_space += EXECLISTS_REQUEST_SIZE;
>
> -	ret = intel_ring_wait_for_space(request->ring, request->reserved_space);
> +	/* Unconditionally invalidate GPU caches and TLBs. */
> +	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
>  	if (ret)
>  		return ret;
>
> -	/* Note that after this point, we have committed to using
> +	/*
> +	 * Note that after this point, we have committed to using
>  	 * this request as it is being used to both track the
>  	 * state of engine initialisation and liveness of the
>  	 * golden renderstate above. Think twice before you try
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index c5eb26a7ee79..16084749adf5 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -1820,13 +1820,15 @@ static int ring_request_alloc(struct i915_request *request)
>
>  	GEM_BUG_ON(!request->hw_context->pin_count);
>
> -	/* Flush enough space to reduce the likelihood of waiting after
> +	/*
> +	 * Flush enough space to reduce the likelihood of waiting after
>  	 * we start building the request - in which case we will just
>  	 * have to repeat work.
>  	 */
>  	request->reserved_space += LEGACY_REQUEST_SIZE;
>
> -	ret = intel_ring_wait_for_space(request->ring, request->reserved_space);
> +	/* Unconditionally invalidate GPU caches and TLBs. */
> +	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
>  	if (ret)
>  		return ret;

intel_ring_wait_for_space is the bit paranoid me actually wanted to have split out. But okay, maybe I did not say it clearly enough. This already helps with singling out that change should something unexpected happen.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
Quoting Patchwork (2018-12-07 11:28:17)
> == Series Details ==
>
> Series: series starting with [1/3] drm/i915: Push EMIT_INVALIDATE at request start to backends
> URL    : https://patchwork.freedesktop.org/series/53729/
> State  : success
>
> == Summary ==
>
> CI Bug Log - changes from CI_DRM_5282 -> Patchwork_11044
> ====================================================
>
> Summary
> -------
>
> **SUCCESS**
>
> No regressions found.

With all fingers crossed, pushed.
-Chris
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index ca95ab2f4cfa..8ab8e8e6a086 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -719,11 +719,6 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
 	 */
 	rq->head = rq->ring->emit;

-	/* Unconditionally invalidate GPU caches and TLBs. */
-	ret = engine->emit_flush(rq, EMIT_INVALIDATE);
-	if (ret)
-		goto err_unwind;
-
 	ret = engine->request_alloc(rq);
 	if (ret)
 		goto err_unwind;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 27d3a780611a..b1f5db3442eb 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1253,17 +1253,20 @@ static int execlists_request_alloc(struct i915_request *request)

 	GEM_BUG_ON(!request->hw_context->pin_count);

-	/* Flush enough space to reduce the likelihood of waiting after
+	/*
+	 * Flush enough space to reduce the likelihood of waiting after
 	 * we start building the request - in which case we will just
 	 * have to repeat work.
 	 */
 	request->reserved_space += EXECLISTS_REQUEST_SIZE;

-	ret = intel_ring_wait_for_space(request->ring, request->reserved_space);
+	/* Unconditionally invalidate GPU caches and TLBs. */
+	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
 	if (ret)
 		return ret;

-	/* Note that after this point, we have committed to using
+	/*
+	 * Note that after this point, we have committed to using
 	 * this request as it is being used to both track the
 	 * state of engine initialisation and liveness of the
 	 * golden renderstate above. Think twice before you try
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c5eb26a7ee79..16084749adf5 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1820,13 +1820,15 @@ static int ring_request_alloc(struct i915_request *request)

 	GEM_BUG_ON(!request->hw_context->pin_count);

-	/* Flush enough space to reduce the likelihood of waiting after
+	/*
+	 * Flush enough space to reduce the likelihood of waiting after
 	 * we start building the request - in which case we will just
 	 * have to repeat work.
 	 */
 	request->reserved_space += LEGACY_REQUEST_SIZE;

-	ret = intel_ring_wait_for_space(request->ring, request->reserved_space);
+	/* Unconditionally invalidate GPU caches and TLBs. */
+	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
 	if (ret)
 		return ret;
Move the common engine->emit_flush(EMIT_INVALIDATE) back to the backends
(where it was once previously) as we seek to specialise it in future
patches.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_request.c     | 5 -----
 drivers/gpu/drm/i915/intel_lrc.c        | 9 ++++++---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 6 ++++--
 3 files changed, 10 insertions(+), 10 deletions(-)