Message ID | 1424366285-29232-25-git-send-email-John.C.Harrison@Intel.com (mailing list archive)
---|---
State | New, archived |
On 19/02/2015 17:17, John.C.Harrison@Intel.com wrote:
> From: John Harrison <John.C.Harrison@Intel.com>
>
> In execlist mode, context initialisation is deferred until first use of the
> given context. This is because execlist mode has many more contexts than legacy
> mode and many are never actually used. Previously, the initialisation commands
> were written to the ring and tagged with some random request structure via the
> OLR. This seemed to be causing a null pointer dereference bug under certain
> circumstances (BZ:40112).
>
> This patch adds explicit request creation and submission to the deferred
> initialisation code path, thus removing any reliance on or randomness caused by
> the OLR.
>
> For: VIZ-5115
> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index dff7829..4bcb70e 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1849,6 +1849,7 @@ static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
>  int intel_lr_context_deferred_create(struct intel_context *ctx,
>  				     struct intel_engine_cs *ring)
>  {
> +	struct drm_i915_private *dev_priv = ring->dev->dev_private;
>  	const bool is_global_default_ctx = (ctx == ring->default_context);
>  	struct drm_device *dev = ring->dev;
>  	struct drm_i915_gem_object *ctx_obj;
> @@ -1929,13 +1930,27 @@ int intel_lr_context_deferred_create(struct intel_context *ctx,
>  		lrc_setup_hardware_status_page(ring, ctx_obj);
>  	else if (ring->id == RCS && !ctx->rcs_initialized) {
>  		if (ring->init_context) {
> -			ret = ring->init_context(ring, ctx);
> +			struct drm_i915_gem_request *req;
> +
> +			ret = dev_priv->gt.alloc_request(ring, ctx, &req);
> +			if (ret)
> +				return ret;
> +
> +			ret = ring->init_context(req->ring, ctx);
>  			if (ret) {
>  				DRM_ERROR("ring init context: %d\n", ret);
> +				i915_gem_request_unreference(req);
>  				ctx->engine[ring->id].ringbuf = NULL;
>  				ctx->engine[ring->id].state = NULL;
>  				goto error;
>  			}
> +
> +			ret = i915_add_request_no_flush(req->ring);
> +			if (ret) {
> +				DRM_ERROR("ring init context: %d\n", ret);
> +				i915_gem_request_unreference(req);
> +				goto error;
> +			}
>  		}
>
>  		ctx->rcs_initialized = true;

Reviewed-by: Tomas Elf <tomas.elf@intel.com>

Thanks,
Tomas