Message ID | 1441374564-4074-1-git-send-email-nicholas.hoath@intel.com (mailing list archive)
State      | New, archived
This needs a rebase, as a new GuC TLB invalidation call has been added when the context objects are pinned. Your patch misses this for the default context. In fact I reckon there's a case for consolidating the two (default / non-default) object pinning sequences into a single function, to avoid further duplication or missed bits in future. Also, as you're rebasing, see my nit-picks below.

> -----Original Message-----
> From: Intel-gfx [mailto:intel-gfx-bounces@lists.freedesktop.org] On Behalf Of
> Nick Hoath
> Sent: Friday, September 4, 2015 2:49 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: Daniel Vetter
> Subject: [Intel-gfx] [PATCH] drm/i915: Split alloc from init for lrc
>
> Extend init/init_hw split to context init.
>  - Move context initialisation in to i915_gem_init_hw
>  - Move one off initialisation for render ring to
>    i915_gem_validate_context
>  - Move default context initialisation to logical_ring_init
>
> Rename intel_lr_context_deferred_create to
> intel_lr_context_deferred_alloc, to reflect reduced functionality &
> alloc/init split.
>
> This patch is intended to split out the allocation of resources &
> initialisation to allow easier reuse of code for resume/gpu reset.
>
> v2: Removed function ptr wrapping of do_switch_context (Daniel Vetter)
>     Left ->init_context int intel_lr_context_deferred_alloc
>     (Daniel Vetter)
>     Remove unnecessary init flag & ring type test. (Daniel Vetter)
>     Improve commit message (Daniel Vetter)
> v3: On init/reinit, set the hw next sequence number to the sw next
>     sequence number. This is set to 1 at driver load time. This prevents
>     the seqno being reset on reinit (Chris Wilson)
> v4: Set seqno back to ~0 - 0x1000 at start-of-day, and increment by 0x100
>     on reset.
>     This makes it obvious which bbs are which after a reset. (David Gordon
>     & John Harrison)
>     Rebase.
>
> Issue: VIZ-4798
> Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: John Harrison <john.c.harrison@intel.com>
> Cc: David Gordon <david.s.gordon@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_drv.h            |   1 -
>  drivers/gpu/drm/i915/i915_gem.c            |  24 +++-
>  drivers/gpu/drm/i915/i915_gem_execbuffer.c |   3 +-
>  drivers/gpu/drm/i915/intel_lrc.c           | 155 ++++++++++++++---------------
>  drivers/gpu/drm/i915/intel_lrc.h           |   4 +-
>  5 files changed, 93 insertions(+), 94 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 1287007..ded7158 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -888,7 +888,6 @@ struct intel_context {
>      } legacy_hw_ctx;
>
>      /* Execlists */
> -    bool rcs_initialized;
>      struct {
>          struct drm_i915_gem_object *state;
>          struct intel_ringbuffer *ringbuf;
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 41263cd..c8125a5 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4613,14 +4613,8 @@ int i915_gem_init_rings(struct drm_device *dev)
>          goto cleanup_vebox_ring;
>      }
>
> -    ret = i915_gem_set_seqno(dev, ((u32)~0 - 0x1000));
> -    if (ret)
> -        goto cleanup_bsd2_ring;
> -
>      return 0;
>
> -cleanup_bsd2_ring:
> -    intel_cleanup_ring_buffer(&dev_priv->ring[VCS2]);
>  cleanup_vebox_ring:
>      intel_cleanup_ring_buffer(&dev_priv->ring[VECS]);
>  cleanup_blt_ring:
> @@ -4639,6 +4633,7 @@ i915_gem_init_hw(struct drm_device *dev)
>      struct drm_i915_private *dev_priv = dev->dev_private;
>      struct intel_engine_cs *ring;
>      int ret, i, j;
> +    struct drm_i915_gem_request *req;

Please leave req in the loop scope below to avoid unnecessary code churn.
>
>      if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
>          return -EIO;
> @@ -4706,9 +4701,16 @@ i915_gem_init_hw(struct drm_device *dev)
>          goto out;
>      }
>
> +    /*
> +     * Increment the next seqno by 0x100 so we have a visible break
> +     * on re-initialisation
> +     */
> +    ret = i915_gem_set_seqno(dev, dev_priv->next_seqno+0x100);
> +    if (ret)
> +        goto out;
> +
>      /* Now it is safe to go back round and do everything else: */
>      for_each_ring(ring, dev_priv, i) {
> -        struct drm_i915_gem_request *req;
>
>          WARN_ON(!ring->default_context);
>
> @@ -4907,6 +4909,14 @@ i915_gem_load(struct drm_device *dev)
>          dev_priv->num_fence_regs =
>                  I915_READ(vgtif_reg(avail_rs.fence_num));
>
> +    /*
> +     * Set initial sequence number for requests.
> +     * Using this number allows the wraparound to happen early,
> +     * catching any obvious problems.
> +     */
> +    dev_priv->next_seqno = ((u32)~0 - 0x1100);
> +    dev_priv->last_seqno = ((u32)~0 - 0x1101);
> +
>      /* Initialize fence registers to zero */
>      INIT_LIST_HEAD(&dev_priv->mm.fence_list);
>      i915_gem_restore_fences(dev);
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index a953d49..64674dc 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -994,6 +994,7 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
>  {
>      struct intel_context *ctx = NULL;
>      struct i915_ctx_hang_stats *hs;
> +    int ret;

Another strictly unnecessary scope change. I would normally prefer function-scope ret, but in this case we usually return ctx.
>
>      if (ring->id != RCS && ctx_id != DEFAULT_CONTEXT_HANDLE)
>          return ERR_PTR(-EINVAL);
> @@ -1009,7 +1010,7 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
>      }
>
>      if (i915.enable_execlists && !ctx->engine[ring->id].state) {
> -        int ret = intel_lr_context_deferred_create(ctx, ring);
> +        ret = intel_lr_context_deferred_alloc(ctx, ring);
>          if (ret) {
>              DRM_DEBUG("Could not create LRC %u: %d\n", ctx_id, ret);
>              return ERR_PTR(ret);
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 28a712e..ebe30a9 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1445,11 +1445,32 @@ out:
>      return ret;
>  }
>
> +static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
> +        struct drm_i915_gem_object *default_ctx_obj)
> +{
> +    struct drm_i915_private *dev_priv = ring->dev->dev_private;
> +    struct page *page;
> +
> +    /* The HWSP is part of the default context object in LRC mode. */
> +    ring->status_page.gfx_addr = i915_gem_obj_ggtt_offset(default_ctx_obj)
> +            + LRC_PPHWSP_PN * PAGE_SIZE;
> +    page = i915_gem_object_get_page(default_ctx_obj, LRC_PPHWSP_PN);
> +    ring->status_page.page_addr = kmap(page);
> +    ring->status_page.obj = default_ctx_obj;
> +
> +    I915_WRITE(RING_HWS_PGA(ring->mmio_base),
> +           (u32)ring->status_page.gfx_addr);
> +    POSTING_READ(RING_HWS_PGA(ring->mmio_base));
> +}
> +

Moving this function will lead to unnecessary churn, making it difficult to follow history with git blame and to know whether you actually changed the function. Maybe adding a forward declaration is the better way to go. Not sure what is preferred generally though...
>  static int gen8_init_common_ring(struct intel_engine_cs *ring)
>  {
>      struct drm_device *dev = ring->dev;
>      struct drm_i915_private *dev_priv = dev->dev_private;
>
> +    lrc_setup_hardware_status_page(ring,
> +            ring->default_context->engine[ring->id].state);
> +
>      I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
>      I915_WRITE(RING_HWSTAM(ring->mmio_base), 0xffffffff);
>
> @@ -1889,8 +1910,33 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
>      if (ret)
>          return ret;
>
> -    ret = intel_lr_context_deferred_create(ring->default_context, ring);
> +    ret = intel_lr_context_deferred_alloc(ring->default_context, ring);
> +    if (ret)
> +        return ret;
> +
> +    ret = i915_gem_obj_ggtt_pin(
> +            ring->default_context->engine[ring->id].state,
> +            GEN8_LR_CONTEXT_ALIGN, 0);
> +    if (ret) {
> +        DRM_DEBUG_DRIVER("Pin LRC backing obj failed: %d\n",
> +                ret);
> +        return ret;
> +    }
> +
> +    ret = intel_pin_and_map_ringbuffer_obj(dev,
> +            ring->default_context->engine[ring->id].ringbuf);
> +    if (ret) {
> +        DRM_ERROR(
> +            "Failed to pin and map ringbuffer %s: %d\n",
> +            ring->name, ret);
> +        goto error_unpin_ggtt;
> +    }
> +
> +    return ret;
>
> +error_unpin_ggtt:
> +    i915_gem_object_ggtt_unpin(
> +            ring->default_context->engine[ring->id].state);
>      return ret;
>  }
>
> @@ -2112,14 +2158,8 @@ int intel_logical_rings_init(struct drm_device *dev)
>          goto cleanup_vebox_ring;
>      }
>
> -    ret = i915_gem_set_seqno(dev, ((u32)~0 - 0x1000));
> -    if (ret)
> -        goto cleanup_bsd2_ring;
> -
>      return 0;
>
> -cleanup_bsd2_ring:
> -    intel_logical_ring_cleanup(&dev_priv->ring[VCS2]);
>  cleanup_vebox_ring:
>      intel_logical_ring_cleanup(&dev_priv->ring[VECS]);
>  cleanup_blt_ring:
> @@ -2370,26 +2410,8 @@ static uint32_t get_lr_context_size(struct intel_engine_cs *ring)
>      return ret;
>  }
>
> -static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
> -        struct drm_i915_gem_object *default_ctx_obj)
> -{
> -    struct drm_i915_private *dev_priv = ring->dev->dev_private;
> -    struct page *page;
> -
> -    /* The HWSP is part of the default context object in LRC mode. */
> -    ring->status_page.gfx_addr = i915_gem_obj_ggtt_offset(default_ctx_obj)
> -            + LRC_PPHWSP_PN * PAGE_SIZE;
> -    page = i915_gem_object_get_page(default_ctx_obj, LRC_PPHWSP_PN);
> -    ring->status_page.page_addr = kmap(page);
> -    ring->status_page.obj = default_ctx_obj;
> -
> -    I915_WRITE(RING_HWS_PGA(ring->mmio_base),
> -           (u32)ring->status_page.gfx_addr);
> -    POSTING_READ(RING_HWS_PGA(ring->mmio_base));
> -}
> -
>  /**
> - * intel_lr_context_deferred_create() - create the LRC specific bits of a context
> + * intel_lr_context_deferred_alloc() - create the LRC specific bits of a context
>   * @ctx: LR context to create.
>   * @ring: engine to be used with the context.
>   *
> @@ -2401,12 +2423,11 @@ static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
>   *
>   * Return: non-zero on error.
>   */
> -int intel_lr_context_deferred_create(struct intel_context *ctx,
> +
> +int intel_lr_context_deferred_alloc(struct intel_context *ctx,
>                      struct intel_engine_cs *ring)
>  {
> -    const bool is_global_default_ctx = (ctx == ring->default_context);
>      struct drm_device *dev = ring->dev;
> -    struct drm_i915_private *dev_priv = dev->dev_private;
>      struct drm_i915_gem_object *ctx_obj;
>      uint32_t context_size;
>      struct intel_ringbuffer *ringbuf;
> @@ -2426,82 +2447,50 @@ int intel_lr_context_deferred_create(struct intel_context *ctx,
>          return -ENOMEM;
>      }
>
> -    if (is_global_default_ctx) {
> -        ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN,
> -                PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
> -        if (ret) {
> -            DRM_DEBUG_DRIVER("Pin LRC backing obj failed: %d\n",
> -                    ret);
> -            drm_gem_object_unreference(&ctx_obj->base);
> -            return ret;
> -        }
> -
> -        /* Invalidate GuC TLB. */
> -        if (i915.enable_guc_submission)
> -            I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
> -    }
> -
>      ringbuf = intel_engine_create_ringbuffer(ring, 4 * PAGE_SIZE);
>      if (IS_ERR(ringbuf)) {
>          ret = PTR_ERR(ringbuf);
> -        goto error_unpin_ctx;
> -    }
> -
> -    if (is_global_default_ctx) {
> -        ret = intel_pin_and_map_ringbuffer_obj(dev, ringbuf);
> -        if (ret) {
> -            DRM_ERROR(
> -                "Failed to pin and map ringbuffer %s: %d\n",
> -                ring->name, ret);
> -            goto error_ringbuf;
> -        }
> +        goto error_deref_obj;
>      }
>
>      ret = populate_lr_context(ctx, ctx_obj, ring, ringbuf);
>      if (ret) {
>          DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
> -        goto error;
> +        goto error_ringbuf;
>      }
>
>      ctx->engine[ring->id].ringbuf = ringbuf;
>      ctx->engine[ring->id].state = ctx_obj;
>
> -    if (ctx == ring->default_context)
> -        lrc_setup_hardware_status_page(ring, ctx_obj);
> -    else if (ring->id == RCS && !ctx->rcs_initialized) {
> -        if (ring->init_context) {
> -            struct drm_i915_gem_request *req;
> -
> -            ret = i915_gem_request_alloc(ring, ctx, &req);
> -            if (ret)
> -                return ret;
> +    if (ctx != ring->default_context && ring->init_context) {
> +        struct drm_i915_gem_request *req;
>
> -            ret = ring->init_context(req);
> -            if (ret) {
> -                DRM_ERROR("ring init context: %d\n", ret);
> -                i915_gem_request_cancel(req);
> -                ctx->engine[ring->id].ringbuf = NULL;
> -                ctx->engine[ring->id].state = NULL;
> -                goto error;
> -            }
> -
> -            i915_add_request_no_flush(req);
> +        ret = i915_gem_request_alloc(ring,
> +            ring->default_context, &req);
> +        if (ret) {
> +            DRM_ERROR("ring create req: %d\n",
> +                ret);
> +            i915_gem_request_cancel(req);
> +            goto error_ringbuf;
>          }
>
> -        ctx->rcs_initialized = true;
> +        ret = ring->init_context(req);
> +        if (ret) {
> +            DRM_ERROR("ring init context: %d\n",
> +                ret);
> +            i915_gem_request_cancel(req);
> +            goto error_ringbuf;
> +        }
> +        i915_add_request_no_flush(req);
>      }
> -
>      return 0;
>
> -error:
> -    if (is_global_default_ctx)
> -        intel_unpin_ringbuffer_obj(ringbuf);
>  error_ringbuf:
>      intel_ringbuffer_free(ringbuf);
> -error_unpin_ctx:
> -    if (is_global_default_ctx)
> -        i915_gem_object_ggtt_unpin(ctx_obj);
> +error_deref_obj:
>      drm_gem_object_unreference(&ctx_obj->base);
> +    ctx->engine[ring->id].ringbuf = NULL;
> +    ctx->engine[ring->id].state = NULL;
>      return ret;
>  }
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
> index 4cc54b3..69d99f0 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.h
> +++ b/drivers/gpu/drm/i915/intel_lrc.h
> @@ -75,8 +75,8 @@ static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
>  #define LRC_STATE_PN    (LRC_PPHWSP_PN + 1)
>
>  void intel_lr_context_free(struct intel_context *ctx);
> -int intel_lr_context_deferred_create(struct intel_context *ctx,
> -                     struct intel_engine_cs *ring);
> +int intel_lr_context_deferred_alloc(struct intel_context *ctx,
> +                    struct intel_engine_cs *ring);
>  void intel_lr_context_unpin(struct drm_i915_gem_request *req);
>  void intel_lr_context_reset(struct drm_device *dev,
>                  struct intel_context *ctx);
> --
> 2.1.1
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx