Message ID: 20180514155600.16521-8-lionel.g.landwerlin@intel.com (mailing list archive)
State: New, archived
On 14/05/2018 16:56, Lionel Landwerlin wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
>
> We want to allow userspace to reconfigure the subslice configuration for
> its own use case. To do so, we expose a context parameter to allow
> adjustment of the RPCS register stored within the context image (and
> currently not accessible via LRI). If the context is adjusted before
> first use, the adjustment is for "free"; otherwise if the context is
> active we flush the context off the GPU (stalling all users) and force
> the GPU to save the context to memory, where we can modify it and so
> ensure that the register is reloaded on next execution.
>
> The overhead of managing additional EU subslices can be significant,
> especially in multi-context workloads. Non-GPGPU contexts should
> preferably disable the subslices they are not using, and others should
> fine-tune the number to match their workload.
>
> We expose complete control over the RPCS register, allowing
> configuration of slice/subslice, via masks packed into a u64 for
> simplicity. For example,
>
>    struct drm_i915_gem_context_param arg;
>    struct drm_i915_gem_context_param_sseu sseu = { .class = 0,
>                                                    .instance = 0, };
>
>    memset(&arg, 0, sizeof(arg));
>    arg.ctx_id = ctx;
>    arg.param = I915_CONTEXT_PARAM_SSEU;
>    arg.value = (uintptr_t) &sseu;
>    if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) {
>        sseu.subslice_mask = 0;
>
>        drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
>    }
>
> could be used to disable all subslices where supported.
> > v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() (Lionel) > > v3: Add ability to program this per engine (Chris) > > v4: Move most get_sseu() into i915_gem_context.c (Lionel) > > v5: Validate sseu configuration against the device's capabilities (Lionel) > > v6: Change context powergating settings through MI_SDM on kernel context (Chris) > > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899 > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> > Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> > c: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> > CC: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > CC: Zhipeng Gong <zhipeng.gong@intel.com> > CC: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> > --- > drivers/gpu/drm/i915/i915_gem_context.c | 169 ++++++++++++++++++++++++ > drivers/gpu/drm/i915/intel_lrc.c | 103 ++++++++++----- > drivers/gpu/drm/i915/intel_ringbuffer.c | 2 + > drivers/gpu/drm/i915/intel_ringbuffer.h | 4 + > include/uapi/drm/i915_drm.h | 38 ++++++ > 5 files changed, 281 insertions(+), 35 deletions(-) > > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c > index 01310c99e032..0b72a771c3f3 100644 > --- a/drivers/gpu/drm/i915/i915_gem_context.c > +++ b/drivers/gpu/drm/i915/i915_gem_context.c > @@ -734,6 +734,110 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data, > return 0; > } > > +static int > +intel_sseu_from_user_sseu(const struct sseu_dev_info *sseu, > + const struct drm_i915_gem_context_param_sseu *user_sseu, > + union intel_sseu *ctx_sseu) > +{ > + if ((user_sseu->slice_mask & ~sseu->slice_mask) != 0 || > + user_sseu->slice_mask == 0) > + return -EINVAL; > + > + if ((user_sseu->subslice_mask & ~sseu->subslice_mask[0]) != 0 || > + user_sseu->subslice_mask == 0) > + return -EINVAL; > + > + if (user_sseu->min_eus_per_subslice > sseu->max_eus_per_subslice) > + return -EINVAL; > + > + if (user_sseu->max_eus_per_subslice > 
sseu->max_eus_per_subslice || > + user_sseu->max_eus_per_subslice < user_sseu->min_eus_per_subslice || > + user_sseu->max_eus_per_subslice == 0) > + return -EINVAL; > + > + ctx_sseu->slice_mask = user_sseu->slice_mask; > + ctx_sseu->subslice_mask = user_sseu->subslice_mask; > + ctx_sseu->min_eus_per_subslice = user_sseu->min_eus_per_subslice; > + ctx_sseu->max_eus_per_subslice = user_sseu->max_eus_per_subslice; > + > + return 0; > +} > + > +static int > +i915_gem_context_reconfigure_sseu(struct i915_gem_context *ctx, > + struct intel_engine_cs *engine, > + union intel_sseu sseu) > +{ > + struct drm_i915_private *dev_priv = ctx->i915; > + struct i915_timeline *timeline; > + struct i915_request *rq; > + union intel_sseu actual_sseu; > + enum intel_engine_id id; > + int ret; > + > + /* > + * First notify user when this capability is not available so that it > + * can be detected with any valid input. > + */ > + if (!engine->emit_rpcs_config) > + return -ENODEV; > + > + if (to_intel_context(ctx, engine)->sseu.value == sseu.value) Are there other uses for the value union in the series? Think whether it could be dropped and memcmp used here for simplicity. > + return 0; > + > + lockdep_assert_held(&dev_priv->drm.struct_mutex); > + > + i915_retire_requests(dev_priv); > + > + /* Now use the RCS to actually reconfigure. */ > + engine = dev_priv->engine[RCS]; > + > + rq = i915_request_alloc(engine, dev_priv->kernel_context); > + if (IS_ERR(rq)) > + return PTR_ERR(rq); > + > + /* > + * If i915/perf is active, we want a stable powergating configuration > + * on the system. Act as if we recorded the user's request but program > + * the default sseu configuration. When i915/perf is stopped, the > + * recorded configuration will be programmed. > + */ > + actual_sseu = dev_priv->perf.oa.exclusive_stream ? This feels it should jump out more since there is no obvious connection between the two. Probably a helper of some sort like intel_sanitize_sseu? Or intel_apply_sseu? Merge? ... 
? The comment would then be in the helper which I think would be better than sprinkle OA knowledge into context params code. > + intel_sseu_from_device_sseu(&INTEL_INFO(dev_priv)->sseu) : > + sseu; > + > + ret = engine->emit_rpcs_config(rq, ctx, actual_sseu); > + if (ret) { > + __i915_request_add(rq, true); > + return ret; > + } > + > + /* Queue this switch after all other activity */ > + list_for_each_entry(timeline, &dev_priv->gt.timelines, link) { > + struct i915_request *prev; > + > + prev = last_request_on_engine(timeline, engine); > + if (prev) > + i915_sw_fence_await_sw_fence_gfp(&rq->submit, > + &prev->submit, > + I915_FENCE_GFP); > + } > + > + __i915_request_add(rq, true); This is actually the bit I reading the patch for. So this I think is much better/safer than previous idling. However one concern, and maybe not a concern but just something which needs to be explained in the uAPI, is what it means with regards to the point from which the new configuration becomes live. Could preemption for instance make it not defined enough? Could we, or should we, also link this request somehow onto the issuing context timeline so it must be first there? Hm, should we use the user context instead of the kernel one, and set the highest priority? But would have to avoid triggering preemption. > + /* > + * Apply the configuration to all engine. Our hardware doesn't > + * currently support different configurations for each engine. 
> + */ > + for_each_engine(engine, dev_priv, id) { > + struct intel_context *ce = to_intel_context(ctx, engine); > + > + ce->sseu.value = sseu.value; > + } > + > + return 0; > +} > + > int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data, > struct drm_file *file) > { > @@ -771,6 +875,37 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data, > case I915_CONTEXT_PARAM_PRIORITY: > args->value = ctx->sched.priority; > break; > + case I915_CONTEXT_PARAM_SSEU: { > + struct drm_i915_gem_context_param_sseu param_sseu; > + struct intel_engine_cs *engine; > + struct intel_context *ce; > + > + if (copy_from_user(¶m_sseu, u64_to_user_ptr(args->value), > + sizeof(param_sseu))) { > + ret = -EFAULT; > + break; > + } > + > + engine = intel_engine_lookup_user(to_i915(dev), > + param_sseu.class, > + param_sseu.instance); > + if (!engine) { > + ret = -EINVAL; > + break; > + } > + > + ce = &ctx->__engine[engine->id]; to_intel_context(ctx, engine) ? > + > + param_sseu.slice_mask = ce->sseu.slice_mask; > + param_sseu.subslice_mask = ce->sseu.subslice_mask; > + param_sseu.min_eus_per_subslice = ce->sseu.min_eus_per_subslice; > + param_sseu.max_eus_per_subslice = ce->sseu.max_eus_per_subslice; > + > + if (copy_to_user(u64_to_user_ptr(args->value), ¶m_sseu, > + sizeof(param_sseu))) > + ret = -EFAULT; > + break; > + } > default: > ret = -EINVAL; > break; > @@ -845,7 +980,41 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data, > ctx->sched.priority = priority; > } > break; > + case I915_CONTEXT_PARAM_SSEU: > + { > + struct drm_i915_private *dev_priv = to_i915(dev); > + struct drm_i915_gem_context_param_sseu user_sseu; > + struct intel_engine_cs *engine; > + union intel_sseu ctx_sseu; > + > + if (args->size) { > + ret = -EINVAL; > + break; > + } > + > + if (copy_from_user(&user_sseu, u64_to_user_ptr(args->value), > + sizeof(user_sseu))) { > + ret = -EFAULT; > + break; > + } > + > + engine = intel_engine_lookup_user(dev_priv, > + 
user_sseu.class, > + user_sseu.instance); > + if (!engine) { > + ret = -EINVAL; > + break; > + } > + > + ret = intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu, > + &user_sseu, &ctx_sseu); Should setter have a helper as well? > + if (ret) > + break; > > + ret = i915_gem_context_reconfigure_sseu(ctx, engine, > + ctx_sseu); > + } > + break; > default: > ret = -EINVAL; > break; > diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c > index 320b416482e1..4e0216f33c75 100644 > --- a/drivers/gpu/drm/i915/intel_lrc.c > +++ b/drivers/gpu/drm/i915/intel_lrc.c > @@ -2184,6 +2184,72 @@ static void gen8_emit_breadcrumb_rcs(struct i915_request *request, u32 *cs) > } > static const int gen8_emit_breadcrumb_rcs_sz = 8 + WA_TAIL_DWORDS; > > +u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, > + union intel_sseu ctx_sseu) > +{ > + u32 rpcs = 0; > + > + /* > + * Starting in Gen9, render power gating can leave > + * slice/subslice/EU in a partially enabled state. We > + * must make an explicit request through RPCS for full > + * enablement. 
> + */ > + if (sseu->has_slice_pg) { > + rpcs |= GEN8_RPCS_S_CNT_ENABLE; > + rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT; > + rpcs |= GEN8_RPCS_ENABLE; > + } > + > + if (sseu->has_subslice_pg) { > + rpcs |= GEN8_RPCS_SS_CNT_ENABLE; > + rpcs |= hweight8(ctx_sseu.subslice_mask) << > + GEN8_RPCS_SS_CNT_SHIFT; > + rpcs |= GEN8_RPCS_ENABLE; > + } > + > + if (sseu->has_eu_pg) { > + rpcs |= ctx_sseu.min_eus_per_subslice << > + GEN8_RPCS_EU_MIN_SHIFT; > + rpcs |= ctx_sseu.max_eus_per_subslice << > + GEN8_RPCS_EU_MAX_SHIFT; > + rpcs |= GEN8_RPCS_ENABLE; > + } > + > + return rpcs; > +} > + > +static int gen8_emit_rpcs_config(struct i915_request *rq, > + struct i915_gem_context *ctx, > + union intel_sseu sseu) > +{ > + struct drm_i915_private *dev_priv = rq->i915; > + struct intel_context *ce = to_intel_context(ctx, dev_priv->engine[RCS]); > + u64 offset; > + u32 *cs; > + > + /* Let the deferred state allocation take care of this. */ > + if (!ce->state) > + return 0; > + > + cs = intel_ring_begin(rq, 4); > + if (IS_ERR(cs)) > + return PTR_ERR(cs); > + > + offset = ce->state->node.start + > + LRC_STATE_PN * PAGE_SIZE + > + (CTX_R_PWR_CLK_STATE + 1) * 4; > + > + *cs++ = MI_STORE_DWORD_IMM_GEN4; > + *cs++ = lower_32_bits(offset); > + *cs++ = upper_32_bits(offset); > + *cs++ = gen8_make_rpcs(&INTEL_INFO(dev_priv)->sseu, sseu); > + > + intel_ring_advance(rq, cs); > + > + return 0; > +} > + > static int gen8_init_rcs_context(struct i915_request *rq) > { > int ret; > @@ -2274,6 +2340,8 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine) > engine->emit_breadcrumb = gen8_emit_breadcrumb; > engine->emit_breadcrumb_sz = gen8_emit_breadcrumb_sz; > > + engine->emit_rpcs_config = gen8_emit_rpcs_config; > + > engine->set_default_submission = execlists_set_default_submission; > > if (INTEL_GEN(engine->i915) < 11) { > @@ -2422,41 +2490,6 @@ int logical_xcs_ring_init(struct intel_engine_cs *engine) > return logical_ring_init(engine); > } > > -u32 
gen8_make_rpcs(const struct sseu_dev_info *sseu, > - union intel_sseu ctx_sseu) > -{ > - u32 rpcs = 0; > - > - /* > - * Starting in Gen9, render power gating can leave > - * slice/subslice/EU in a partially enabled state. We > - * must make an explicit request through RPCS for full > - * enablement. > - */ > - if (sseu->has_slice_pg) { > - rpcs |= GEN8_RPCS_S_CNT_ENABLE; > - rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT; > - rpcs |= GEN8_RPCS_ENABLE; > - } > - > - if (sseu->has_subslice_pg) { > - rpcs |= GEN8_RPCS_SS_CNT_ENABLE; > - rpcs |= hweight8(ctx_sseu.subslice_mask) << > - GEN8_RPCS_SS_CNT_SHIFT; > - rpcs |= GEN8_RPCS_ENABLE; > - } > - > - if (sseu->has_eu_pg) { > - rpcs |= ctx_sseu.min_eus_per_subslice << > - GEN8_RPCS_EU_MIN_SHIFT; > - rpcs |= ctx_sseu.max_eus_per_subslice << > - GEN8_RPCS_EU_MAX_SHIFT; > - rpcs |= GEN8_RPCS_ENABLE; > - } > - > - return rpcs; > -} > - > static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs *engine) > { > u32 indirect_ctx_offset; > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c > index 8f19349a6055..44fb3a1cf8f9 100644 > --- a/drivers/gpu/drm/i915/intel_ringbuffer.c > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c > @@ -2026,6 +2026,8 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv, > engine->emit_breadcrumb_sz++; > } > > + engine->emit_rpcs_config = NULL; /* Only supported on Gen8+ */ > + > engine->set_default_submission = i9xx_set_default_submission; > > if (INTEL_GEN(dev_priv) >= 6) > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h > index cc7e73730469..9745f4ab8214 100644 > --- a/drivers/gpu/drm/i915/intel_ringbuffer.h > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h > @@ -463,6 +463,10 @@ struct intel_engine_cs { > void (*emit_breadcrumb)(struct i915_request *rq, u32 *cs); > int emit_breadcrumb_sz; > > + int (*emit_rpcs_config)(struct i915_request *rq, > + struct 
i915_gem_context *ctx, > + union intel_sseu sseu); > + > /* Pass the request to the hardware queue (e.g. directly into > * the legacy ringbuffer or to the end of an execlist). > * > diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h > index 7f5634ce8e88..24b90836ce1d 100644 > --- a/include/uapi/drm/i915_drm.h > +++ b/include/uapi/drm/i915_drm.h > @@ -1456,9 +1456,47 @@ struct drm_i915_gem_context_param { > #define I915_CONTEXT_MAX_USER_PRIORITY 1023 /* inclusive */ > #define I915_CONTEXT_DEFAULT_PRIORITY 0 > #define I915_CONTEXT_MIN_USER_PRIORITY -1023 /* inclusive */ > + /* > + * When using the following param, value should be a pointer to > + * drm_i915_gem_context_param_sseu. > + */ > +#define I915_CONTEXT_PARAM_SSEU 0x7 > __u64 value; > }; > > +struct drm_i915_gem_context_param_sseu { > + /* > + * Engine class & instance to be configured or queried. > + */ > + __u32 class; > + __u32 instance; > + > + /* > + * Mask of slices to enable for the context. Valid values are a subset > + * of the bitmask value returned for I915_PARAM_SLICE_MASK. > + */ > + __u8 slice_mask; > + > + /* > + * Mask of subslices to enable for the context. Valid values are a > + * subset of the bitmask value return by I915_PARAM_SUBSLICE_MASK. > + */ > + __u8 subslice_mask; > + > + /* > + * Minimum/Maximum number of EUs to enable per subslice for the > + * context. min_eus_per_subslice must be inferior or equal to > + * max_eus_per_subslice. > + */ > + __u8 min_eus_per_subslice; > + __u8 max_eus_per_subslice; > + > + /* > + * Unused for now. Must be cleared to zero. > + */ > + __u32 rsvd; > +}; > + > enum drm_i915_oa_format { > I915_OA_FORMAT_A13 = 1, /* HSW only */ > I915_OA_FORMAT_A29, /* HSW only */ > Regards, Tvrtko
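[Editor's note: the validation logic in intel_sseu_from_user_sseu() quoted above is self-contained enough to mirror in userspace as a pre-flight check before issuing the SETPARAM ioctl. The sketch below restates the kernel's checks (mask must be a non-empty subset of the device mask; EU bounds ordered, non-zero and within the device maximum) against illustrative device limits. The struct and function names here are hypothetical, not part of the driver or uAPI.]

```c
#include <assert.h>
#include <stdint.h>

/* Device capabilities as reported by the kernel (illustrative values,
 * e.g. from I915_PARAM_SLICE_MASK / I915_PARAM_SUBSLICE_MASK). */
struct device_sseu {
	uint8_t slice_mask;
	uint8_t subslice_mask;
	uint8_t max_eus_per_subslice;
};

/* Mirrors the checks in intel_sseu_from_user_sseu(): each requested mask
 * must be a non-empty subset of the corresponding device mask, and the
 * EU min/max must be ordered, non-zero (for max) and within the device
 * maximum. Returns 0 on success, -22 (-EINVAL) on rejection. */
static int validate_user_sseu(const struct device_sseu *dev,
			      uint8_t slice_mask, uint8_t subslice_mask,
			      uint8_t min_eus, uint8_t max_eus)
{
	if ((slice_mask & ~dev->slice_mask) != 0 || slice_mask == 0)
		return -22;

	if ((subslice_mask & ~dev->subslice_mask) != 0 || subslice_mask == 0)
		return -22;

	if (min_eus > dev->max_eus_per_subslice)
		return -22;

	if (max_eus > dev->max_eus_per_subslice ||
	    max_eus < min_eus || max_eus == 0)
		return -22;

	return 0;
}
```

Running the checks client-side costs nothing and turns an -EINVAL from the ioctl into an error that can be diagnosed before touching the kernel.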
On 15/05/2018 10:05, Tvrtko Ursulin wrote: > > On 14/05/2018 16:56, Lionel Landwerlin wrote: >> From: Chris Wilson <chris@chris-wilson.co.uk> >> >> We want to allow userspace to reconfigure the subslice configuration for >> its own use case. To do so, we expose a context parameter to allow >> adjustment of the RPCS register stored within the context image (and >> currently not accessible via LRI). If the context is adjusted before >> first use, the adjustment is for "free"; otherwise if the context is >> active we flush the context off the GPU (stalling all users) and forcing >> the GPU to save the context to memory where we can modify it and so >> ensure that the register is reloaded on next execution. >> >> The overhead of managing additional EU subslices can be significant, >> especially in multi-context workloads. Non-GPGPU contexts should >> preferably disable the subslices it is not using, and others should >> fine-tune the number to match their workload. >> >> We expose complete control over the RPCS register, allowing >> configuration of slice/subslice, via masks packed into a u64 for >> simplicity. For example, >> >> struct drm_i915_gem_context_param arg; >> struct drm_i915_gem_context_param_sseu sseu = { .class = 0, >> .instance = 0, }; >> >> memset(&arg, 0, sizeof(arg)); >> arg.ctx_id = ctx; >> arg.param = I915_CONTEXT_PARAM_SSEU; >> arg.value = (uintptr_t) &sseu; >> if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) { >> sseu.packed.subslice_mask = 0; >> >> drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg); >> } >> >> could be used to disable all subslices where supported. 
>> >> v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() >> (Lionel) >> >> v3: Add ability to program this per engine (Chris) >> >> v4: Move most get_sseu() into i915_gem_context.c (Lionel) >> >> v5: Validate sseu configuration against the device's capabilities >> (Lionel) >> >> v6: Change context powergating settings through MI_SDM on kernel >> context (Chris) >> >> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899 >> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> >> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> >> c: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> >> CC: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >> CC: Zhipeng Gong <zhipeng.gong@intel.com> >> CC: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> >> --- >> drivers/gpu/drm/i915/i915_gem_context.c | 169 ++++++++++++++++++++++++ >> drivers/gpu/drm/i915/intel_lrc.c | 103 ++++++++++----- >> drivers/gpu/drm/i915/intel_ringbuffer.c | 2 + >> drivers/gpu/drm/i915/intel_ringbuffer.h | 4 + >> include/uapi/drm/i915_drm.h | 38 ++++++ >> 5 files changed, 281 insertions(+), 35 deletions(-) >> >> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c >> b/drivers/gpu/drm/i915/i915_gem_context.c >> index 01310c99e032..0b72a771c3f3 100644 >> --- a/drivers/gpu/drm/i915/i915_gem_context.c >> +++ b/drivers/gpu/drm/i915/i915_gem_context.c >> @@ -734,6 +734,110 @@ int i915_gem_context_destroy_ioctl(struct >> drm_device *dev, void *data, >> return 0; >> } >> +static int >> +intel_sseu_from_user_sseu(const struct sseu_dev_info *sseu, >> + const struct drm_i915_gem_context_param_sseu *user_sseu, >> + union intel_sseu *ctx_sseu) >> +{ >> + if ((user_sseu->slice_mask & ~sseu->slice_mask) != 0 || >> + user_sseu->slice_mask == 0) >> + return -EINVAL; >> + >> + if ((user_sseu->subslice_mask & ~sseu->subslice_mask[0]) != 0 || >> + user_sseu->subslice_mask == 0) >> + return -EINVAL; >> + >> + if (user_sseu->min_eus_per_subslice > sseu->max_eus_per_subslice) >> + return 
-EINVAL; >> + >> + if (user_sseu->max_eus_per_subslice > sseu->max_eus_per_subslice || >> + user_sseu->max_eus_per_subslice < >> user_sseu->min_eus_per_subslice || >> + user_sseu->max_eus_per_subslice == 0) >> + return -EINVAL; >> + >> + ctx_sseu->slice_mask = user_sseu->slice_mask; >> + ctx_sseu->subslice_mask = user_sseu->subslice_mask; >> + ctx_sseu->min_eus_per_subslice = user_sseu->min_eus_per_subslice; >> + ctx_sseu->max_eus_per_subslice = user_sseu->max_eus_per_subslice; >> + >> + return 0; >> +} >> + >> +static int >> +i915_gem_context_reconfigure_sseu(struct i915_gem_context *ctx, >> + struct intel_engine_cs *engine, >> + union intel_sseu sseu) >> +{ >> + struct drm_i915_private *dev_priv = ctx->i915; >> + struct i915_timeline *timeline; >> + struct i915_request *rq; >> + union intel_sseu actual_sseu; >> + enum intel_engine_id id; >> + int ret; >> + >> + /* >> + * First notify user when this capability is not available so >> that it >> + * can be detected with any valid input. >> + */ >> + if (!engine->emit_rpcs_config) >> + return -ENODEV; >> + >> + if (to_intel_context(ctx, engine)->sseu.value == sseu.value) > > Are there other uses for the value union in the series? Think whether it > could be dropped and memcmp used here for simplicity. > >> + return 0; >> + >> + lockdep_assert_held(&dev_priv->drm.struct_mutex); >> + >> + i915_retire_requests(dev_priv); >> + >> + /* Now use the RCS to actually reconfigure. */ >> + engine = dev_priv->engine[RCS]; >> + >> + rq = i915_request_alloc(engine, dev_priv->kernel_context); >> + if (IS_ERR(rq)) >> + return PTR_ERR(rq); >> + >> + /* >> + * If i915/perf is active, we want a stable powergating >> configuration >> + * on the system. Act as if we recorded the user's request but >> program >> + * the default sseu configuration. When i915/perf is stopped, the >> + * recorded configuration will be programmed. >> + */ >> + actual_sseu = dev_priv->perf.oa.exclusive_stream ? 
> > This feels it should jump out more since there is no obvious connection > between the two. Probably a helper of some sort like > intel_sanitize_sseu? Or intel_apply_sseu? Merge? ... ? > > The comment would then be in the helper which I think would be better > than sprinkle OA knowledge into context params code. Furthermore, that idea to have a global sysfs knob would fit into this helper. echo 1 > $i915_sysfs_root/allow_dynamic_slice_configuration Or something.. As long as it defaults to off I think both camps should be happy. Regards, Tvrtko > >> + intel_sseu_from_device_sseu(&INTEL_INFO(dev_priv)->sseu) : >> + sseu; >> + >> + ret = engine->emit_rpcs_config(rq, ctx, actual_sseu); >> + if (ret) { >> + __i915_request_add(rq, true); >> + return ret; >> + } >> + >> + /* Queue this switch after all other activity */ >> + list_for_each_entry(timeline, &dev_priv->gt.timelines, link) { >> + struct i915_request *prev; >> + >> + prev = last_request_on_engine(timeline, engine); >> + if (prev) >> + i915_sw_fence_await_sw_fence_gfp(&rq->submit, >> + &prev->submit, >> + I915_FENCE_GFP); >> + } >> + >> + __i915_request_add(rq, true); > > This is actually the bit I reading the patch for. So this I think is > much better/safer than previous idling. However one concern, and maybe > not a concern but just something which needs to be explained in the > uAPI, is what it means with regards to the point from which the new > configuration becomes live. > > Could preemption for instance make it not defined enough? Could we, or > should we, also link this request somehow onto the issuing context > timeline so it must be first there? Hm, should we use the user context > instead of the kernel one, and set the highest priority? But would have > to avoid triggering preemption. > >> + /* >> + * Apply the configuration to all engine. Our hardware doesn't >> + * currently support different configurations for each engine. 
>> + */ >> + for_each_engine(engine, dev_priv, id) { >> + struct intel_context *ce = to_intel_context(ctx, engine); >> + >> + ce->sseu.value = sseu.value; >> + } >> + >> + return 0; >> +} >> + >> int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data, >> struct drm_file *file) >> { >> @@ -771,6 +875,37 @@ int i915_gem_context_getparam_ioctl(struct >> drm_device *dev, void *data, >> case I915_CONTEXT_PARAM_PRIORITY: >> args->value = ctx->sched.priority; >> break; >> + case I915_CONTEXT_PARAM_SSEU: { >> + struct drm_i915_gem_context_param_sseu param_sseu; >> + struct intel_engine_cs *engine; >> + struct intel_context *ce; >> + >> + if (copy_from_user(¶m_sseu, u64_to_user_ptr(args->value), >> + sizeof(param_sseu))) { >> + ret = -EFAULT; >> + break; >> + } >> + >> + engine = intel_engine_lookup_user(to_i915(dev), >> + param_sseu.class, >> + param_sseu.instance); >> + if (!engine) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + ce = &ctx->__engine[engine->id]; > > to_intel_context(ctx, engine) ? 
> >> + >> + param_sseu.slice_mask = ce->sseu.slice_mask; >> + param_sseu.subslice_mask = ce->sseu.subslice_mask; >> + param_sseu.min_eus_per_subslice = ce->sseu.min_eus_per_subslice; >> + param_sseu.max_eus_per_subslice = ce->sseu.max_eus_per_subslice; >> + >> + if (copy_to_user(u64_to_user_ptr(args->value), ¶m_sseu, >> + sizeof(param_sseu))) >> + ret = -EFAULT; >> + break; >> + } >> default: >> ret = -EINVAL; >> break; >> @@ -845,7 +980,41 @@ int i915_gem_context_setparam_ioctl(struct >> drm_device *dev, void *data, >> ctx->sched.priority = priority; >> } >> break; >> + case I915_CONTEXT_PARAM_SSEU: >> + { >> + struct drm_i915_private *dev_priv = to_i915(dev); >> + struct drm_i915_gem_context_param_sseu user_sseu; >> + struct intel_engine_cs *engine; >> + union intel_sseu ctx_sseu; >> + >> + if (args->size) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + if (copy_from_user(&user_sseu, u64_to_user_ptr(args->value), >> + sizeof(user_sseu))) { >> + ret = -EFAULT; >> + break; >> + } >> + >> + engine = intel_engine_lookup_user(dev_priv, >> + user_sseu.class, >> + user_sseu.instance); >> + if (!engine) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + ret = intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu, >> + &user_sseu, &ctx_sseu); > > Should setter have a helper as well? 
> >> + if (ret) >> + break; >> + ret = i915_gem_context_reconfigure_sseu(ctx, engine, >> + ctx_sseu); >> + } >> + break; >> default: >> ret = -EINVAL; >> break; >> diff --git a/drivers/gpu/drm/i915/intel_lrc.c >> b/drivers/gpu/drm/i915/intel_lrc.c >> index 320b416482e1..4e0216f33c75 100644 >> --- a/drivers/gpu/drm/i915/intel_lrc.c >> +++ b/drivers/gpu/drm/i915/intel_lrc.c >> @@ -2184,6 +2184,72 @@ static void gen8_emit_breadcrumb_rcs(struct >> i915_request *request, u32 *cs) >> } >> static const int gen8_emit_breadcrumb_rcs_sz = 8 + WA_TAIL_DWORDS; >> +u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, >> + union intel_sseu ctx_sseu) >> +{ >> + u32 rpcs = 0; >> + >> + /* >> + * Starting in Gen9, render power gating can leave >> + * slice/subslice/EU in a partially enabled state. We >> + * must make an explicit request through RPCS for full >> + * enablement. >> + */ >> + if (sseu->has_slice_pg) { >> + rpcs |= GEN8_RPCS_S_CNT_ENABLE; >> + rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT; >> + rpcs |= GEN8_RPCS_ENABLE; >> + } >> + >> + if (sseu->has_subslice_pg) { >> + rpcs |= GEN8_RPCS_SS_CNT_ENABLE; >> + rpcs |= hweight8(ctx_sseu.subslice_mask) << >> + GEN8_RPCS_SS_CNT_SHIFT; >> + rpcs |= GEN8_RPCS_ENABLE; >> + } >> + >> + if (sseu->has_eu_pg) { >> + rpcs |= ctx_sseu.min_eus_per_subslice << >> + GEN8_RPCS_EU_MIN_SHIFT; >> + rpcs |= ctx_sseu.max_eus_per_subslice << >> + GEN8_RPCS_EU_MAX_SHIFT; >> + rpcs |= GEN8_RPCS_ENABLE; >> + } >> + >> + return rpcs; >> +} >> + >> +static int gen8_emit_rpcs_config(struct i915_request *rq, >> + struct i915_gem_context *ctx, >> + union intel_sseu sseu) >> +{ >> + struct drm_i915_private *dev_priv = rq->i915; >> + struct intel_context *ce = to_intel_context(ctx, >> dev_priv->engine[RCS]); >> + u64 offset; >> + u32 *cs; >> + >> + /* Let the deferred state allocation take care of this. 
*/ >> + if (!ce->state) >> + return 0; >> + >> + cs = intel_ring_begin(rq, 4); >> + if (IS_ERR(cs)) >> + return PTR_ERR(cs); >> + >> + offset = ce->state->node.start + >> + LRC_STATE_PN * PAGE_SIZE + >> + (CTX_R_PWR_CLK_STATE + 1) * 4; >> + >> + *cs++ = MI_STORE_DWORD_IMM_GEN4; >> + *cs++ = lower_32_bits(offset); >> + *cs++ = upper_32_bits(offset); >> + *cs++ = gen8_make_rpcs(&INTEL_INFO(dev_priv)->sseu, sseu); >> + >> + intel_ring_advance(rq, cs); >> + >> + return 0; >> +} >> + >> static int gen8_init_rcs_context(struct i915_request *rq) >> { >> int ret; >> @@ -2274,6 +2340,8 @@ logical_ring_default_vfuncs(struct >> intel_engine_cs *engine) >> engine->emit_breadcrumb = gen8_emit_breadcrumb; >> engine->emit_breadcrumb_sz = gen8_emit_breadcrumb_sz; >> + engine->emit_rpcs_config = gen8_emit_rpcs_config; >> + >> engine->set_default_submission = execlists_set_default_submission; >> if (INTEL_GEN(engine->i915) < 11) { >> @@ -2422,41 +2490,6 @@ int logical_xcs_ring_init(struct >> intel_engine_cs *engine) >> return logical_ring_init(engine); >> } >> -u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, >> - union intel_sseu ctx_sseu) >> -{ >> - u32 rpcs = 0; >> - >> - /* >> - * Starting in Gen9, render power gating can leave >> - * slice/subslice/EU in a partially enabled state. We >> - * must make an explicit request through RPCS for full >> - * enablement. 
>> - */ >> - if (sseu->has_slice_pg) { >> - rpcs |= GEN8_RPCS_S_CNT_ENABLE; >> - rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT; >> - rpcs |= GEN8_RPCS_ENABLE; >> - } >> - >> - if (sseu->has_subslice_pg) { >> - rpcs |= GEN8_RPCS_SS_CNT_ENABLE; >> - rpcs |= hweight8(ctx_sseu.subslice_mask) << >> - GEN8_RPCS_SS_CNT_SHIFT; >> - rpcs |= GEN8_RPCS_ENABLE; >> - } >> - >> - if (sseu->has_eu_pg) { >> - rpcs |= ctx_sseu.min_eus_per_subslice << >> - GEN8_RPCS_EU_MIN_SHIFT; >> - rpcs |= ctx_sseu.max_eus_per_subslice << >> - GEN8_RPCS_EU_MAX_SHIFT; >> - rpcs |= GEN8_RPCS_ENABLE; >> - } >> - >> - return rpcs; >> -} >> - >> static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs *engine) >> { >> u32 indirect_ctx_offset; >> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c >> b/drivers/gpu/drm/i915/intel_ringbuffer.c >> index 8f19349a6055..44fb3a1cf8f9 100644 >> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c >> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c >> @@ -2026,6 +2026,8 @@ static void intel_ring_default_vfuncs(struct >> drm_i915_private *dev_priv, >> engine->emit_breadcrumb_sz++; >> } >> + engine->emit_rpcs_config = NULL; /* Only supported on Gen8+ */ >> + >> engine->set_default_submission = i9xx_set_default_submission; >> if (INTEL_GEN(dev_priv) >= 6) >> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h >> b/drivers/gpu/drm/i915/intel_ringbuffer.h >> index cc7e73730469..9745f4ab8214 100644 >> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h >> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h >> @@ -463,6 +463,10 @@ struct intel_engine_cs { >> void (*emit_breadcrumb)(struct i915_request *rq, u32 *cs); >> int emit_breadcrumb_sz; >> + int (*emit_rpcs_config)(struct i915_request *rq, >> + struct i915_gem_context *ctx, >> + union intel_sseu sseu); >> + >> /* Pass the request to the hardware queue (e.g. directly into >> * the legacy ringbuffer or to the end of an execlist). 
>> * >> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h >> index 7f5634ce8e88..24b90836ce1d 100644 >> --- a/include/uapi/drm/i915_drm.h >> +++ b/include/uapi/drm/i915_drm.h >> @@ -1456,9 +1456,47 @@ struct drm_i915_gem_context_param { >> #define I915_CONTEXT_MAX_USER_PRIORITY 1023 /* inclusive */ >> #define I915_CONTEXT_DEFAULT_PRIORITY 0 >> #define I915_CONTEXT_MIN_USER_PRIORITY -1023 /* inclusive */ >> + /* >> + * When using the following param, value should be a pointer to >> + * drm_i915_gem_context_param_sseu. >> + */ >> +#define I915_CONTEXT_PARAM_SSEU 0x7 >> __u64 value; >> }; >> +struct drm_i915_gem_context_param_sseu { >> + /* >> + * Engine class & instance to be configured or queried. >> + */ >> + __u32 class; >> + __u32 instance; >> + >> + /* >> + * Mask of slices to enable for the context. Valid values are a >> subset >> + * of the bitmask value returned for I915_PARAM_SLICE_MASK. >> + */ >> + __u8 slice_mask; >> + >> + /* >> + * Mask of subslices to enable for the context. Valid values are a >> + * subset of the bitmask value return by I915_PARAM_SUBSLICE_MASK. >> + */ >> + __u8 subslice_mask; >> + >> + /* >> + * Minimum/Maximum number of EUs to enable per subslice for the >> + * context. min_eus_per_subslice must be inferior or equal to >> + * max_eus_per_subslice. >> + */ >> + __u8 min_eus_per_subslice; >> + __u8 max_eus_per_subslice; >> + >> + /* >> + * Unused for now. Must be cleared to zero. >> + */ >> + __u32 rsvd; >> +}; >> + >> enum drm_i915_oa_format { >> I915_OA_FORMAT_A13 = 1, /* HSW only */ >> I915_OA_FORMAT_A29, /* HSW only */ >> > > Regards, > > Tvrtko
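[Editor's note: the gen8_make_rpcs() packing quoted in both mails is pure bit arithmetic and can be illustrated standalone. The sketch below mirrors it in userspace C; the GEN8_RPCS_* shift and enable-bit values follow i915_reg.h but are defined locally and should be treated as illustrative rather than authoritative.]

```c
#include <assert.h>
#include <stdint.h>

/* RPCS field layout, after i915_reg.h (Gen8/9); values here are for
 * illustration only. */
#define GEN8_RPCS_ENABLE        (1u << 31)
#define GEN8_RPCS_S_CNT_ENABLE  (1u << 18)
#define GEN8_RPCS_S_CNT_SHIFT   15
#define GEN8_RPCS_SS_CNT_ENABLE (1u << 11)
#define GEN8_RPCS_SS_CNT_SHIFT  8
#define GEN8_RPCS_EU_MAX_SHIFT  4
#define GEN8_RPCS_EU_MIN_SHIFT  0

/* Mirror of gen8_make_rpcs(): note the register encodes *counts* of
 * enabled slices/subslices (the popcount of each mask), not the masks
 * themselves, plus min/max EUs per subslice. Each field is only
 * programmed when the device supports power gating at that level. */
static uint32_t make_rpcs(int has_slice_pg, int has_subslice_pg,
			  int has_eu_pg,
			  uint8_t slice_mask, uint8_t subslice_mask,
			  uint8_t min_eus, uint8_t max_eus)
{
	uint32_t rpcs = 0;

	if (has_slice_pg) {
		rpcs |= GEN8_RPCS_S_CNT_ENABLE;
		rpcs |= (uint32_t)__builtin_popcount(slice_mask) <<
			GEN8_RPCS_S_CNT_SHIFT;
		rpcs |= GEN8_RPCS_ENABLE;
	}

	if (has_subslice_pg) {
		rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
		rpcs |= (uint32_t)__builtin_popcount(subslice_mask) <<
			GEN8_RPCS_SS_CNT_SHIFT;
		rpcs |= GEN8_RPCS_ENABLE;
	}

	if (has_eu_pg) {
		rpcs |= (uint32_t)min_eus << GEN8_RPCS_EU_MIN_SHIFT;
		rpcs |= (uint32_t)max_eus << GEN8_RPCS_EU_MAX_SHIFT;
		rpcs |= GEN8_RPCS_ENABLE;
	}

	return rpcs;
}
```

Because only the popcount of each mask reaches the register, the hardware is told how many slices/subslices to enable, not which ones — which is consistent with the uAPI exposing masks purely as a convenient subset notation.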
On 16/05/18 16:40, Tvrtko Ursulin wrote: > > On 15/05/2018 10:05, Tvrtko Ursulin wrote: >> >> On 14/05/2018 16:56, Lionel Landwerlin wrote: >>> From: Chris Wilson <chris@chris-wilson.co.uk> >>> >>> We want to allow userspace to reconfigure the subslice configuration >>> for >>> its own use case. To do so, we expose a context parameter to allow >>> adjustment of the RPCS register stored within the context image (and >>> currently not accessible via LRI). If the context is adjusted before >>> first use, the adjustment is for "free"; otherwise if the context is >>> active we flush the context off the GPU (stalling all users) and >>> forcing >>> the GPU to save the context to memory where we can modify it and so >>> ensure that the register is reloaded on next execution. >>> >>> The overhead of managing additional EU subslices can be significant, >>> especially in multi-context workloads. Non-GPGPU contexts should >>> preferably disable the subslices it is not using, and others should >>> fine-tune the number to match their workload. >>> >>> We expose complete control over the RPCS register, allowing >>> configuration of slice/subslice, via masks packed into a u64 for >>> simplicity. For example, >>> >>> struct drm_i915_gem_context_param arg; >>> struct drm_i915_gem_context_param_sseu sseu = { .class = 0, >>> .instance = 0, }; >>> >>> memset(&arg, 0, sizeof(arg)); >>> arg.ctx_id = ctx; >>> arg.param = I915_CONTEXT_PARAM_SSEU; >>> arg.value = (uintptr_t) &sseu; >>> if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) { >>> sseu.packed.subslice_mask = 0; >>> >>> drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg); >>> } >>> >>> could be used to disable all subslices where supported. 
>>> >>> v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() >>> (Lionel) >>> >>> v3: Add ability to program this per engine (Chris) >>> >>> v4: Move most get_sseu() into i915_gem_context.c (Lionel) >>> >>> v5: Validate sseu configuration against the device's capabilities >>> (Lionel) >>> >>> v6: Change context powergating settings through MI_SDM on kernel >>> context (Chris) >>> >>> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899 >>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> >>> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> >>> c: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> >>> CC: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>> CC: Zhipeng Gong <zhipeng.gong@intel.com> >>> CC: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> >>> --- >>> drivers/gpu/drm/i915/i915_gem_context.c | 169 >>> ++++++++++++++++++++++++ >>> drivers/gpu/drm/i915/intel_lrc.c | 103 ++++++++++----- >>> drivers/gpu/drm/i915/intel_ringbuffer.c | 2 + >>> drivers/gpu/drm/i915/intel_ringbuffer.h | 4 + >>> include/uapi/drm/i915_drm.h | 38 ++++++ >>> 5 files changed, 281 insertions(+), 35 deletions(-) >>> >>> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c >>> b/drivers/gpu/drm/i915/i915_gem_context.c >>> index 01310c99e032..0b72a771c3f3 100644 >>> --- a/drivers/gpu/drm/i915/i915_gem_context.c >>> +++ b/drivers/gpu/drm/i915/i915_gem_context.c >>> @@ -734,6 +734,110 @@ int i915_gem_context_destroy_ioctl(struct >>> drm_device *dev, void *data, >>> return 0; >>> } >>> +static int >>> +intel_sseu_from_user_sseu(const struct sseu_dev_info *sseu, >>> + const struct drm_i915_gem_context_param_sseu *user_sseu, >>> + union intel_sseu *ctx_sseu) >>> +{ >>> + if ((user_sseu->slice_mask & ~sseu->slice_mask) != 0 || >>> + user_sseu->slice_mask == 0) >>> + return -EINVAL; >>> + >>> + if ((user_sseu->subslice_mask & ~sseu->subslice_mask[0]) != 0 || >>> + user_sseu->subslice_mask == 0) >>> + return -EINVAL; >>> + >>> + if 
(user_sseu->min_eus_per_subslice > sseu->max_eus_per_subslice) >>> + return -EINVAL; >>> + >>> + if (user_sseu->max_eus_per_subslice > >>> sseu->max_eus_per_subslice || >>> + user_sseu->max_eus_per_subslice < >>> user_sseu->min_eus_per_subslice || >>> + user_sseu->max_eus_per_subslice == 0) >>> + return -EINVAL; >>> + >>> + ctx_sseu->slice_mask = user_sseu->slice_mask; >>> + ctx_sseu->subslice_mask = user_sseu->subslice_mask; >>> + ctx_sseu->min_eus_per_subslice = user_sseu->min_eus_per_subslice; >>> + ctx_sseu->max_eus_per_subslice = user_sseu->max_eus_per_subslice; >>> + >>> + return 0; >>> +} >>> + >>> +static int >>> +i915_gem_context_reconfigure_sseu(struct i915_gem_context *ctx, >>> + struct intel_engine_cs *engine, >>> + union intel_sseu sseu) >>> +{ >>> + struct drm_i915_private *dev_priv = ctx->i915; >>> + struct i915_timeline *timeline; >>> + struct i915_request *rq; >>> + union intel_sseu actual_sseu; >>> + enum intel_engine_id id; >>> + int ret; >>> + >>> + /* >>> + * First notify user when this capability is not available so >>> that it >>> + * can be detected with any valid input. >>> + */ >>> + if (!engine->emit_rpcs_config) >>> + return -ENODEV; >>> + >>> + if (to_intel_context(ctx, engine)->sseu.value == sseu.value) >> >> Are there other uses for the value union in the series? Think whether >> it could be dropped and memcmp used here for simplicity. >> >>> + return 0; >>> + >>> + lockdep_assert_held(&dev_priv->drm.struct_mutex); >>> + >>> + i915_retire_requests(dev_priv); >>> + >>> + /* Now use the RCS to actually reconfigure. */ >>> + engine = dev_priv->engine[RCS]; >>> + >>> + rq = i915_request_alloc(engine, dev_priv->kernel_context); >>> + if (IS_ERR(rq)) >>> + return PTR_ERR(rq); >>> + >>> + /* >>> + * If i915/perf is active, we want a stable powergating >>> configuration >>> + * on the system. Act as if we recorded the user's request but >>> program >>> + * the default sseu configuration. 
When i915/perf is stopped, the >>> * recorded configuration will be programmed. >>> */ >>> + actual_sseu = dev_priv->perf.oa.exclusive_stream ? >> >> This feels like it should jump out more since there is no obvious >> connection between the two. Probably a helper of some sort like >> intel_sanitize_sseu? Or intel_apply_sseu? Merge? ... ? >> >> The comment would then be in the helper which I think would be better >> than sprinkling OA knowledge into context params code. > > Furthermore, that idea to have a global sysfs knob would fit into this > helper. > > echo 1 > $i915_sysfs_root/allow_dynamic_slice_configuration > > Or something.. As long as it defaults to off I think both camps should > be happy. > > Regards, > > Tvrtko Thanks Tvrtko, I was thinking the same as a simple first step. I can add that in another revision. Cheers, - Lionel > >> >>> + intel_sseu_from_device_sseu(&INTEL_INFO(dev_priv)->sseu) : >>> + sseu; >>> + >>> + ret = engine->emit_rpcs_config(rq, ctx, actual_sseu); >>> + if (ret) { >>> + __i915_request_add(rq, true); >>> + return ret; >>> + } >>> + >>> + /* Queue this switch after all other activity */ >>> + list_for_each_entry(timeline, &dev_priv->gt.timelines, link) { >>> + struct i915_request *prev; >>> + >>> + prev = last_request_on_engine(timeline, engine); >>> + if (prev) >>> + i915_sw_fence_await_sw_fence_gfp(&rq->submit, >>> + &prev->submit, >>> + I915_FENCE_GFP); >>> + } >>> + >>> + __i915_request_add(rq, true); >> >> This is actually the bit I was reading the patch for. So this I think is >> much better/safer than previous idling. However one concern, and >> maybe not a concern but just something which needs to be explained in >> the uAPI, is what it means with regards to the point from which the >> new configuration becomes live. >> >> Could preemption for instance make it not defined enough? Could we, >> or should we, also link this request somehow onto the issuing context >> timeline so it must be first there?
Hm, should we use the user >> context instead of the kernel one, and set the highest priority? But >> would have to avoid triggering preemption. >> >>> + /* >>> + * Apply the configuration to all engine. Our hardware doesn't >>> + * currently support different configurations for each engine. >>> + */ >>> + for_each_engine(engine, dev_priv, id) { >>> + struct intel_context *ce = to_intel_context(ctx, engine); >>> + >>> + ce->sseu.value = sseu.value; >>> + } >>> + >>> + return 0; >>> +} >>> + >>> int i915_gem_context_getparam_ioctl(struct drm_device *dev, void >>> *data, >>> struct drm_file *file) >>> { >>> @@ -771,6 +875,37 @@ int i915_gem_context_getparam_ioctl(struct >>> drm_device *dev, void *data, >>> case I915_CONTEXT_PARAM_PRIORITY: >>> args->value = ctx->sched.priority; >>> break; >>> + case I915_CONTEXT_PARAM_SSEU: { >>> + struct drm_i915_gem_context_param_sseu param_sseu; >>> + struct intel_engine_cs *engine; >>> + struct intel_context *ce; >>> + >>> + if (copy_from_user(&param_sseu, u64_to_user_ptr(args->value), >>> + sizeof(param_sseu))) { >>> + ret = -EFAULT; >>> + break; >>> + } >>> + >>> + engine = intel_engine_lookup_user(to_i915(dev), >>> + param_sseu.class, >>> + param_sseu.instance); >>> + if (!engine) { >>> + ret = -EINVAL; >>> + break; >>> + } >>> + >>> + ce = &ctx->__engine[engine->id]; >> >> to_intel_context(ctx, engine) ?
>> >>> + >>> + param_sseu.slice_mask = ce->sseu.slice_mask; >>> + param_sseu.subslice_mask = ce->sseu.subslice_mask; >>> + param_sseu.min_eus_per_subslice = >>> ce->sseu.min_eus_per_subslice; >>> + param_sseu.max_eus_per_subslice = >>> ce->sseu.max_eus_per_subslice; >>> + >>> + if (copy_to_user(u64_to_user_ptr(args->value), &param_sseu, >>> + sizeof(param_sseu))) >>> + ret = -EFAULT; >>> + break; >>> + } >>> default: >>> ret = -EINVAL; >>> break; >>> @@ -845,7 +980,41 @@ int i915_gem_context_setparam_ioctl(struct >>> drm_device *dev, void *data, >>> ctx->sched.priority = priority; >>> } >>> break; >>> + case I915_CONTEXT_PARAM_SSEU: >>> + { >>> + struct drm_i915_private *dev_priv = to_i915(dev); >>> + struct drm_i915_gem_context_param_sseu user_sseu; >>> + struct intel_engine_cs *engine; >>> + union intel_sseu ctx_sseu; >>> + >>> + if (args->size) { >>> + ret = -EINVAL; >>> + break; >>> + } >>> + >>> + if (copy_from_user(&user_sseu, >>> u64_to_user_ptr(args->value), >>> + sizeof(user_sseu))) { >>> + ret = -EFAULT; >>> + break; >>> + } >>> + >>> + engine = intel_engine_lookup_user(dev_priv, >>> + user_sseu.class, >>> + user_sseu.instance); >>> + if (!engine) { >>> + ret = -EINVAL; >>> + break; >>> + } >>> + >>> + ret = >>> intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu, >>> + &user_sseu, &ctx_sseu); >> >> Should setter have a helper as well?
>> >>> + if (ret) >>> + break; >>> + ret = i915_gem_context_reconfigure_sseu(ctx, engine, >>> + ctx_sseu); >>> + } >>> + break; >>> default: >>> ret = -EINVAL; >>> break; >>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c >>> b/drivers/gpu/drm/i915/intel_lrc.c >>> index 320b416482e1..4e0216f33c75 100644 >>> --- a/drivers/gpu/drm/i915/intel_lrc.c >>> +++ b/drivers/gpu/drm/i915/intel_lrc.c >>> @@ -2184,6 +2184,72 @@ static void gen8_emit_breadcrumb_rcs(struct >>> i915_request *request, u32 *cs) >>> } >>> static const int gen8_emit_breadcrumb_rcs_sz = 8 + WA_TAIL_DWORDS; >>> +u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, >>> + union intel_sseu ctx_sseu) >>> +{ >>> + u32 rpcs = 0; >>> + >>> + /* >>> + * Starting in Gen9, render power gating can leave >>> + * slice/subslice/EU in a partially enabled state. We >>> + * must make an explicit request through RPCS for full >>> + * enablement. >>> + */ >>> + if (sseu->has_slice_pg) { >>> + rpcs |= GEN8_RPCS_S_CNT_ENABLE; >>> + rpcs |= hweight8(ctx_sseu.slice_mask) << >>> GEN8_RPCS_S_CNT_SHIFT; >>> + rpcs |= GEN8_RPCS_ENABLE; >>> + } >>> + >>> + if (sseu->has_subslice_pg) { >>> + rpcs |= GEN8_RPCS_SS_CNT_ENABLE; >>> + rpcs |= hweight8(ctx_sseu.subslice_mask) << >>> + GEN8_RPCS_SS_CNT_SHIFT; >>> + rpcs |= GEN8_RPCS_ENABLE; >>> + } >>> + >>> + if (sseu->has_eu_pg) { >>> + rpcs |= ctx_sseu.min_eus_per_subslice << >>> + GEN8_RPCS_EU_MIN_SHIFT; >>> + rpcs |= ctx_sseu.max_eus_per_subslice << >>> + GEN8_RPCS_EU_MAX_SHIFT; >>> + rpcs |= GEN8_RPCS_ENABLE; >>> + } >>> + >>> + return rpcs; >>> +} >>> + >>> +static int gen8_emit_rpcs_config(struct i915_request *rq, >>> + struct i915_gem_context *ctx, >>> + union intel_sseu sseu) >>> +{ >>> + struct drm_i915_private *dev_priv = rq->i915; >>> + struct intel_context *ce = to_intel_context(ctx, >>> dev_priv->engine[RCS]); >>> + u64 offset; >>> + u32 *cs; >>> + >>> + /* Let the deferred state allocation take care of this. 
*/ >>> + if (!ce->state) >>> + return 0; >>> + >>> + cs = intel_ring_begin(rq, 4); >>> + if (IS_ERR(cs)) >>> + return PTR_ERR(cs); >>> + >>> + offset = ce->state->node.start + >>> + LRC_STATE_PN * PAGE_SIZE + >>> + (CTX_R_PWR_CLK_STATE + 1) * 4; >>> + >>> + *cs++ = MI_STORE_DWORD_IMM_GEN4; >>> + *cs++ = lower_32_bits(offset); >>> + *cs++ = upper_32_bits(offset); >>> + *cs++ = gen8_make_rpcs(&INTEL_INFO(dev_priv)->sseu, sseu); >>> + >>> + intel_ring_advance(rq, cs); >>> + >>> + return 0; >>> +} >>> + >>> static int gen8_init_rcs_context(struct i915_request *rq) >>> { >>> int ret; >>> @@ -2274,6 +2340,8 @@ logical_ring_default_vfuncs(struct >>> intel_engine_cs *engine) >>> engine->emit_breadcrumb = gen8_emit_breadcrumb; >>> engine->emit_breadcrumb_sz = gen8_emit_breadcrumb_sz; >>> + engine->emit_rpcs_config = gen8_emit_rpcs_config; >>> + >>> engine->set_default_submission = >>> execlists_set_default_submission; >>> if (INTEL_GEN(engine->i915) < 11) { >>> @@ -2422,41 +2490,6 @@ int logical_xcs_ring_init(struct >>> intel_engine_cs *engine) >>> return logical_ring_init(engine); >>> } >>> -u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, >>> - union intel_sseu ctx_sseu) >>> -{ >>> - u32 rpcs = 0; >>> - >>> - /* >>> - * Starting in Gen9, render power gating can leave >>> - * slice/subslice/EU in a partially enabled state. We >>> - * must make an explicit request through RPCS for full >>> - * enablement. 
>>> - */ >>> - if (sseu->has_slice_pg) { >>> - rpcs |= GEN8_RPCS_S_CNT_ENABLE; >>> - rpcs |= hweight8(ctx_sseu.slice_mask) << >>> GEN8_RPCS_S_CNT_SHIFT; >>> - rpcs |= GEN8_RPCS_ENABLE; >>> - } >>> - >>> - if (sseu->has_subslice_pg) { >>> - rpcs |= GEN8_RPCS_SS_CNT_ENABLE; >>> - rpcs |= hweight8(ctx_sseu.subslice_mask) << >>> - GEN8_RPCS_SS_CNT_SHIFT; >>> - rpcs |= GEN8_RPCS_ENABLE; >>> - } >>> - >>> - if (sseu->has_eu_pg) { >>> - rpcs |= ctx_sseu.min_eus_per_subslice << >>> - GEN8_RPCS_EU_MIN_SHIFT; >>> - rpcs |= ctx_sseu.max_eus_per_subslice << >>> - GEN8_RPCS_EU_MAX_SHIFT; >>> - rpcs |= GEN8_RPCS_ENABLE; >>> - } >>> - >>> - return rpcs; >>> -} >>> - >>> static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs >>> *engine) >>> { >>> u32 indirect_ctx_offset; >>> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c >>> b/drivers/gpu/drm/i915/intel_ringbuffer.c >>> index 8f19349a6055..44fb3a1cf8f9 100644 >>> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c >>> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c >>> @@ -2026,6 +2026,8 @@ static void intel_ring_default_vfuncs(struct >>> drm_i915_private *dev_priv, >>> engine->emit_breadcrumb_sz++; >>> } >>> + engine->emit_rpcs_config = NULL; /* Only supported on Gen8+ */ >>> + >>> engine->set_default_submission = i9xx_set_default_submission; >>> if (INTEL_GEN(dev_priv) >= 6) >>> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h >>> b/drivers/gpu/drm/i915/intel_ringbuffer.h >>> index cc7e73730469..9745f4ab8214 100644 >>> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h >>> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h >>> @@ -463,6 +463,10 @@ struct intel_engine_cs { >>> void (*emit_breadcrumb)(struct i915_request *rq, u32 *cs); >>> int emit_breadcrumb_sz; >>> + int (*emit_rpcs_config)(struct i915_request *rq, >>> + struct i915_gem_context *ctx, >>> + union intel_sseu sseu); >>> + >>> /* Pass the request to the hardware queue (e.g. directly into >>> * the legacy ringbuffer or to the end of an execlist). 
>>> * >>> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h >>> index 7f5634ce8e88..24b90836ce1d 100644 >>> --- a/include/uapi/drm/i915_drm.h >>> +++ b/include/uapi/drm/i915_drm.h >>> @@ -1456,9 +1456,47 @@ struct drm_i915_gem_context_param { >>> #define I915_CONTEXT_MAX_USER_PRIORITY 1023 /* inclusive */ >>> #define I915_CONTEXT_DEFAULT_PRIORITY 0 >>> #define I915_CONTEXT_MIN_USER_PRIORITY -1023 /* inclusive */ >>> + /* >>> + * When using the following param, value should be a pointer to >>> + * drm_i915_gem_context_param_sseu. >>> + */ >>> +#define I915_CONTEXT_PARAM_SSEU 0x7 >>> __u64 value; >>> }; >>> +struct drm_i915_gem_context_param_sseu { >>> + /* >>> + * Engine class & instance to be configured or queried. >>> + */ >>> + __u32 class; >>> + __u32 instance; >>> + >>> + /* >>> + * Mask of slices to enable for the context. Valid values are a >>> subset >>> + * of the bitmask value returned for I915_PARAM_SLICE_MASK. >>> + */ >>> + __u8 slice_mask; >>> + >>> + /* >>> + * Mask of subslices to enable for the context. Valid values are a >>> + * subset of the bitmask value return by I915_PARAM_SUBSLICE_MASK. >>> + */ >>> + __u8 subslice_mask; >>> + >>> + /* >>> + * Minimum/Maximum number of EUs to enable per subslice for the >>> + * context. min_eus_per_subslice must be inferior or equal to >>> + * max_eus_per_subslice. >>> + */ >>> + __u8 min_eus_per_subslice; >>> + __u8 max_eus_per_subslice; >>> + >>> + /* >>> + * Unused for now. Must be cleared to zero. >>> + */ >>> + __u32 rsvd; >>> +}; >>> + >>> enum drm_i915_oa_format { >>> I915_OA_FORMAT_A13 = 1, /* HSW only */ >>> I915_OA_FORMAT_A29, /* HSW only */ >>> >> >> Regards, >> >> Tvrtko >
Quoting Tvrtko Ursulin (2018-05-16 16:40:55) > > On 15/05/2018 10:05, Tvrtko Ursulin wrote: > > > > On 14/05/2018 16:56, Lionel Landwerlin wrote: > >> From: Chris Wilson <chris@chris-wilson.co.uk> > >> > >> We want to allow userspace to reconfigure the subslice configuration for > >> its own use case. To do so, we expose a context parameter to allow > >> adjustment of the RPCS register stored within the context image (and > >> currently not accessible via LRI). If the context is adjusted before > >> first use, the adjustment is for "free"; otherwise if the context is > >> active we flush the context off the GPU (stalling all users) and forcing > >> the GPU to save the context to memory where we can modify it and so > >> ensure that the register is reloaded on next execution. > >> > >> The overhead of managing additional EU subslices can be significant, > >> especially in multi-context workloads. Non-GPGPU contexts should > >> preferably disable the subslices it is not using, and others should > >> fine-tune the number to match their workload. > >> > >> We expose complete control over the RPCS register, allowing > >> configuration of slice/subslice, via masks packed into a u64 for > >> simplicity. For example, > >> > >> struct drm_i915_gem_context_param arg; > >> struct drm_i915_gem_context_param_sseu sseu = { .class = 0, > >> .instance = 0, }; > >> > >> memset(&arg, 0, sizeof(arg)); > >> arg.ctx_id = ctx; > >> arg.param = I915_CONTEXT_PARAM_SSEU; > >> arg.value = (uintptr_t) &sseu; > >> if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) { > >> sseu.packed.subslice_mask = 0; > >> > >> drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg); > >> } > >> > >> could be used to disable all subslices where supported. 
> >> > >> v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() > >> (Lionel) > >> > >> v3: Add ability to program this per engine (Chris) > >> > >> v4: Move most get_sseu() into i915_gem_context.c (Lionel) > >> > >> v5: Validate sseu configuration against the device's capabilities > >> (Lionel) > >> > >> v6: Change context powergating settings through MI_SDM on kernel > >> context (Chris) > >> > >> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899 > >> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> > >> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> > >> c: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> > >> CC: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > >> CC: Zhipeng Gong <zhipeng.gong@intel.com> > >> CC: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> > >> --- > >> drivers/gpu/drm/i915/i915_gem_context.c | 169 ++++++++++++++++++++++++ > >> drivers/gpu/drm/i915/intel_lrc.c | 103 ++++++++++----- > >> drivers/gpu/drm/i915/intel_ringbuffer.c | 2 + > >> drivers/gpu/drm/i915/intel_ringbuffer.h | 4 + > >> include/uapi/drm/i915_drm.h | 38 ++++++ > >> 5 files changed, 281 insertions(+), 35 deletions(-) > >> > >> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c > >> b/drivers/gpu/drm/i915/i915_gem_context.c > >> index 01310c99e032..0b72a771c3f3 100644 > >> --- a/drivers/gpu/drm/i915/i915_gem_context.c > >> +++ b/drivers/gpu/drm/i915/i915_gem_context.c > >> @@ -734,6 +734,110 @@ int i915_gem_context_destroy_ioctl(struct > >> drm_device *dev, void *data, > >> return 0; > >> } > >> +static int > >> +intel_sseu_from_user_sseu(const struct sseu_dev_info *sseu, > >> + const struct drm_i915_gem_context_param_sseu *user_sseu, > >> + union intel_sseu *ctx_sseu) > >> +{ > >> + if ((user_sseu->slice_mask & ~sseu->slice_mask) != 0 || > >> + user_sseu->slice_mask == 0) > >> + return -EINVAL; > >> + > >> + if ((user_sseu->subslice_mask & ~sseu->subslice_mask[0]) != 0 || > >> + user_sseu->subslice_mask == 0) > >> + return 
-EINVAL; > >> + > >> + if (user_sseu->min_eus_per_subslice > sseu->max_eus_per_subslice) > >> + return -EINVAL; > >> + > >> + if (user_sseu->max_eus_per_subslice > sseu->max_eus_per_subslice || > >> + user_sseu->max_eus_per_subslice < > >> user_sseu->min_eus_per_subslice || > >> + user_sseu->max_eus_per_subslice == 0) > >> + return -EINVAL; > >> + > >> + ctx_sseu->slice_mask = user_sseu->slice_mask; > >> + ctx_sseu->subslice_mask = user_sseu->subslice_mask; > >> + ctx_sseu->min_eus_per_subslice = user_sseu->min_eus_per_subslice; > >> + ctx_sseu->max_eus_per_subslice = user_sseu->max_eus_per_subslice; > >> + > >> + return 0; > >> +} > >> + > >> +static int > >> +i915_gem_context_reconfigure_sseu(struct i915_gem_context *ctx, > >> + struct intel_engine_cs *engine, > >> + union intel_sseu sseu) > >> +{ > >> + struct drm_i915_private *dev_priv = ctx->i915; > >> + struct i915_timeline *timeline; > >> + struct i915_request *rq; > >> + union intel_sseu actual_sseu; > >> + enum intel_engine_id id; > >> + int ret; > >> + > >> + /* > >> + * First notify user when this capability is not available so > >> that it > >> + * can be detected with any valid input. > >> + */ > >> + if (!engine->emit_rpcs_config) > >> + return -ENODEV; > >> + > >> + if (to_intel_context(ctx, engine)->sseu.value == sseu.value) > > > > Are there other uses for the value union in the series? Think whether it > > could be dropped and memcmp used here for simplicity. > > > >> + return 0; > >> + > >> + lockdep_assert_held(&dev_priv->drm.struct_mutex); > >> + > >> + i915_retire_requests(dev_priv); > >> + > >> + /* Now use the RCS to actually reconfigure. 
*/ > >> + engine = dev_priv->engine[RCS]; > >> + > >> + rq = i915_request_alloc(engine, dev_priv->kernel_context); > >> + if (IS_ERR(rq)) > >> + return PTR_ERR(rq); [snip] > >> + ret = engine->emit_rpcs_config(rq, ctx, actual_sseu); > >> + if (ret) { > >> + __i915_request_add(rq, true); > >> + return ret; > >> + } > >> + > >> + /* Queue this switch after all other activity */ > >> + list_for_each_entry(timeline, &dev_priv->gt.timelines, link) { > >> + struct i915_request *prev; > >> + > >> + prev = last_request_on_engine(timeline, engine); > >> + if (prev) > >> + i915_sw_fence_await_sw_fence_gfp(&rq->submit, > >> + &prev->submit, > >> + I915_FENCE_GFP); > >> + } > >> + > >> + __i915_request_add(rq, true); > > > > This is actually the bit I was reading the patch for. So this I think is > > much better/safer than previous idling. However one concern, and maybe > > not a concern but just something which needs to be explained in the > > uAPI, is what it means with regards to the point from which the new > > configuration becomes live. > > > > Could preemption for instance make it not defined enough? Could we, or > > should we, also link this request somehow onto the issuing context > > timeline so it must be first there? Hm, should we use the user context > > instead of the kernel one, and set the highest priority? But would have > > to avoid triggering preemption. Preemption is a concern with the above code. A barrier does need to be added from the target context back to the SDM request. What I had in mind was preferably doing LRI _from_ each context. Then it is naturally ordered with respect to execution on every context. Failing that, to allow SDM you need to add a nop request on each context with await on the sdm request, or we assign the sdm request a U32_MAX priority. (Using that priority boost would massively mess up the timelines on the system though, so try to avoid it. ;) -Chris
On 15/05/18 10:05, Tvrtko Ursulin wrote: > > On 14/05/2018 16:56, Lionel Landwerlin wrote: >> From: Chris Wilson <chris@chris-wilson.co.uk> >> >> We want to allow userspace to reconfigure the subslice configuration for >> its own use case. To do so, we expose a context parameter to allow >> adjustment of the RPCS register stored within the context image (and >> currently not accessible via LRI). If the context is adjusted before >> first use, the adjustment is for "free"; otherwise if the context is >> active we flush the context off the GPU (stalling all users) and forcing >> the GPU to save the context to memory where we can modify it and so >> ensure that the register is reloaded on next execution. >> >> The overhead of managing additional EU subslices can be significant, >> especially in multi-context workloads. Non-GPGPU contexts should >> preferably disable the subslices it is not using, and others should >> fine-tune the number to match their workload. >> >> We expose complete control over the RPCS register, allowing >> configuration of slice/subslice, via masks packed into a u64 for >> simplicity. For example, >> >> struct drm_i915_gem_context_param arg; >> struct drm_i915_gem_context_param_sseu sseu = { .class = 0, >> .instance = 0, }; >> >> memset(&arg, 0, sizeof(arg)); >> arg.ctx_id = ctx; >> arg.param = I915_CONTEXT_PARAM_SSEU; >> arg.value = (uintptr_t) &sseu; >> if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) { >> sseu.packed.subslice_mask = 0; >> >> drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg); >> } >> >> could be used to disable all subslices where supported. 
>> >> v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() >> (Lionel) >> >> v3: Add ability to program this per engine (Chris) >> >> v4: Move most get_sseu() into i915_gem_context.c (Lionel) >> >> v5: Validate sseu configuration against the device's capabilities >> (Lionel) >> >> v6: Change context powergating settings through MI_SDM on kernel >> context (Chris) >> >> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899 >> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> >> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> >> c: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> >> CC: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >> CC: Zhipeng Gong <zhipeng.gong@intel.com> >> CC: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> >> --- >> drivers/gpu/drm/i915/i915_gem_context.c | 169 ++++++++++++++++++++++++ >> drivers/gpu/drm/i915/intel_lrc.c | 103 ++++++++++----- >> drivers/gpu/drm/i915/intel_ringbuffer.c | 2 + >> drivers/gpu/drm/i915/intel_ringbuffer.h | 4 + >> include/uapi/drm/i915_drm.h | 38 ++++++ >> 5 files changed, 281 insertions(+), 35 deletions(-) >> >> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c >> b/drivers/gpu/drm/i915/i915_gem_context.c >> index 01310c99e032..0b72a771c3f3 100644 >> --- a/drivers/gpu/drm/i915/i915_gem_context.c >> +++ b/drivers/gpu/drm/i915/i915_gem_context.c >> @@ -734,6 +734,110 @@ int i915_gem_context_destroy_ioctl(struct >> drm_device *dev, void *data, >> return 0; >> } >> +static int >> +intel_sseu_from_user_sseu(const struct sseu_dev_info *sseu, >> + const struct drm_i915_gem_context_param_sseu *user_sseu, >> + union intel_sseu *ctx_sseu) >> +{ >> + if ((user_sseu->slice_mask & ~sseu->slice_mask) != 0 || >> + user_sseu->slice_mask == 0) >> + return -EINVAL; >> + >> + if ((user_sseu->subslice_mask & ~sseu->subslice_mask[0]) != 0 || >> + user_sseu->subslice_mask == 0) >> + return -EINVAL; >> + >> + if (user_sseu->min_eus_per_subslice > sseu->max_eus_per_subslice) >> + return 
-EINVAL; >> + >> + if (user_sseu->max_eus_per_subslice > sseu->max_eus_per_subslice || >> + user_sseu->max_eus_per_subslice < >> user_sseu->min_eus_per_subslice || >> + user_sseu->max_eus_per_subslice == 0) >> + return -EINVAL; >> + >> + ctx_sseu->slice_mask = user_sseu->slice_mask; >> + ctx_sseu->subslice_mask = user_sseu->subslice_mask; >> + ctx_sseu->min_eus_per_subslice = user_sseu->min_eus_per_subslice; >> + ctx_sseu->max_eus_per_subslice = user_sseu->max_eus_per_subslice; >> + >> + return 0; >> +} >> + >> +static int >> +i915_gem_context_reconfigure_sseu(struct i915_gem_context *ctx, >> + struct intel_engine_cs *engine, >> + union intel_sseu sseu) >> +{ >> + struct drm_i915_private *dev_priv = ctx->i915; >> + struct i915_timeline *timeline; >> + struct i915_request *rq; >> + union intel_sseu actual_sseu; >> + enum intel_engine_id id; >> + int ret; >> + >> + /* >> + * First notify user when this capability is not available so >> that it >> + * can be detected with any valid input. >> + */ >> + if (!engine->emit_rpcs_config) >> + return -ENODEV; >> + >> + if (to_intel_context(ctx, engine)->sseu.value == sseu.value) > > Are there other uses for the value union in the series? Think whether > it could be dropped and memcmp used here for simplicity. Sure. > >> + return 0; >> + >> + lockdep_assert_held(&dev_priv->drm.struct_mutex); >> + >> + i915_retire_requests(dev_priv); >> + >> + /* Now use the RCS to actually reconfigure. */ >> + engine = dev_priv->engine[RCS]; >> + >> + rq = i915_request_alloc(engine, dev_priv->kernel_context); >> + if (IS_ERR(rq)) >> + return PTR_ERR(rq); >> + >> + /* >> + * If i915/perf is active, we want a stable powergating >> configuration >> + * on the system. Act as if we recorded the user's request but >> program >> + * the default sseu configuration. When i915/perf is stopped, the >> + * recorded configuration will be programmed. >> + */ >> + actual_sseu = dev_priv->perf.oa.exclusive_stream ? 
> > This feels like it should jump out more since there is no obvious > connection between the two. Probably a helper of some sort like > intel_sanitize_sseu? Or intel_apply_sseu? Merge? ... ? > > The comment would then be in the helper which I think would be better > than sprinkling OA knowledge into context params code. Okay. > >> + intel_sseu_from_device_sseu(&INTEL_INFO(dev_priv)->sseu) : >> + sseu; >> + >> + ret = engine->emit_rpcs_config(rq, ctx, actual_sseu); >> + if (ret) { >> + __i915_request_add(rq, true); >> + return ret; >> + } >> + >> + /* Queue this switch after all other activity */ >> + list_for_each_entry(timeline, &dev_priv->gt.timelines, link) { >> + struct i915_request *prev; >> + >> + prev = last_request_on_engine(timeline, engine); >> + if (prev) >> + i915_sw_fence_await_sw_fence_gfp(&rq->submit, >> + &prev->submit, >> + I915_FENCE_GFP); >> + } >> + >> + __i915_request_add(rq, true); > > This is actually the bit I was reading the patch for. So this I think is > much better/safer than previous idling. However one concern, and maybe > not a concern but just something which needs to be explained in the > uAPI, is what it means with regards to the point from which the new > configuration becomes live. To me it seems that all context ioctl effects are ordered. So I would expect any execbuf after this ioctl would have the right configuration. Anything before would have the previous configuration. If that's your understanding too, I can add that in the documentation. > > Could preemption for instance make it not defined enough? Could we, or > should we, also link this request somehow onto the issuing context > timeline so it must be first there? Hm, should we use the user context > instead of the kernel one, and set the highest priority? But would > have to avoid triggering preemption. My understanding is that a context can't MI_LRI itself before Gen11. I haven't tested that on Gen11 though. 
I'm afraid that leaves us with only a max priority request tied to all the previous requests. Or somehow keep a pointer to the last request to change the context powergating configuration and add a dependency on all new request until it's retired? > >> + /* >> + * Apply the configuration to all engine. Our hardware doesn't >> + * currently support different configurations for each engine. >> + */ >> + for_each_engine(engine, dev_priv, id) { >> + struct intel_context *ce = to_intel_context(ctx, engine); >> + >> + ce->sseu.value = sseu.value; >> + } >> + >> + return 0; >> +} >> + >> int i915_gem_context_getparam_ioctl(struct drm_device *dev, void >> *data, >> struct drm_file *file) >> { >> @@ -771,6 +875,37 @@ int i915_gem_context_getparam_ioctl(struct >> drm_device *dev, void *data, >> case I915_CONTEXT_PARAM_PRIORITY: >> args->value = ctx->sched.priority; >> break; >> + case I915_CONTEXT_PARAM_SSEU: { >> + struct drm_i915_gem_context_param_sseu param_sseu; >> + struct intel_engine_cs *engine; >> + struct intel_context *ce; >> + >> + if (copy_from_user(¶m_sseu, u64_to_user_ptr(args->value), >> + sizeof(param_sseu))) { >> + ret = -EFAULT; >> + break; >> + } >> + >> + engine = intel_engine_lookup_user(to_i915(dev), >> + param_sseu.class, >> + param_sseu.instance); >> + if (!engine) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + ce = &ctx->__engine[engine->id]; > > to_intel_context(ctx, engine) ? Done. 
> >> + >> + param_sseu.slice_mask = ce->sseu.slice_mask; >> + param_sseu.subslice_mask = ce->sseu.subslice_mask; >> + param_sseu.min_eus_per_subslice = >> ce->sseu.min_eus_per_subslice; >> + param_sseu.max_eus_per_subslice = >> ce->sseu.max_eus_per_subslice; >> + >> + if (copy_to_user(u64_to_user_ptr(args->value), ¶m_sseu, >> + sizeof(param_sseu))) >> + ret = -EFAULT; >> + break; >> + } >> default: >> ret = -EINVAL; >> break; >> @@ -845,7 +980,41 @@ int i915_gem_context_setparam_ioctl(struct >> drm_device *dev, void *data, >> ctx->sched.priority = priority; >> } >> break; >> + case I915_CONTEXT_PARAM_SSEU: >> + { >> + struct drm_i915_private *dev_priv = to_i915(dev); >> + struct drm_i915_gem_context_param_sseu user_sseu; >> + struct intel_engine_cs *engine; >> + union intel_sseu ctx_sseu; >> + >> + if (args->size) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + if (copy_from_user(&user_sseu, >> u64_to_user_ptr(args->value), >> + sizeof(user_sseu))) { >> + ret = -EFAULT; >> + break; >> + } >> + >> + engine = intel_engine_lookup_user(dev_priv, >> + user_sseu.class, >> + user_sseu.instance); >> + if (!engine) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + ret = >> intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu, >> + &user_sseu, &ctx_sseu); > > Should setter have a helper as well? I'm not sure what you're asking here :( Setter for what? 
> >> + if (ret) >> + break; >> + ret = i915_gem_context_reconfigure_sseu(ctx, engine, >> + ctx_sseu); >> + } >> + break; >> default: >> ret = -EINVAL; >> break; >> diff --git a/drivers/gpu/drm/i915/intel_lrc.c >> b/drivers/gpu/drm/i915/intel_lrc.c >> index 320b416482e1..4e0216f33c75 100644 >> --- a/drivers/gpu/drm/i915/intel_lrc.c >> +++ b/drivers/gpu/drm/i915/intel_lrc.c >> @@ -2184,6 +2184,72 @@ static void gen8_emit_breadcrumb_rcs(struct >> i915_request *request, u32 *cs) >> } >> static const int gen8_emit_breadcrumb_rcs_sz = 8 + WA_TAIL_DWORDS; >> +u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, >> + union intel_sseu ctx_sseu) >> +{ >> + u32 rpcs = 0; >> + >> + /* >> + * Starting in Gen9, render power gating can leave >> + * slice/subslice/EU in a partially enabled state. We >> + * must make an explicit request through RPCS for full >> + * enablement. >> + */ >> + if (sseu->has_slice_pg) { >> + rpcs |= GEN8_RPCS_S_CNT_ENABLE; >> + rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT; >> + rpcs |= GEN8_RPCS_ENABLE; >> + } >> + >> + if (sseu->has_subslice_pg) { >> + rpcs |= GEN8_RPCS_SS_CNT_ENABLE; >> + rpcs |= hweight8(ctx_sseu.subslice_mask) << >> + GEN8_RPCS_SS_CNT_SHIFT; >> + rpcs |= GEN8_RPCS_ENABLE; >> + } >> + >> + if (sseu->has_eu_pg) { >> + rpcs |= ctx_sseu.min_eus_per_subslice << >> + GEN8_RPCS_EU_MIN_SHIFT; >> + rpcs |= ctx_sseu.max_eus_per_subslice << >> + GEN8_RPCS_EU_MAX_SHIFT; >> + rpcs |= GEN8_RPCS_ENABLE; >> + } >> + >> + return rpcs; >> +} >> + >> +static int gen8_emit_rpcs_config(struct i915_request *rq, >> + struct i915_gem_context *ctx, >> + union intel_sseu sseu) >> +{ >> + struct drm_i915_private *dev_priv = rq->i915; >> + struct intel_context *ce = to_intel_context(ctx, >> dev_priv->engine[RCS]); >> + u64 offset; >> + u32 *cs; >> + >> + /* Let the deferred state allocation take care of this. 
*/ >> + if (!ce->state) >> + return 0; >> + >> + cs = intel_ring_begin(rq, 4); >> + if (IS_ERR(cs)) >> + return PTR_ERR(cs); >> + >> + offset = ce->state->node.start + >> + LRC_STATE_PN * PAGE_SIZE + >> + (CTX_R_PWR_CLK_STATE + 1) * 4; >> + >> + *cs++ = MI_STORE_DWORD_IMM_GEN4; >> + *cs++ = lower_32_bits(offset); >> + *cs++ = upper_32_bits(offset); >> + *cs++ = gen8_make_rpcs(&INTEL_INFO(dev_priv)->sseu, sseu); >> + >> + intel_ring_advance(rq, cs); >> + >> + return 0; >> +} >> + >> static int gen8_init_rcs_context(struct i915_request *rq) >> { >> int ret; >> @@ -2274,6 +2340,8 @@ logical_ring_default_vfuncs(struct >> intel_engine_cs *engine) >> engine->emit_breadcrumb = gen8_emit_breadcrumb; >> engine->emit_breadcrumb_sz = gen8_emit_breadcrumb_sz; >> + engine->emit_rpcs_config = gen8_emit_rpcs_config; >> + >> engine->set_default_submission = execlists_set_default_submission; >> if (INTEL_GEN(engine->i915) < 11) { >> @@ -2422,41 +2490,6 @@ int logical_xcs_ring_init(struct >> intel_engine_cs *engine) >> return logical_ring_init(engine); >> } >> -u32 gen8_make_rpcs(const struct sseu_dev_info *sseu, >> - union intel_sseu ctx_sseu) >> -{ >> - u32 rpcs = 0; >> - >> - /* >> - * Starting in Gen9, render power gating can leave >> - * slice/subslice/EU in a partially enabled state. We >> - * must make an explicit request through RPCS for full >> - * enablement. 
>> - */ >> - if (sseu->has_slice_pg) { >> - rpcs |= GEN8_RPCS_S_CNT_ENABLE; >> - rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT; >> - rpcs |= GEN8_RPCS_ENABLE; >> - } >> - >> - if (sseu->has_subslice_pg) { >> - rpcs |= GEN8_RPCS_SS_CNT_ENABLE; >> - rpcs |= hweight8(ctx_sseu.subslice_mask) << >> - GEN8_RPCS_SS_CNT_SHIFT; >> - rpcs |= GEN8_RPCS_ENABLE; >> - } >> - >> - if (sseu->has_eu_pg) { >> - rpcs |= ctx_sseu.min_eus_per_subslice << >> - GEN8_RPCS_EU_MIN_SHIFT; >> - rpcs |= ctx_sseu.max_eus_per_subslice << >> - GEN8_RPCS_EU_MAX_SHIFT; >> - rpcs |= GEN8_RPCS_ENABLE; >> - } >> - >> - return rpcs; >> -} >> - >> static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs >> *engine) >> { >> u32 indirect_ctx_offset; >> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c >> b/drivers/gpu/drm/i915/intel_ringbuffer.c >> index 8f19349a6055..44fb3a1cf8f9 100644 >> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c >> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c >> @@ -2026,6 +2026,8 @@ static void intel_ring_default_vfuncs(struct >> drm_i915_private *dev_priv, >> engine->emit_breadcrumb_sz++; >> } >> + engine->emit_rpcs_config = NULL; /* Only supported on Gen8+ */ >> + >> engine->set_default_submission = i9xx_set_default_submission; >> if (INTEL_GEN(dev_priv) >= 6) >> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h >> b/drivers/gpu/drm/i915/intel_ringbuffer.h >> index cc7e73730469..9745f4ab8214 100644 >> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h >> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h >> @@ -463,6 +463,10 @@ struct intel_engine_cs { >> void (*emit_breadcrumb)(struct i915_request *rq, u32 *cs); >> int emit_breadcrumb_sz; >> + int (*emit_rpcs_config)(struct i915_request *rq, >> + struct i915_gem_context *ctx, >> + union intel_sseu sseu); >> + >> /* Pass the request to the hardware queue (e.g. directly into >> * the legacy ringbuffer or to the end of an execlist). 
>> * >> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h >> index 7f5634ce8e88..24b90836ce1d 100644 >> --- a/include/uapi/drm/i915_drm.h >> +++ b/include/uapi/drm/i915_drm.h >> @@ -1456,9 +1456,47 @@ struct drm_i915_gem_context_param { >> #define I915_CONTEXT_MAX_USER_PRIORITY 1023 /* inclusive */ >> #define I915_CONTEXT_DEFAULT_PRIORITY 0 >> #define I915_CONTEXT_MIN_USER_PRIORITY -1023 /* inclusive */ >> + /* >> + * When using the following param, value should be a pointer to >> + * drm_i915_gem_context_param_sseu. >> + */ >> +#define I915_CONTEXT_PARAM_SSEU 0x7 >> __u64 value; >> }; >> +struct drm_i915_gem_context_param_sseu { >> + /* >> + * Engine class & instance to be configured or queried. >> + */ >> + __u32 class; >> + __u32 instance; >> + >> + /* >> + * Mask of slices to enable for the context. Valid values are a >> subset >> + * of the bitmask value returned for I915_PARAM_SLICE_MASK. >> + */ >> + __u8 slice_mask; >> + >> + /* >> + * Mask of subslices to enable for the context. Valid values are a >> + * subset of the bitmask value return by I915_PARAM_SUBSLICE_MASK. >> + */ >> + __u8 subslice_mask; >> + >> + /* >> + * Minimum/Maximum number of EUs to enable per subslice for the >> + * context. min_eus_per_subslice must be inferior or equal to >> + * max_eus_per_subslice. >> + */ >> + __u8 min_eus_per_subslice; >> + __u8 max_eus_per_subslice; >> + >> + /* >> + * Unused for now. Must be cleared to zero. >> + */ >> + __u32 rsvd; >> +}; >> + >> enum drm_i915_oa_format { >> I915_OA_FORMAT_A13 = 1, /* HSW only */ >> I915_OA_FORMAT_A29, /* HSW only */ >> > > Regards, > > Tvrtko >
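For reference, the validation rules in intel_sseu_from_user_sseu() quoted above can be modeled as a standalone userspace function. This is only a sketch: the `sseu_caps`/`sseu_req` struct names and `validate_sseu` are invented for the demo, and the device's per-slice `subslice_mask[]` array is flattened to a single byte (the patch only checks index 0 anyway).

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical flattened mirror of the kernel's sseu_dev_info. */
struct sseu_caps {
	uint8_t slice_mask;
	uint8_t subslice_mask;
	uint8_t max_eus_per_subslice;
};

/* Hypothetical mirror of drm_i915_gem_context_param_sseu's mask fields. */
struct sseu_req {
	uint8_t slice_mask;
	uint8_t subslice_mask;
	uint8_t min_eus_per_subslice;
	uint8_t max_eus_per_subslice;
};

static int validate_sseu(const struct sseu_caps *caps,
			 const struct sseu_req *req)
{
	/* Requested slices must be non-empty and a subset of the device's. */
	if ((req->slice_mask & ~caps->slice_mask) != 0 ||
	    req->slice_mask == 0)
		return -EINVAL;

	/* Same rule for subslices. */
	if ((req->subslice_mask & ~caps->subslice_mask) != 0 ||
	    req->subslice_mask == 0)
		return -EINVAL;

	/* EU bounds: 0 < min <= max <= device maximum. */
	if (req->min_eus_per_subslice > caps->max_eus_per_subslice)
		return -EINVAL;

	if (req->max_eus_per_subslice > caps->max_eus_per_subslice ||
	    req->max_eus_per_subslice < req->min_eus_per_subslice ||
	    req->max_eus_per_subslice == 0)
		return -EINVAL;

	return 0;
}
```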
On 21/05/2018 14:22, Lionel Landwerlin wrote:
> On 15/05/18 10:05, Tvrtko Ursulin wrote:
>>
>> On 14/05/2018 16:56, Lionel Landwerlin wrote:
>>> From: Chris Wilson <chris@chris-wilson.co.uk>
>>>
>>> We want to allow userspace to reconfigure the subslice configuration for
>>> its own use case.

[snip]

>>> +	/* Queue this switch after all other activity */
>>> +	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {

This can iterate over gt.active_rings for a shorter walk. See current
state of engine_has_idle_kernel_context.

>>> +		struct i915_request *prev;
>>> +
>>> +		prev = last_request_on_engine(timeline, engine);
>>> +		if (prev)
>>> +			i915_sw_fence_await_sw_fence_gfp(&rq->submit,
>>> +							 &prev->submit,
>>> +							 I915_FENCE_GFP);
>>> +	}
>>> +
>>> +	__i915_request_add(rq, true);
>>
>> This is actually the bit I was reading the patch for. So this I think is
>> much better/safer than previous idling. However one concern, and maybe
>> not a concern but just something which needs to be explained in the
>> uAPI, is what it means with regards to the point from which the new
>> configuration becomes live.
>
> To me it seems that all context ioctl effects are ordered.
> So I would expect any execbuf after this ioctl would have the right
> configuration.
> Anything before would have the previous configuration.
>
> If that's your understanding too, I can add that in the documentation.

I guess I am missing what prevents the context which issued the SSEU
re-configuration from running before the re-configuration request
executes. They are on separate timelines and subsequent execbuf on the
issuing context won't depend on it.

>>
>> Could preemption for instance make it not defined enough? Could we, or
>> should we, also link this request somehow onto the issuing context
>> timeline so it must be first there? Hm, should we use the user context
>> instead of the kernel one, and set the highest priority? But would
>> have to avoid triggering preemption.
>
> My understanding is that a context can't MI_LRI itself before Gen11.
> I haven't tested that on Gen11 though.
>
> I'm afraid that leaves us with only a max priority request tied to all
> the previous requests.
> Or somehow keep a pointer to the last request to change the context
> powergating configuration and add a dependency on all new requests until
> it's retired?

I lost the train of dependencies here. :)

[snip]

>>
>>> +		ret = intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu,
>>> +						&user_sseu, &ctx_sseu);
>>
>> Should setter have a helper as well?
>
> I'm not sure what you're asking here :(
> Setter for what?

set_param vs get_param. I only noticed you have a helper to convert
struct sseu from the user to the kernel version in one direction only.
It could be it doesn't make sense to have the other; I haven't looked
that deeply.

Regards,

Tvrtko
On 21/05/18 17:00, Tvrtko Ursulin wrote:
>
> On 21/05/2018 14:22, Lionel Landwerlin wrote:
>> On 15/05/18 10:05, Tvrtko Ursulin wrote:
>>>
>>> On 14/05/2018 16:56, Lionel Landwerlin wrote:
>>>> From: Chris Wilson <chris@chris-wilson.co.uk>
>>>>
>>>> We want to allow userspace to reconfigure the subslice
>>>> configuration for its own use case.

[snip]

>>>> +	/* Queue this switch after all other activity */
>>>> +	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
>
> This can iterate over gt.active_rings for a shorter walk. See current
> state of engine_has_idle_kernel_context.

Thanks.

>>>> [snip]
>>>> +	__i915_request_add(rq, true);
>>>
>>> This is actually the bit I was reading the patch for. So this I think
>>> is much better/safer than previous idling. However one concern, and
>>> maybe not a concern but just something which needs to be explained
>>> in the uAPI, is what it means with regards to the point from which
>>> the new configuration becomes live.
>>
>> To me it seems that all context ioctl effects are ordered.
>> So I would expect any execbuf after this ioctl would have the right
>> configuration.
>> Anything before would have the previous configuration.
>>
>> If that's your understanding too, I can add that in the documentation.
>
> I guess I am missing what prevents the context which issued the SSEU
> re-configuration from running before the re-configuration request
> executes. They are on separate timelines and subsequent execbuf on the
> issuing context won't depend on it.
>
>>>
>>> Could preemption for instance make it not defined enough? Could we,
>>> or should we, also link this request somehow onto the issuing
>>> context timeline so it must be first there? Hm, should we use the
>>> user context instead of the kernel one, and set the highest
>>> priority? But would have to avoid triggering preemption.
>>
>> My understanding is that a context can't MI_LRI itself before Gen11.
>> I haven't tested that on Gen11 though.
>>
>> I'm afraid that leaves us with only a max priority request tied to
>> all the previous requests.
>> Or somehow keep a pointer to the last request to change the context
>> powergating configuration and add a dependency on all new requests
>> until it's retired?
>
> I lost the train of dependencies here. :)

[snip]

Sorry, I was getting confused by reading Chris' reply at the same time.

There is a problem with preemption, and since the documentation tells us
that we can't MI_LRI on the context itself (like Chris suggested, which
would be really simpler), we have to go with an MI_SDM, which this patch
currently lacks synchronization for.

>
>>>
>>>> +		ret = intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu,
>>>> +						&user_sseu, &ctx_sseu);
>>>
>>> Should setter have a helper as well?
>>
>> I'm not sure what you're asking here :(
>> Setter for what?
>
> set_param vs get_param. I only noticed you have a helper to convert
> struct sseu from the user to the kernel version in one direction only.
> It could be it doesn't make sense to have the other; I haven't looked
> that deeply.

Ah yeah, for the getter. It's just that the setter does some error
checking that the getter doesn't need. It makes the switch() { ... } a
bit less verbose.

>
> Regards,
>
> Tvrtko
On 21/05/18 17:00, Tvrtko Ursulin wrote:
>>>>
>>>> +	/* Queue this switch after all other activity */
>>>> +	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
>
> This can iterate over gt.active_rings for a shorter walk. See current
> state of engine_has_idle_kernel_context.
>
For some reason, iterating over gt.active_rings will trigger an invalid
memory access :|

Not sure what's wrong here...
On 22/05/18 17:11, Lionel Landwerlin wrote:
> On 21/05/18 17:00, Tvrtko Ursulin wrote:
>>>>>
>>>>> +	/* Queue this switch after all other activity */
>>>>> +	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
>>
>> This can iterate over gt.active_rings for a shorter walk. See current
>> state of engine_has_idle_kernel_context.
>>
> For some reason, iterating over gt.active_rings will trigger an
> invalid memory access :|
>
> Not sure what's wrong here...
>
Duh! Found it :

    list_for_each_entry(ring, &dev_priv->gt.active_rings, link) {

Instead of :

    list_for_each_entry(ring, &dev_priv->gt.active_rings, active_link) {

- Lionel
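The mix-up above is easy to reproduce in userspace: list_for_each_entry() recovers the containing struct from the member name via container_of(), so a struct embedded in two lists through two list_heads is silently misaddressed when the wrong member is named. Below is a simplified sketch, not the real `<linux/list.h>` macros (the kernel version infers the type from `pos`, this one takes it explicitly), with a made-up `struct ring` standing in for the i915 ring:

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Recover the struct from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified: the kernel macro does not need an explicit type argument. */
#define list_for_each_entry(pos, head, type, member)		\
	for (pos = container_of((head)->next, type, member);	\
	     &pos->member != (head);				\
	     pos = container_of(pos->member.next, type, member))

struct ring {
	int id;
	struct list_head link;        /* membership in one global list */
	struct list_head active_link; /* membership in gt.active_rings */
};

static void list_add_tail(struct list_head *node, struct list_head *head)
{
	node->prev = head->prev;
	node->next = head;
	head->prev->next = node;
	head->prev = node;
}

/*
 * Walk a list linked through active_link. Naming "link" here instead
 * would shift every recovered pointer by the gap between the two
 * members and read garbage -- exactly the crash described above.
 */
static int sum_active_ids(struct list_head *active_head)
{
	struct ring *r;
	int sum = 0;

	list_for_each_entry(r, active_head, struct ring, active_link)
		sum += r->id;
	return sum;
}
```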
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 01310c99e032..0b72a771c3f3 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -734,6 +734,110 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
 	return 0;
 }
 
+static int
+intel_sseu_from_user_sseu(const struct sseu_dev_info *sseu,
+			  const struct drm_i915_gem_context_param_sseu *user_sseu,
+			  union intel_sseu *ctx_sseu)
+{
+	if ((user_sseu->slice_mask & ~sseu->slice_mask) != 0 ||
+	    user_sseu->slice_mask == 0)
+		return -EINVAL;
+
+	if ((user_sseu->subslice_mask & ~sseu->subslice_mask[0]) != 0 ||
+	    user_sseu->subslice_mask == 0)
+		return -EINVAL;
+
+	if (user_sseu->min_eus_per_subslice > sseu->max_eus_per_subslice)
+		return -EINVAL;
+
+	if (user_sseu->max_eus_per_subslice > sseu->max_eus_per_subslice ||
+	    user_sseu->max_eus_per_subslice < user_sseu->min_eus_per_subslice ||
+	    user_sseu->max_eus_per_subslice == 0)
+		return -EINVAL;
+
+	ctx_sseu->slice_mask = user_sseu->slice_mask;
+	ctx_sseu->subslice_mask = user_sseu->subslice_mask;
+	ctx_sseu->min_eus_per_subslice = user_sseu->min_eus_per_subslice;
+	ctx_sseu->max_eus_per_subslice = user_sseu->max_eus_per_subslice;
+
+	return 0;
+}
+
+static int
+i915_gem_context_reconfigure_sseu(struct i915_gem_context *ctx,
+				  struct intel_engine_cs *engine,
+				  union intel_sseu sseu)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	struct i915_timeline *timeline;
+	struct i915_request *rq;
+	union intel_sseu actual_sseu;
+	enum intel_engine_id id;
+	int ret;
+
+	/*
+	 * First notify user when this capability is not available so that it
+	 * can be detected with any valid input.
+	 */
+	if (!engine->emit_rpcs_config)
+		return -ENODEV;
+
+	if (to_intel_context(ctx, engine)->sseu.value == sseu.value)
+		return 0;
+
+	lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+	i915_retire_requests(dev_priv);
+
+	/* Now use the RCS to actually reconfigure. */
+	engine = dev_priv->engine[RCS];
+
+	rq = i915_request_alloc(engine, dev_priv->kernel_context);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	/*
+	 * If i915/perf is active, we want a stable powergating configuration
+	 * on the system. Act as if we recorded the user's request but program
+	 * the default sseu configuration. When i915/perf is stopped, the
+	 * recorded configuration will be programmed.
+	 */
+	actual_sseu = dev_priv->perf.oa.exclusive_stream ?
+		intel_sseu_from_device_sseu(&INTEL_INFO(dev_priv)->sseu) :
+		sseu;
+
+	ret = engine->emit_rpcs_config(rq, ctx, actual_sseu);
+	if (ret) {
+		__i915_request_add(rq, true);
+		return ret;
+	}
+
+	/* Queue this switch after all other activity */
+	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
+		struct i915_request *prev;
+
+		prev = last_request_on_engine(timeline, engine);
+		if (prev)
+			i915_sw_fence_await_sw_fence_gfp(&rq->submit,
+							 &prev->submit,
+							 I915_FENCE_GFP);
+	}
+
+	__i915_request_add(rq, true);
+
+	/*
+	 * Apply the configuration to all engines. Our hardware doesn't
+	 * currently support different configurations for each engine.
+	 */
+	for_each_engine(engine, dev_priv, id) {
+		struct intel_context *ce = to_intel_context(ctx, engine);
+
+		ce->sseu.value = sseu.value;
+	}
+
+	return 0;
+}
+
 int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
 				    struct drm_file *file)
 {
@@ -771,6 +875,37 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
 	case I915_CONTEXT_PARAM_PRIORITY:
 		args->value = ctx->sched.priority;
 		break;
+	case I915_CONTEXT_PARAM_SSEU: {
+		struct drm_i915_gem_context_param_sseu param_sseu;
+		struct intel_engine_cs *engine;
+		struct intel_context *ce;
+
+		if (copy_from_user(&param_sseu, u64_to_user_ptr(args->value),
+				   sizeof(param_sseu))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		engine = intel_engine_lookup_user(to_i915(dev),
+						  param_sseu.class,
+						  param_sseu.instance);
+		if (!engine) {
+			ret = -EINVAL;
+			break;
+		}
+
+		ce = &ctx->__engine[engine->id];
+
+		param_sseu.slice_mask = ce->sseu.slice_mask;
+		param_sseu.subslice_mask = ce->sseu.subslice_mask;
+		param_sseu.min_eus_per_subslice = ce->sseu.min_eus_per_subslice;
+		param_sseu.max_eus_per_subslice = ce->sseu.max_eus_per_subslice;
+
+		if (copy_to_user(u64_to_user_ptr(args->value), &param_sseu,
+				 sizeof(param_sseu)))
+			ret = -EFAULT;
+		break;
+	}
 	default:
 		ret = -EINVAL;
 		break;
@@ -845,7 +980,41 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
 			ctx->sched.priority = priority;
 		}
 		break;
+	case I915_CONTEXT_PARAM_SSEU:
+		{
+			struct drm_i915_private *dev_priv = to_i915(dev);
+			struct drm_i915_gem_context_param_sseu user_sseu;
+			struct intel_engine_cs *engine;
+			union intel_sseu ctx_sseu;
+
+			if (args->size) {
+				ret = -EINVAL;
+				break;
+			}
+
+			if (copy_from_user(&user_sseu, u64_to_user_ptr(args->value),
+					   sizeof(user_sseu))) {
+				ret = -EFAULT;
+				break;
+			}
+
+			engine = intel_engine_lookup_user(dev_priv,
+							  user_sseu.class,
+							  user_sseu.instance);
+			if (!engine) {
+				ret = -EINVAL;
+				break;
+			}
+
+			ret = intel_sseu_from_user_sseu(&INTEL_INFO(dev_priv)->sseu,
+							&user_sseu, &ctx_sseu);
+			if (ret)
+				break;
+			ret = i915_gem_context_reconfigure_sseu(ctx, engine,
+								ctx_sseu);
+		}
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 320b416482e1..4e0216f33c75 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2184,6 +2184,72 @@ static void gen8_emit_breadcrumb_rcs(struct i915_request *request, u32 *cs)
 }
 static const int gen8_emit_breadcrumb_rcs_sz = 8 + WA_TAIL_DWORDS;
 
+u32 gen8_make_rpcs(const struct sseu_dev_info *sseu,
+		   union intel_sseu ctx_sseu)
+{
+	u32 rpcs = 0;
+
+	/*
+	 * Starting in Gen9, render power gating can leave
+	 * slice/subslice/EU in a partially enabled state. We
+	 * must make an explicit request through RPCS for full
+	 * enablement.
+	 */
+	if (sseu->has_slice_pg) {
+		rpcs |= GEN8_RPCS_S_CNT_ENABLE;
+		rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT;
+		rpcs |= GEN8_RPCS_ENABLE;
+	}
+
+	if (sseu->has_subslice_pg) {
+		rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
+		rpcs |= hweight8(ctx_sseu.subslice_mask) <<
+			GEN8_RPCS_SS_CNT_SHIFT;
+		rpcs |= GEN8_RPCS_ENABLE;
+	}
+
+	if (sseu->has_eu_pg) {
+		rpcs |= ctx_sseu.min_eus_per_subslice <<
+			GEN8_RPCS_EU_MIN_SHIFT;
+		rpcs |= ctx_sseu.max_eus_per_subslice <<
+			GEN8_RPCS_EU_MAX_SHIFT;
+		rpcs |= GEN8_RPCS_ENABLE;
+	}
+
+	return rpcs;
+}
+
+static int gen8_emit_rpcs_config(struct i915_request *rq,
+				 struct i915_gem_context *ctx,
+				 union intel_sseu sseu)
+{
+	struct drm_i915_private *dev_priv = rq->i915;
+	struct intel_context *ce = to_intel_context(ctx, dev_priv->engine[RCS]);
+	u64 offset;
+	u32 *cs;
+
+	/* Let the deferred state allocation take care of this. */
+	if (!ce->state)
+		return 0;
+
+	cs = intel_ring_begin(rq, 4);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	offset = ce->state->node.start +
+		 LRC_STATE_PN * PAGE_SIZE +
+		 (CTX_R_PWR_CLK_STATE + 1) * 4;
+
+	*cs++ = MI_STORE_DWORD_IMM_GEN4;
+	*cs++ = lower_32_bits(offset);
+	*cs++ = upper_32_bits(offset);
+	*cs++ = gen8_make_rpcs(&INTEL_INFO(dev_priv)->sseu, sseu);
+
+	intel_ring_advance(rq, cs);
+
+	return 0;
+}
+
 static int gen8_init_rcs_context(struct i915_request *rq)
 {
 	int ret;
@@ -2274,6 +2340,8 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
 	engine->emit_breadcrumb = gen8_emit_breadcrumb;
 	engine->emit_breadcrumb_sz = gen8_emit_breadcrumb_sz;
 
+	engine->emit_rpcs_config = gen8_emit_rpcs_config;
+
 	engine->set_default_submission = execlists_set_default_submission;
 
 	if (INTEL_GEN(engine->i915) < 11) {
@@ -2422,41 +2490,6 @@ int logical_xcs_ring_init(struct intel_engine_cs *engine)
 	return logical_ring_init(engine);
 }
 
-u32 gen8_make_rpcs(const struct sseu_dev_info *sseu,
-		   union intel_sseu ctx_sseu)
-{
-	u32 rpcs = 0;
-
-	/*
-	 * Starting in Gen9, render power gating can leave
-	 * slice/subslice/EU in a partially enabled state. We
-	 * must make an explicit request through RPCS for full
-	 * enablement.
-	 */
-	if (sseu->has_slice_pg) {
-		rpcs |= GEN8_RPCS_S_CNT_ENABLE;
-		rpcs |= hweight8(ctx_sseu.slice_mask) << GEN8_RPCS_S_CNT_SHIFT;
-		rpcs |= GEN8_RPCS_ENABLE;
-	}
-
-	if (sseu->has_subslice_pg) {
-		rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
-		rpcs |= hweight8(ctx_sseu.subslice_mask) <<
-			GEN8_RPCS_SS_CNT_SHIFT;
-		rpcs |= GEN8_RPCS_ENABLE;
-	}
-
-	if (sseu->has_eu_pg) {
-		rpcs |= ctx_sseu.min_eus_per_subslice <<
-			GEN8_RPCS_EU_MIN_SHIFT;
-		rpcs |= ctx_sseu.max_eus_per_subslice <<
-			GEN8_RPCS_EU_MAX_SHIFT;
-		rpcs |= GEN8_RPCS_ENABLE;
-	}
-
-	return rpcs;
-}
-
 static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs *engine)
 {
 	u32 indirect_ctx_offset;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 8f19349a6055..44fb3a1cf8f9 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2026,6 +2026,8 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
 		engine->emit_breadcrumb_sz++;
 	}
 
+	engine->emit_rpcs_config = NULL; /* Only supported on Gen8+ */
+
 	engine->set_default_submission = i9xx_set_default_submission;
 
 	if (INTEL_GEN(dev_priv) >= 6)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index cc7e73730469..9745f4ab8214 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -463,6 +463,10 @@ struct intel_engine_cs {
 	void		(*emit_breadcrumb)(struct i915_request *rq, u32 *cs);
 	int		emit_breadcrumb_sz;
 
+	int		(*emit_rpcs_config)(struct i915_request *rq,
+					    struct i915_gem_context *ctx,
+					    union intel_sseu sseu);
+
 	/* Pass the request to the hardware queue (e.g. directly into
 	 * the legacy ringbuffer or to the end of an execlist).
 	 *
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 7f5634ce8e88..24b90836ce1d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1456,9 +1456,47 @@ struct drm_i915_gem_context_param {
 #define   I915_CONTEXT_MAX_USER_PRIORITY	1023 /* inclusive */
 #define   I915_CONTEXT_DEFAULT_PRIORITY		0
 #define   I915_CONTEXT_MIN_USER_PRIORITY	-1023 /* inclusive */
+	/*
+	 * When using the following param, value should be a pointer to
+	 * drm_i915_gem_context_param_sseu.
+	 */
+#define I915_CONTEXT_PARAM_SSEU		0x7
 	__u64 value;
 };
 
+struct drm_i915_gem_context_param_sseu {
+	/*
+	 * Engine class & instance to be configured or queried.
+	 */
+	__u32 class;
+	__u32 instance;
+
+	/*
+	 * Mask of slices to enable for the context. Valid values are a subset
+	 * of the bitmask value returned for I915_PARAM_SLICE_MASK.
+	 */
+	__u8 slice_mask;
+
+	/*
+	 * Mask of subslices to enable for the context. Valid values are a
+	 * subset of the bitmask value returned by I915_PARAM_SUBSLICE_MASK.
+	 */
+	__u8 subslice_mask;
+
+	/*
+	 * Minimum/Maximum number of EUs to enable per subslice for the
+	 * context. min_eus_per_subslice must be less than or equal to
+	 * max_eus_per_subslice.
+	 */
+	__u8 min_eus_per_subslice;
+	__u8 max_eus_per_subslice;
+
+	/*
+	 * Unused for now. Must be cleared to zero.
+	 */
+	__u32 rsvd;
+};
+
 enum drm_i915_oa_format {
 	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
 	I915_OA_FORMAT_A29,	    /* HSW only */
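The gen8_make_rpcs() packing in the patch can be exercised in userspace. In the sketch below, the RPCS_* shift and enable constants are placeholders standing in for the real GEN8_RPCS_* definitions in i915_reg.h, so only the packing logic, not the exact bit positions, should be read as authoritative:

```c
#include <stdint.h>

/* Placeholder bit layout; the real values live in i915_reg.h. */
#define RPCS_ENABLE        (1u << 31)
#define RPCS_S_CNT_ENABLE  (1u << 18)
#define RPCS_S_CNT_SHIFT   15
#define RPCS_SS_CNT_ENABLE (1u << 11)
#define RPCS_SS_CNT_SHIFT  8
#define RPCS_EU_MAX_SHIFT  4
#define RPCS_EU_MIN_SHIFT  0

/* Population count of a byte, mirroring the kernel's hweight8(). */
static unsigned int hweight8(uint8_t v)
{
	unsigned int n = 0;

	while (v) {
		n += v & 1;
		v >>= 1;
	}
	return n;
}

/*
 * Note the register takes slice/subslice *counts* (hweight of the masks),
 * while the EU fields take the min/max values directly.
 */
static uint32_t make_rpcs(int has_slice_pg, int has_subslice_pg, int has_eu_pg,
			  uint8_t slice_mask, uint8_t subslice_mask,
			  uint8_t min_eus, uint8_t max_eus)
{
	uint32_t rpcs = 0;

	if (has_slice_pg) {
		rpcs |= RPCS_S_CNT_ENABLE;
		rpcs |= hweight8(slice_mask) << RPCS_S_CNT_SHIFT;
		rpcs |= RPCS_ENABLE;
	}

	if (has_subslice_pg) {
		rpcs |= RPCS_SS_CNT_ENABLE;
		rpcs |= hweight8(subslice_mask) << RPCS_SS_CNT_SHIFT;
		rpcs |= RPCS_ENABLE;
	}

	if (has_eu_pg) {
		rpcs |= min_eus << RPCS_EU_MIN_SHIFT;
		rpcs |= max_eus << RPCS_EU_MAX_SHIFT;
		rpcs |= RPCS_ENABLE;
	}

	return rpcs;
}
```

When no powergating capability is set, the function returns 0, matching the kernel behaviour of leaving RPCS untouched on hardware without slice/subslice/EU powergating.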