From patchwork Fri Sep 22 15:10:57 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lionel Landwerlin
X-Patchwork-Id: 9966373
From: Lionel Landwerlin
To: intel-gfx@lists.freedesktop.org
Date: Fri, 22 Sep 2017 16:10:57 +0100
Message-Id: <20170922151057.24782-6-lionel.g.landwerlin@intel.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170922151057.24782-1-lionel.g.landwerlin@intel.com>
References: <20170922151057.24782-1-lionel.g.landwerlin@intel.com>
Subject: [Intel-gfx] [RFC PATCH v2 5/5] drm/i915: Expose RPCS (SSEU) configuration to userspace
List-Id: Intel graphics driver community testing & development

From: Chris Wilson

We want to allow userspace to reconfigure the subslice configuration for
its own use case. To do so, we expose a context parameter to allow
adjustment of the RPCS register stored within the context image (and
currently not accessible via LRI). If the context is adjusted before
first use, the adjustment is for "free"; otherwise, if the context is
active, we flush the context off the GPU (stalling all users), forcing
the GPU to save the context to memory where we can modify it, and so
ensure that the register is reloaded on next execution.

The overhead of managing additional EU subslices can be significant,
especially in multi-context workloads.
Non-GPGPU contexts should preferably disable the subslices they are not
using, and others should fine-tune the number to match their workload.

We expose complete control over the RPCS register, allowing
configuration of slice/subslice, via masks packed into a u64 for
simplicity. For example,

    struct drm_i915_gem_context_param arg;
    struct drm_i915_gem_context_param_sseu sseu = { .flags = I915_EXEC_RENDER };

    memset(&arg, 0, sizeof(arg));
    arg.ctx_id = ctx;
    arg.param = I915_CONTEXT_PARAM_SSEU;
    arg.value = (uintptr_t) &sseu;
    if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) {
        sseu.packed.subslice_mask = 0;

        drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
    }

could be used to disable all subslices where supported.

v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() (Lionel)

v3: Add ability to program this per engine (Chris)

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899
Signed-off-by: Chris Wilson
Signed-off-by: Lionel Landwerlin
Cc: Dmitry Rogozhkin
Cc: Tvrtko Ursulin
Cc: Zhipeng Gong
Cc: Joonas Lahtinen
---
 drivers/gpu/drm/i915/i915_gem_context.c | 49 ++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.c        | 82 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.h        |  5 ++
 include/uapi/drm/i915_drm.h             | 28 +++++++++++
 4 files changed, 164 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index b386574259a1..088b5035c3a6 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -1042,6 +1042,30 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
 	case I915_CONTEXT_PARAM_BANNABLE:
 		args->value = i915_gem_context_is_bannable(ctx);
 		break;
+	case I915_CONTEXT_PARAM_SSEU: {
+		struct drm_i915_gem_context_param_sseu param_sseu;
+		struct intel_engine_cs *engine;
+
+		if (copy_from_user(&param_sseu, u64_to_user_ptr(args->value),
+				   sizeof(param_sseu))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		engine = i915_gem_engine_from_flags(to_i915(dev), file,
+						    param_sseu.flags);
+		if (!engine) {
+			ret = -EINVAL;
+			break;
+		}
+
+		param_sseu.value = intel_lr_context_get_sseu(ctx, engine);
+
+		if (copy_to_user(u64_to_user_ptr(args->value), &param_sseu,
+				 sizeof(param_sseu)))
+			ret = -EFAULT;
+		break;
+	}
 	default:
 		ret = -EINVAL;
 		break;
@@ -1097,6 +1121,31 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
 		else
 			i915_gem_context_clear_bannable(ctx);
 		break;
+	case I915_CONTEXT_PARAM_SSEU:
+		if (args->size)
+			ret = -EINVAL;
+		else if (!i915_modparams.enable_execlists)
+			ret = -ENODEV;
+		else {
+			struct drm_i915_gem_context_param_sseu param_sseu;
+			struct intel_engine_cs *engine;
+
+			if (copy_from_user(&param_sseu, u64_to_user_ptr(args->value),
+					   sizeof(param_sseu))) {
+				ret = -EFAULT;
+				break;
+			}
+
+			engine = i915_gem_engine_from_flags(to_i915(dev), file,
+							    param_sseu.flags);
+			if (!engine) {
+				ret = -EINVAL;
+				break;
+			}
+
+			ret = intel_lr_context_set_sseu(ctx, engine, param_sseu.value);
+		}
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index f5e9caf4913c..bffdc1126838 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2164,3 +2164,85 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
 		}
 	}
 }
+
+int intel_lr_context_set_sseu(struct i915_gem_context *ctx,
+			      struct intel_engine_cs *engine,
+			      u64 value)
+{
+	struct drm_i915_gem_context_param_sseu user = { .value = value };
+	struct drm_i915_private *i915 = ctx->i915;
+	struct sseu_dev_info sseu = ctx->engine[engine->id].sseu;
+	struct intel_context *ce;
+	enum intel_engine_id id;
+	int ret;
+
+	lockdep_assert_held(&i915->drm.struct_mutex);
+
+	sseu.slice_mask = user.packed.slice_mask == 0 ?
+		INTEL_INFO(i915)->sseu.slice_mask :
+		(user.packed.slice_mask & INTEL_INFO(i915)->sseu.slice_mask);
+	sseu.subslice_mask = user.packed.subslice_mask == 0 ?
+		INTEL_INFO(i915)->sseu.subslice_mask :
+		(user.packed.subslice_mask & INTEL_INFO(i915)->sseu.subslice_mask);
+	sseu.min_eu_per_subslice =
+		max(user.packed.min_eu_per_subslice,
+		    INTEL_INFO(i915)->sseu.min_eu_per_subslice);
+	sseu.max_eu_per_subslice =
+		min(user.packed.max_eu_per_subslice,
+		    INTEL_INFO(i915)->sseu.max_eu_per_subslice);
+
+	if (memcmp(&sseu, &ctx->engine[engine->id].sseu, sizeof(sseu)) == 0)
+		return 0;
+
+	/*
+	 * We can only program this on the render ring.
+	 */
+	ce = &ctx->engine[RCS];
+
+	if (ce->pin_count) { /* Assume that the context is active! */
+		ret = i915_gem_switch_to_kernel_context(i915);
+		if (ret)
+			return ret;
+
+		ret = i915_gem_wait_for_idle(i915,
+					     I915_WAIT_INTERRUPTIBLE |
+					     I915_WAIT_LOCKED);
+		if (ret)
+			return ret;
+	}
+
+	if (ce->state) {
+		u32 *regs;
+
+		regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB) +
+			LRC_STATE_PN * PAGE_SIZE;
+		if (IS_ERR(regs))
+			return PTR_ERR(regs);
+
+		regs[CTX_R_PWR_CLK_STATE + 1] = make_rpcs(&sseu);
+		i915_gem_object_unpin_map(ce->state->obj);
+	}
+
+	/*
+	 * Apply the configuration to all engines. Our hardware doesn't
+	 * currently support different configurations for each engine.
+	 */
+	for_each_engine(engine, i915, id)
+		ctx->engine[id].sseu = sseu;
+
+	return 0;
+}
+
+u64 intel_lr_context_get_sseu(struct i915_gem_context *ctx,
+			      struct intel_engine_cs *engine)
+{
+	struct drm_i915_gem_context_param_sseu user;
+	const struct sseu_dev_info *sseu = &ctx->engine[engine->id].sseu;
+
+	user.packed.slice_mask = sseu->slice_mask;
+	user.packed.subslice_mask = sseu->subslice_mask;
+	user.packed.min_eu_per_subslice = sseu->min_eu_per_subslice;
+	user.packed.max_eu_per_subslice = sseu->max_eu_per_subslice;
+
+	return user.value;
+}
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 314adee7127a..a51e67d9fec5 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -106,6 +106,11 @@ intel_lr_context_descriptor(struct i915_gem_context *ctx,
 	return ctx->engine[engine->id].lrc_desc;
 }
 
+int intel_lr_context_set_sseu(struct i915_gem_context *ctx,
+			      struct intel_engine_cs *engine,
+			      u64 value);
+u64 intel_lr_context_get_sseu(struct i915_gem_context *ctx,
+			      struct intel_engine_cs *engine);
 
 /* Execlists */
 int intel_sanitize_enable_execlists(struct drm_i915_private *dev_priv,
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index fe25a01c81f2..ed1cbced8e6e 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1360,9 +1360,37 @@ struct drm_i915_gem_context_param {
 #define I915_CONTEXT_PARAM_GTT_SIZE	0x3
 #define I915_CONTEXT_PARAM_NO_ERROR_CAPTURE	0x4
 #define I915_CONTEXT_PARAM_BANNABLE	0x5
+/*
+ * When using the following param, value should be a pointer to
+ * drm_i915_gem_context_param_sseu.
+ */
+#define I915_CONTEXT_PARAM_SSEU	0x6
 	__u64 value;
 };
 
+struct drm_i915_gem_context_param_sseu {
+	/*
+	 * Engine to be configured or queried. Same value you would use with
+	 * drm_i915_gem_execbuffer2.
+	 */
+	__u64 flags;
+
+	/*
+	 * Setting slice_mask or subslice_mask to 0 will make the context use
+	 * masks reported respectively by I915_PARAM_SLICE_MASK or
+	 * I915_PARAM_SUBSLICE_MASK.
+	 */
+	union {
+		struct {
+			__u8 slice_mask;
+			__u8 subslice_mask;
+			__u8 min_eu_per_subslice;
+			__u8 max_eu_per_subslice;
+		} packed;
+		__u64 value;
+	};
+};
+
 enum drm_i915_oa_format {
 	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
 	I915_OA_FORMAT_A29,	    /* HSW only */
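
A slightly fuller userspace sketch of the interface above, complementing
the example in the commit message: it queries the current render-engine
SSEU configuration for a context and then restricts that context to a
single subslice. This assumes the uapi lands as proposed here (engine
selected through the same flags as drm_i915_gem_execbuffer2, size left
at 0) and uses libdrm's drmIoctl() with the kernel uapi header; the
helper name restrict_to_one_subslice and the pre-existing fd/ctx handles
are illustrative only:

    #include <stdint.h>

    #include <xf86drm.h>
    #include <i915_drm.h>

    /* Illustrative helper: shrink the context's render SSEU configuration
     * to a single subslice. Returns 0 on success, non-zero on ioctl failure.
     */
    static int restrict_to_one_subslice(int fd, uint32_t ctx)
    {
            struct drm_i915_gem_context_param_sseu sseu = {
                    .flags = I915_EXEC_RENDER,      /* engine, as for execbuffer2 */
            };
            struct drm_i915_gem_context_param arg = {
                    .ctx_id = ctx,
                    .param = I915_CONTEXT_PARAM_SSEU,
                    .value = (uintptr_t) &sseu,     /* .size stays 0 */
            };

            /* Read back the masks currently programmed for this context. */
            if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg))
                    return -1;

            /* Keep only the lowest enabled subslice; writing 0 instead would
             * restore the full mask reported by I915_PARAM_SUBSLICE_MASK.
             */
            sseu.packed.subslice_mask &= -sseu.packed.subslice_mask;

            return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
    }

Note that intel_lr_context_set_sseu() silently intersects the requested
masks with the device's, so a careful caller may want to re-read the
parameter afterwards to see which configuration was actually applied.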