From patchwork Fri Jan 17 09:50:22 2020
X-Patchwork-Submitter: "Lisovskiy, Stanislav"
X-Patchwork-Id: 11338657
From: Stanislav Lisovskiy
To: intel-gfx@lists.freedesktop.org
Date: Fri, 17 Jan 2020 11:50:22 +0200
Message-Id: <20200117095026.1113-2-stanislav.lisovskiy@intel.com>
In-Reply-To: <20200117095026.1113-1-stanislav.lisovskiy@intel.com>
References: <20200117095026.1113-1-stanislav.lisovskiy@intel.com>
Subject: [Intel-gfx] [PATCH v13 1/5] drm/i915: Remove skl_ddb_allocation struct

Current consensus is that it is redundant, as we already have the
skl_ddb_values struct out there; moreover, this struct contains only a
single member, which makes it unnecessary.

v2: As dirty_pipes is soon going to be removed from skl_ddb_values,
    move enabled_slices to a safer place in dev_priv.
v3: Changed "enabled_slices" to be "enabled_dbuf_slices_num" (Matt Roper) v4: - Wrapped the line getting number of dbuf slices(Matt Roper) - Removed indeed redundant skl_ddb_values declaration(Matt Roper) Reviewed-by: Matt Roper Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_display.c | 16 +++---- .../drm/i915/display/intel_display_power.c | 8 ++-- .../drm/i915/display/intel_display_types.h | 3 ++ drivers/gpu/drm/i915/i915_drv.h | 7 +-- drivers/gpu/drm/i915/intel_pm.c | 45 +++++++++---------- drivers/gpu/drm/i915/intel_pm.h | 5 +-- 6 files changed, 39 insertions(+), 45 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index aa0ece27186c..eee9e3d9c72d 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -13750,14 +13750,13 @@ static void verify_wm_state(struct intel_crtc *crtc, struct skl_hw_state { struct skl_ddb_entry ddb_y[I915_MAX_PLANES]; struct skl_ddb_entry ddb_uv[I915_MAX_PLANES]; - struct skl_ddb_allocation ddb; struct skl_pipe_wm wm; } *hw; - struct skl_ddb_allocation *sw_ddb; struct skl_pipe_wm *sw_wm; struct skl_ddb_entry *hw_ddb_entry, *sw_ddb_entry; const enum pipe pipe = crtc->pipe; int plane, level, max_level = ilk_wm_max_level(dev_priv); + u8 hw_enabled_slices; if (INTEL_GEN(dev_priv) < 9 || !new_crtc_state->hw.active) return; @@ -13771,14 +13770,13 @@ static void verify_wm_state(struct intel_crtc *crtc, skl_pipe_ddb_get_hw_state(crtc, hw->ddb_y, hw->ddb_uv); - skl_ddb_get_hw_state(dev_priv, &hw->ddb); - sw_ddb = &dev_priv->wm.skl_hw.ddb; + hw_enabled_slices = intel_enabled_dbuf_slices_num(dev_priv); if (INTEL_GEN(dev_priv) >= 11 && - hw->ddb.enabled_slices != sw_ddb->enabled_slices) + hw_enabled_slices != dev_priv->enabled_dbuf_slices_num) DRM_ERROR("mismatch in DBUF Slices (expected %u, got %u)\n", - sw_ddb->enabled_slices, - hw->ddb.enabled_slices); + dev_priv->enabled_dbuf_slices_num, + hw_enabled_slices); /* planes */ for_each_universal_plane(dev_priv, pipe, plane) { @@ -15123,8 +15121,8 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state) struct drm_i915_private *dev_priv = to_i915(state->base.dev); struct intel_crtc *crtc; struct intel_crtc_state *old_crtc_state, *new_crtc_state; - u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices; - u8 required_slices = state->wm_results.ddb.enabled_slices; + u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; + u8 required_slices = state->enabled_dbuf_slices_num; struct skl_ddb_entry entries[I915_MAX_PIPES] = {}; const u8 num_pipes = INTEL_NUM_PIPES(dev_priv); u8 update_pipes = 0, modeset_pipes = 0; diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c index 21561acfa3ac..5e1c601f0f99 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -4406,7 +4406,7 @@ static u8 intel_dbuf_max_slices(struct drm_i915_private *dev_priv) void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, u8 req_slices) { - const u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices; + const u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; bool ret; if (req_slices > intel_dbuf_max_slices(dev_priv)) { @@ -4423,7 +4423,7 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, ret = intel_dbuf_slice_set(dev_priv, DBUF_CTL_S2, false); if (ret) - dev_priv->wm.skl_hw.ddb.enabled_slices = req_slices; + 
dev_priv->enabled_dbuf_slices_num = req_slices; } static void icl_dbuf_enable(struct drm_i915_private *dev_priv) @@ -4442,7 +4442,7 @@ static void icl_dbuf_enable(struct drm_i915_private *dev_priv) * FIXME: for now pretend that we only have 1 slice, see * intel_enabled_dbuf_slices_num(). */ - dev_priv->wm.skl_hw.ddb.enabled_slices = 1; + dev_priv->enabled_dbuf_slices_num = 1; } static void icl_dbuf_disable(struct drm_i915_private *dev_priv) @@ -4461,7 +4461,7 @@ static void icl_dbuf_disable(struct drm_i915_private *dev_priv) * FIXME: for now pretend that the first slice is always * enabled, see intel_enabled_dbuf_slices_num(). */ - dev_priv->wm.skl_hw.ddb.enabled_slices = 1; + dev_priv->enabled_dbuf_slices_num = 1; } static void icl_mbus_init(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index fdd943a17de3..6dca90ebca35 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -517,6 +517,9 @@ struct intel_atomic_state { /* Gen9+ only */ struct skl_ddb_values wm_results; + /* Number of enabled DBuf slices */ + u8 enabled_dbuf_slices_num; + struct i915_sw_fence commit_ready; struct llist_node freed; diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 077af22b8340..9dbf2e57b01b 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -799,13 +799,8 @@ static inline bool skl_ddb_entry_equal(const struct skl_ddb_entry *e1, return false; } -struct skl_ddb_allocation { - u8 enabled_slices; /* GEN11 has configurable 2 slices */ -}; - struct skl_ddb_values { unsigned dirty_pipes; - struct skl_ddb_allocation ddb; }; struct skl_wm_level { @@ -1213,6 +1208,8 @@ struct drm_i915_private { bool distrust_bios_wm; } wm; + u8 enabled_dbuf_slices_num; /* GEN11 has configurable 2 slices */ + struct dram_info { bool valid; bool is_16gb_dimm; diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index bd2d30ecc030..8f6f6472626a 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -3644,16 +3644,16 @@ bool ilk_disable_lp_wm(struct drm_i915_private *dev_priv) return _ilk_disable_lp_wm(dev_priv, WM_DIRTY_LP_ALL); } -static u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv) +u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv) { - u8 enabled_slices; + u8 enabled_dbuf_slices_num; /* Slice 1 will always be enabled */ - enabled_slices = 1; + enabled_dbuf_slices_num = 1; /* Gen prior to GEN11 have only one DBuf slice */ if (INTEL_GEN(dev_priv) < 11) - return enabled_slices; + return enabled_dbuf_slices_num; /* * FIXME: for now we'll only ever use 1 slice; pretend that we have @@ -3661,9 +3661,9 @@ static u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv) * toggling of the second slice. 
*/ if (0 && I915_READ(DBUF_CTL_S2) & DBUF_POWER_STATE) - enabled_slices++; + enabled_dbuf_slices_num++; - return enabled_slices; + return enabled_dbuf_slices_num; } /* @@ -3867,9 +3867,10 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state) static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, const struct intel_crtc_state *crtc_state, const u64 total_data_rate, - const int num_active, - struct skl_ddb_allocation *ddb) + const int num_active) { + struct drm_atomic_state *state = crtc_state->uapi.state; + struct intel_atomic_state *intel_state = to_intel_atomic_state(state); const struct drm_display_mode *adjusted_mode; u64 total_data_bw; u16 ddb_size = INTEL_INFO(dev_priv)->ddb_size; @@ -3891,9 +3892,9 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, * - should validate we stay within the hw bandwidth limits */ if (0 && (num_active > 1 || total_data_bw >= GBps(12))) { - ddb->enabled_slices = 2; + intel_state->enabled_dbuf_slices_num = 2; } else { - ddb->enabled_slices = 1; + intel_state->enabled_dbuf_slices_num = 1; ddb_size /= 2; } @@ -3904,7 +3905,6 @@ static void skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, const struct intel_crtc_state *crtc_state, const u64 total_data_rate, - struct skl_ddb_allocation *ddb, struct skl_ddb_entry *alloc, /* out */ int *num_active /* out */) { @@ -3930,7 +3930,7 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, *num_active = hweight8(dev_priv->active_pipes); ddb_size = intel_get_ddb_size(dev_priv, crtc_state, total_data_rate, - *num_active, ddb); + *num_active); /* * If the state doesn't change the active CRTC's or there is no @@ -4091,10 +4091,10 @@ void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc, intel_display_power_put(dev_priv, power_domain, wakeref); } -void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv, - struct skl_ddb_allocation *ddb /* out */) +void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv) { - ddb->enabled_slices = intel_enabled_dbuf_slices_num(dev_priv); + dev_priv->enabled_dbuf_slices_num = + intel_enabled_dbuf_slices_num(dev_priv); } /* @@ -4271,8 +4271,7 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state, } static int -skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state, - struct skl_ddb_allocation *ddb /* out */) +skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state) { struct drm_atomic_state *state = crtc_state->uapi.state; struct drm_crtc *crtc = crtc_state->uapi.crtc; @@ -4314,7 +4313,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state, skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state, total_data_rate, - ddb, alloc, &num_active); + alloc, &num_active); alloc_size = skl_ddb_entry_size(alloc); if (alloc_size == 0) return 0; @@ -5231,18 +5230,17 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state, static int skl_compute_ddb(struct intel_atomic_state *state) { - const struct drm_i915_private *dev_priv = to_i915(state->base.dev); - struct skl_ddb_allocation *ddb = &state->wm_results.ddb; + struct drm_i915_private *dev_priv = to_i915(state->base.dev); struct intel_crtc_state *old_crtc_state; struct intel_crtc_state *new_crtc_state; struct intel_crtc *crtc; int ret, i; - memcpy(ddb, &dev_priv->wm.skl_hw.ddb, sizeof(*ddb)); + state->enabled_dbuf_slices_num = dev_priv->enabled_dbuf_slices_num; for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { - ret = skl_allocate_pipe_ddb(new_crtc_state, ddb); + ret = 
skl_allocate_pipe_ddb(new_crtc_state); if (ret) return ret; @@ -5719,11 +5717,10 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc, void skl_wm_get_hw_state(struct drm_i915_private *dev_priv) { struct skl_ddb_values *hw = &dev_priv->wm.skl_hw; - struct skl_ddb_allocation *ddb = &dev_priv->wm.skl_hw.ddb; struct intel_crtc *crtc; struct intel_crtc_state *crtc_state; - skl_ddb_get_hw_state(dev_priv, ddb); + skl_ddb_get_hw_state(dev_priv); for_each_intel_crtc(&dev_priv->drm, crtc) { crtc_state = to_intel_crtc_state(crtc->base.state); diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h index c06c6a846d9a..22fd2daf608e 100644 --- a/drivers/gpu/drm/i915/intel_pm.h +++ b/drivers/gpu/drm/i915/intel_pm.h @@ -17,7 +17,6 @@ struct intel_atomic_state; struct intel_crtc; struct intel_crtc_state; struct intel_plane; -struct skl_ddb_allocation; struct skl_ddb_entry; struct skl_pipe_wm; struct skl_wm_level; @@ -33,11 +32,11 @@ void g4x_wm_get_hw_state(struct drm_i915_private *dev_priv); void vlv_wm_get_hw_state(struct drm_i915_private *dev_priv); void ilk_wm_get_hw_state(struct drm_i915_private *dev_priv); void skl_wm_get_hw_state(struct drm_i915_private *dev_priv); +u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv); void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc, struct skl_ddb_entry *ddb_y, struct skl_ddb_entry *ddb_uv); -void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv, - struct skl_ddb_allocation *ddb /* out */); +void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv); void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc, struct skl_pipe_wm *out); void g4x_wm_sanitize(struct drm_i915_private *dev_priv); From patchwork Fri Jan 17 09:50:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Lisovskiy, Stanislav" X-Patchwork-Id: 11338659 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D6115139A for ; Fri, 17 Jan 2020 09:52:45 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id BE9562073A for ; Fri, 17 Jan 2020 09:52:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BE9562073A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 07CE46F4C0; Fri, 17 Jan 2020 09:52:45 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by gabe.freedesktop.org (Postfix) with ESMTPS id A37926F4BF for ; Fri, 17 Jan 2020 09:52:43 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga101.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 17 Jan 2020 01:52:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,329,1574150400"; d="scan'208";a="424412486" Received: from slisovsk-lenovo-ideapad-720s-13ikb.fi.intel.com ([10.237.72.89]) by fmsmga005.fm.intel.com with ESMTP; 17 Jan 2020 01:52:40 -0800 From: Stanislav 
Lisovskiy To: intel-gfx@lists.freedesktop.org Date: Fri, 17 Jan 2020 11:50:23 +0200 Message-Id: <20200117095026.1113-3-stanislav.lisovskiy@intel.com> X-Mailer: git-send-email 2.24.1.485.gad05a3d8e5 In-Reply-To: <20200117095026.1113-1-stanislav.lisovskiy@intel.com> References: <20200117095026.1113-1-stanislav.lisovskiy@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v13 2/5] drm/i915: Move dbuf slice update to proper place X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Current DBuf slices update wasn't done in proper place, especially its "post" part, which should disable those only once vblank had passed and all other changes are committed. v2: Fix to use dev_priv and intel_atomic_state instead of skl_ddb_values (to be nuked in Villes patch) v3: Renamed "enabled_slices" to "enabled_dbuf_slices_num" (Matt Roper) v4: - Rebase against drm-tip. - Move post_update closer to optimize_watermarks, to prevent unneeded noise from underrun reporting (Ville Syrjälä) Reviewed-by: Matt Roper Reviewed-by: Ville Syrjälä Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_display.c | 37 +++++++++++++++----- 1 file changed, 28 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index eee9e3d9c72d..8b06ef29693e 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -15116,13 +15116,33 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc, state); } +static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state) +{ + struct drm_i915_private *dev_priv = to_i915(state->base.dev); + u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; + u8 required_slices = state->enabled_dbuf_slices_num; + + /* If 2nd DBuf slice required, enable it here */ + if (INTEL_GEN(dev_priv) >= 11 && required_slices > hw_enabled_slices) + icl_dbuf_slices_update(dev_priv, required_slices); +} + +static void icl_dbuf_slice_post_update(struct intel_atomic_state *state) +{ + struct drm_i915_private *dev_priv = to_i915(state->base.dev); + u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; + u8 required_slices = state->enabled_dbuf_slices_num; + + /* If 2nd DBuf slice is no more required disable it */ + if (INTEL_GEN(dev_priv) >= 11 && required_slices < hw_enabled_slices) + icl_dbuf_slices_update(dev_priv, required_slices); +} + static void skl_commit_modeset_enables(struct intel_atomic_state *state) { struct drm_i915_private *dev_priv = to_i915(state->base.dev); struct intel_crtc *crtc; struct intel_crtc_state *old_crtc_state, *new_crtc_state; - u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; - u8 required_slices = state->enabled_dbuf_slices_num; struct skl_ddb_entry entries[I915_MAX_PIPES] = {}; const u8 num_pipes = INTEL_NUM_PIPES(dev_priv); u8 update_pipes = 0, modeset_pipes = 0; @@ -15141,10 +15161,6 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state) } } - /* If 2nd DBuf slice required, enable it here */ - if (INTEL_GEN(dev_priv) >= 11 && required_slices > hw_enabled_slices) - icl_dbuf_slices_update(dev_priv, required_slices); - /* * Whenever the number of active pipes changes, we need to make sure we * update the pipes in the right order so that their 
ddb allocations @@ -15246,9 +15262,6 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state) WARN_ON(modeset_pipes); - /* If 2nd DBuf slice is no more required disable it */ - if (INTEL_GEN(dev_priv) >= 11 && required_slices < hw_enabled_slices) - icl_dbuf_slices_update(dev_priv, required_slices); } static void intel_atomic_helper_free_state(struct drm_i915_private *dev_priv) @@ -15378,6 +15391,9 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) if (state->modeset) intel_encoders_update_prepare(state); + /* Enable all new slices, we might need */ + icl_dbuf_slice_pre_update(state); + /* Now enable the clocks, plane, pipe, and connectors that we set up. */ dev_priv->display.commit_modeset_enables(state); @@ -15434,6 +15450,9 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) dev_priv->display.optimize_watermarks(state, crtc); } + /* Disable all slices, we don't need */ + icl_dbuf_slice_post_update(state); + for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { intel_post_plane_update(state, crtc); From patchwork Fri Jan 17 09:50:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Lisovskiy, Stanislav" X-Patchwork-Id: 11338661 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5F344139A for ; Fri, 17 Jan 2020 09:52:47 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 479C52073A for ; Fri, 17 Jan 2020 09:52:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 479C52073A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 061316F4BF; Fri, 17 Jan 2020 09:52:46 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by gabe.freedesktop.org (Postfix) with ESMTPS id CE47D6F4BF for ; Fri, 17 Jan 2020 09:52:44 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga101.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 17 Jan 2020 01:52:44 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,329,1574150400"; d="scan'208";a="424412507" Received: from slisovsk-lenovo-ideapad-720s-13ikb.fi.intel.com ([10.237.72.89]) by fmsmga005.fm.intel.com with ESMTP; 17 Jan 2020 01:52:43 -0800 From: Stanislav Lisovskiy To: intel-gfx@lists.freedesktop.org Date: Fri, 17 Jan 2020 11:50:24 +0200 Message-Id: <20200117095026.1113-4-stanislav.lisovskiy@intel.com> X-Mailer: git-send-email 2.24.1.485.gad05a3d8e5 In-Reply-To: <20200117095026.1113-1-stanislav.lisovskiy@intel.com> References: <20200117095026.1113-1-stanislav.lisovskiy@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v13 3/5] drm/i915: Introduce parameterized DBUF_CTL X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Now start using parameterized DBUF_CTL instead of hardcoded, this would allow shorter access functions when reading or storing entire state. Tried to implement it in a MMIO_PIPE manner, however DBUF_CTL1 address is higher than DBUF_CTL2, which implies that we have to now subtract from base rather than add. v2: - Removed unneeded DBUF_CTL_DIST and DBUF_CTL_ADDR macros. Started to use _PICK construct as suggested by Matt Roper. v3: - DBUF_CTL_S* to _DBUF_CTL_S*, changed X to "slice" in macro(Ville Syrjälä) - _PICK to _PICK_EVEN(Ville Syrjälä) - Introduced enum for enumerating DBUF slices(Ville Syrjälä) Reviewed-by: Ville Syrjälä Reviewed-by: Matt Roper Signed-off-by: Stanislav Lisovskiy --- .../drm/i915/display/intel_display_power.c | 30 +++++++++++-------- .../drm/i915/display/intel_display_power.h | 5 ++++ drivers/gpu/drm/i915/i915_reg.h | 7 +++-- drivers/gpu/drm/i915/intel_pm.c | 2 +- 4 files changed, 28 insertions(+), 16 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c index 5e1c601f0f99..08065720391f 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -4418,9 +4418,11 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, return; if (req_slices > hw_enabled_slices) - ret = intel_dbuf_slice_set(dev_priv, DBUF_CTL_S2, true); + ret = intel_dbuf_slice_set(dev_priv, + DBUF_CTL_S(DBUF_S2), true); else - ret = intel_dbuf_slice_set(dev_priv, DBUF_CTL_S2, false); + ret = intel_dbuf_slice_set(dev_priv, + DBUF_CTL_S(DBUF_S2), false); if (ret) dev_priv->enabled_dbuf_slices_num = req_slices; @@ -4428,14 +4430,16 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, static void icl_dbuf_enable(struct drm_i915_private *dev_priv) { - I915_WRITE(DBUF_CTL_S1, I915_READ(DBUF_CTL_S1) | DBUF_POWER_REQUEST); - I915_WRITE(DBUF_CTL_S2, I915_READ(DBUF_CTL_S2) | DBUF_POWER_REQUEST); - POSTING_READ(DBUF_CTL_S2); + I915_WRITE(DBUF_CTL_S(DBUF_S1), + I915_READ(DBUF_CTL_S(DBUF_S1)) | DBUF_POWER_REQUEST); + I915_WRITE(DBUF_CTL_S(DBUF_S2), + I915_READ(DBUF_CTL_S(DBUF_S2)) | DBUF_POWER_REQUEST); + POSTING_READ(DBUF_CTL_S(DBUF_S2)); udelay(10); - if (!(I915_READ(DBUF_CTL_S1) & DBUF_POWER_STATE) || - !(I915_READ(DBUF_CTL_S2) & DBUF_POWER_STATE)) + if (!(I915_READ(DBUF_CTL_S(DBUF_S1)) & DBUF_POWER_STATE) || + !(I915_READ(DBUF_CTL_S(DBUF_S2)) & DBUF_POWER_STATE)) DRM_ERROR("DBuf power enable timeout\n"); else /* @@ -4447,14 +4451,16 @@ static void icl_dbuf_enable(struct drm_i915_private *dev_priv) static void icl_dbuf_disable(struct drm_i915_private *dev_priv) { - I915_WRITE(DBUF_CTL_S1, I915_READ(DBUF_CTL_S1) & ~DBUF_POWER_REQUEST); - I915_WRITE(DBUF_CTL_S2, I915_READ(DBUF_CTL_S2) & ~DBUF_POWER_REQUEST); - POSTING_READ(DBUF_CTL_S2); + I915_WRITE(DBUF_CTL_S(DBUF_S1), + I915_READ(DBUF_CTL_S(DBUF_S1)) & ~DBUF_POWER_REQUEST); + I915_WRITE(DBUF_CTL_S(DBUF_S2), + I915_READ(DBUF_CTL_S(DBUF_S2)) & ~DBUF_POWER_REQUEST); + POSTING_READ(DBUF_CTL_S(DBUF_S2)); udelay(10); - if ((I915_READ(DBUF_CTL_S1) & DBUF_POWER_STATE) || - (I915_READ(DBUF_CTL_S2) & DBUF_POWER_STATE)) + if ((I915_READ(DBUF_CTL_S(DBUF_S1)) & DBUF_POWER_STATE) || + (I915_READ(DBUF_CTL_S(DBUF_S2)) & DBUF_POWER_STATE)) DRM_ERROR("DBuf power disable timeout!\n"); else /* diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h 
b/drivers/gpu/drm/i915/display/intel_display_power.h index 2608a65af7fa..601e000ffd0d 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.h +++ b/drivers/gpu/drm/i915/display/intel_display_power.h @@ -307,6 +307,11 @@ intel_display_power_put_async(struct drm_i915_private *i915, } #endif +enum dbuf_slice { + DBUF_S1, + DBUF_S2, +}; + #define with_intel_display_power(i915, domain, wf) \ for ((wf) = intel_display_power_get((i915), (domain)); (wf); \ intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0) diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index e5071af4a3b3..b3de69a0ea50 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -7745,9 +7745,10 @@ enum { #define DISP_ARB_CTL2 _MMIO(0x45004) #define DISP_DATA_PARTITION_5_6 (1 << 6) #define DISP_IPC_ENABLE (1 << 3) -#define DBUF_CTL _MMIO(0x45008) -#define DBUF_CTL_S1 _MMIO(0x45008) -#define DBUF_CTL_S2 _MMIO(0x44FE8) +#define DBUF_CTL_ADDR1 0x45008 +#define DBUF_CTL_ADDR2 0x44FE8 +#define DBUF_CTL_S(X) _MMIO(_PICK(X, DBUF_CTL_ADDR1, DBUF_CTL_ADDR2)) +#define DBUF_CTL DBUF_CTL_S(0) #define DBUF_POWER_REQUEST (1 << 31) #define DBUF_POWER_STATE (1 << 30) #define GEN7_MSG_CTL _MMIO(0x45010) diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index 8f6f6472626a..f22509f8ac28 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -3660,7 +3660,7 @@ u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv) * only that 1 slice enabled until we have a proper way for on-demand * toggling of the second slice. */ - if (0 && I915_READ(DBUF_CTL_S2) & DBUF_POWER_STATE) + if (0 && I915_READ(DBUF_CTL_S(DBUF_S2)) & DBUF_POWER_STATE) enabled_dbuf_slices_num++; return enabled_dbuf_slices_num; From patchwork Fri Jan 17 09:50:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Lisovskiy, Stanislav" X-Patchwork-Id: 11338665 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7105013A0 for ; Fri, 17 Jan 2020 09:52:51 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 59AC32073A for ; Fri, 17 Jan 2020 09:52:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 59AC32073A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 070406F4C7; Fri, 17 Jan 2020 09:52:50 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by gabe.freedesktop.org (Postfix) with ESMTPS id B93486F4C4 for ; Fri, 17 Jan 2020 09:52:46 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga101.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 17 Jan 2020 01:52:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,329,1574150400"; d="scan'208";a="424412517" Received: from 
slisovsk-lenovo-ideapad-720s-13ikb.fi.intel.com ([10.237.72.89]) by fmsmga005.fm.intel.com with ESMTP; 17 Jan 2020 01:52:44 -0800
From: Stanislav Lisovskiy
To: intel-gfx@lists.freedesktop.org
Date: Fri, 17 Jan 2020 11:50:25 +0200
Message-Id: <20200117095026.1113-5-stanislav.lisovskiy@intel.com>
In-Reply-To: <20200117095026.1113-1-stanislav.lisovskiy@intel.com>
References: <20200117095026.1113-1-stanislav.lisovskiy@intel.com>
Subject: [Intel-gfx] [PATCH v13 4/5] drm/i915: Manipulate DBuf slices properly

Start manipulating DBuf slices as a mask rather than as a total number,
as the current approach doesn't give us full control over all
combinations of slices, which we might need (e.g. enabling only S2
can't be expressed by setting enabled_slices=1).
Removed wrong code from intel_get_ddb_size as it doesn't match BSpec.
For now, still use just one DBuf slice until the proper algorithm is
implemented.

Other minor code refactoring to prepare for the major DBuf assignment
changes to land:
- As enabled slices now contain a mask, we still need a value that
  reflects how many DBuf slices are supported by the platform; device
  info now contains num_supported_dbuf_slices.
- Removed an unneeded assertion, as we are now manipulating slices in a
  more proper way.

v2: Start using enabled_slices in dev_priv

v3: "enabled_slices" is now "enabled_dbuf_slices_mask", as this now
    sits in dev_priv independently.

v4: - Fixed debug print formatting to hex (Matt Roper)
    - Optimized dbuf slice updates to happen only if the slice union
      differs from the current config (Matt Roper)
    - Made some functions static (Matt Roper)
    - Created a parameterized version of DBUF_CTL to simplify the DBuf
      programming cycle (Matt Roper)
    - Removed an unrequired field from GEN10_FEATURES (Matt Roper)

v5: - Removed the redundant dbuf slice programming helper (Ville Syrjälä)
    - Started to use a parameterized loop for hw readout to get slices
      (Ville Syrjälä)
    - Added back the assertion checking the number of DBuf slices
      enabled after DC5/6 transitions; also added a new assertion, as
      starting from ICL the DMC seems to restore the last DBuf power
      state set rather than powering up all dbuf slices as the
      assertion was previously expecting (Ville Syrjälä)

v6: - Now using an enum for DBuf slices in this patch (Ville Syrjälä)
    - Removed gen11_assert_dbuf_enabled and put gen9_assert_dbuf_enabled
      back, as we really need a single unified assert here; however,
      always enabling slice 1 is currently enforced by BSpec, so we
      have to OR the enabled slices mask with 1 in order to be
      consistent with BSpec. That way we can unify the assertion
      against the actual state from the driver rather than some
      hardcoded value. (concluded with Ville)
    - Removed the parameterized DBUF_CTL version, extracting it to
      another patch (Ville Syrjälä)
    - Removed the unneeded hardcoded return value for older gens from
      intel_enabled_dbuf_slices_mask - this is now handled in a unified
      manner since device info returns the max dbuf slices as 1 for
      older platforms anyway (Matthew Roper)
    - Now using INTEL_INFO(dev_priv)->num_supported_dbuf_slices instead
      of the intel_dbuf_max_slices function as it is trivial
      (Matthew Roper)

Signed-off-by: Stanislav Lisovskiy
Reviewed-by: Ville Syrjälä
Reviewed-by:
Matt Roper --- drivers/gpu/drm/i915/display/intel_display.c | 23 ++--- .../drm/i915/display/intel_display_power.c | 92 ++++++------------- .../drm/i915/display/intel_display_types.h | 2 +- drivers/gpu/drm/i915/i915_drv.h | 2 +- drivers/gpu/drm/i915/i915_pci.c | 5 +- drivers/gpu/drm/i915/intel_device_info.h | 1 + drivers/gpu/drm/i915/intel_pm.c | 53 +++-------- drivers/gpu/drm/i915/intel_pm.h | 2 +- 8 files changed, 63 insertions(+), 117 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 8b06ef29693e..061de161b95b 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -13770,12 +13770,12 @@ static void verify_wm_state(struct intel_crtc *crtc, skl_pipe_ddb_get_hw_state(crtc, hw->ddb_y, hw->ddb_uv); - hw_enabled_slices = intel_enabled_dbuf_slices_num(dev_priv); + hw_enabled_slices = intel_enabled_dbuf_slices_mask(dev_priv); if (INTEL_GEN(dev_priv) >= 11 && - hw_enabled_slices != dev_priv->enabled_dbuf_slices_num) - DRM_ERROR("mismatch in DBUF Slices (expected %u, got %u)\n", - dev_priv->enabled_dbuf_slices_num, + hw_enabled_slices != dev_priv->enabled_dbuf_slices_mask) + DRM_ERROR("mismatch in DBUF Slices (expected 0x%x, got 0x%x)\n", + dev_priv->enabled_dbuf_slices_mask, hw_enabled_slices); /* planes */ @@ -15119,22 +15119,23 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc, static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state) { struct drm_i915_private *dev_priv = to_i915(state->base.dev); - u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; - u8 required_slices = state->enabled_dbuf_slices_num; + u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask; + u8 required_slices = state->enabled_dbuf_slices_mask; + u8 slices_union = hw_enabled_slices | required_slices; /* If 2nd DBuf slice required, enable it here */ - if (INTEL_GEN(dev_priv) >= 11 && required_slices > hw_enabled_slices) - icl_dbuf_slices_update(dev_priv, required_slices); + if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices) + icl_dbuf_slices_update(dev_priv, slices_union); } static void icl_dbuf_slice_post_update(struct intel_atomic_state *state) { struct drm_i915_private *dev_priv = to_i915(state->base.dev); - u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; - u8 required_slices = state->enabled_dbuf_slices_num; + u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask; + u8 required_slices = state->enabled_dbuf_slices_mask; /* If 2nd DBuf slice is no more required disable it */ - if (INTEL_GEN(dev_priv) >= 11 && required_slices < hw_enabled_slices) + if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices) icl_dbuf_slices_update(dev_priv, required_slices); } diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c index 08065720391f..b86842b1ff7a 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -15,6 +15,7 @@ #include "intel_display_types.h" #include "intel_dpio_phy.h" #include "intel_hotplug.h" +#include "intel_pm.h" #include "intel_sideband.h" #include "intel_tc.h" #include "intel_vga.h" @@ -1028,11 +1029,13 @@ static bool gen9_dc_off_power_well_enabled(struct drm_i915_private *dev_priv, static void gen9_assert_dbuf_enabled(struct drm_i915_private *dev_priv) { - u32 tmp = I915_READ(DBUF_CTL); + u8 hw_enabled_dbuf_slices = intel_enabled_dbuf_slices_mask(dev_priv); + u8 
enabled_dbuf_slices = dev_priv->enabled_dbuf_slices_mask; - WARN((tmp & (DBUF_POWER_STATE | DBUF_POWER_REQUEST)) != - (DBUF_POWER_STATE | DBUF_POWER_REQUEST), - "Unexpected DBuf power power state (0x%08x)\n", tmp); + WARN(hw_enabled_dbuf_slices != enabled_dbuf_slices, + "Unexpected DBuf power power state (0x%08x, expected 0x%08x)\n", + hw_enabled_dbuf_slices, + enabled_dbuf_slices); } static void gen9_disable_dc_states(struct drm_i915_private *dev_priv) @@ -4388,86 +4391,49 @@ bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv, static void gen9_dbuf_enable(struct drm_i915_private *dev_priv) { - intel_dbuf_slice_set(dev_priv, DBUF_CTL, true); + icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1)); } static void gen9_dbuf_disable(struct drm_i915_private *dev_priv) { - intel_dbuf_slice_set(dev_priv, DBUF_CTL, false); -} - -static u8 intel_dbuf_max_slices(struct drm_i915_private *dev_priv) -{ - if (INTEL_GEN(dev_priv) < 11) - return 1; - return 2; + icl_dbuf_slices_update(dev_priv, 0); } void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, u8 req_slices) { - const u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_num; - bool ret; + int i; + int max_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices; - if (req_slices > intel_dbuf_max_slices(dev_priv)) { - DRM_ERROR("Invalid number of dbuf slices requested\n"); - return; - } + WARN(hweight8(req_slices) > max_slices, + "Invalid number of dbuf slices requested\n"); - if (req_slices == hw_enabled_slices || req_slices == 0) - return; + DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices); - if (req_slices > hw_enabled_slices) - ret = intel_dbuf_slice_set(dev_priv, - DBUF_CTL_S(DBUF_S2), true); - else - ret = intel_dbuf_slice_set(dev_priv, - DBUF_CTL_S(DBUF_S2), false); + for (i = 0; i < max_slices; i++) { + intel_dbuf_slice_set(dev_priv, + DBUF_CTL_S(i), + (req_slices & BIT(i)) != 0); + } - if (ret) - dev_priv->enabled_dbuf_slices_num = req_slices; + dev_priv->enabled_dbuf_slices_mask = req_slices; } static void icl_dbuf_enable(struct drm_i915_private *dev_priv) { - I915_WRITE(DBUF_CTL_S(DBUF_S1), - I915_READ(DBUF_CTL_S(DBUF_S1)) | DBUF_POWER_REQUEST); - I915_WRITE(DBUF_CTL_S(DBUF_S2), - I915_READ(DBUF_CTL_S(DBUF_S2)) | DBUF_POWER_REQUEST); - POSTING_READ(DBUF_CTL_S(DBUF_S2)); - - udelay(10); - - if (!(I915_READ(DBUF_CTL_S(DBUF_S1)) & DBUF_POWER_STATE) || - !(I915_READ(DBUF_CTL_S(DBUF_S2)) & DBUF_POWER_STATE)) - DRM_ERROR("DBuf power enable timeout\n"); - else - /* - * FIXME: for now pretend that we only have 1 slice, see - * intel_enabled_dbuf_slices_num(). - */ - dev_priv->enabled_dbuf_slices_num = 1; + /* + * Just power up 1 slice, we will + * figure out later which slices we have and what we need. + */ + icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1)); } static void icl_dbuf_disable(struct drm_i915_private *dev_priv) { - I915_WRITE(DBUF_CTL_S(DBUF_S1), - I915_READ(DBUF_CTL_S(DBUF_S1)) & ~DBUF_POWER_REQUEST); - I915_WRITE(DBUF_CTL_S(DBUF_S2), - I915_READ(DBUF_CTL_S(DBUF_S2)) & ~DBUF_POWER_REQUEST); - POSTING_READ(DBUF_CTL_S(DBUF_S2)); - - udelay(10); - - if ((I915_READ(DBUF_CTL_S(DBUF_S1)) & DBUF_POWER_STATE) || - (I915_READ(DBUF_CTL_S(DBUF_S2)) & DBUF_POWER_STATE)) - DRM_ERROR("DBuf power disable timeout!\n"); - else - /* - * FIXME: for now pretend that the first slice is always - * enabled, see intel_enabled_dbuf_slices_num(). - */ - dev_priv->enabled_dbuf_slices_num = 1; + /* + * As of BSpec Slice 1 is always enabled. 
+ */ + icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1)); } static void icl_mbus_init(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index 6dca90ebca35..09f14e9e7863 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -518,7 +518,7 @@ struct intel_atomic_state { struct skl_ddb_values wm_results; /* Number of enabled DBuf slices */ - u8 enabled_dbuf_slices_num; + u8 enabled_dbuf_slices_mask; struct i915_sw_fence commit_ready; diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 9dbf2e57b01b..5b895ad38fbc 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1208,7 +1208,7 @@ struct drm_i915_private { bool distrust_bios_wm; } wm; - u8 enabled_dbuf_slices_num; /* GEN11 has configurable 2 slices */ + u8 enabled_dbuf_slices_mask; /* GEN11 has configurable 2 slices */ struct dram_info { bool valid; diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index 83f01401b8b5..6c6daa44c439 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -615,7 +615,8 @@ static const struct intel_device_info chv_info = { .has_gt_uc = 1, \ .display.has_hdcp = 1, \ .display.has_ipc = 1, \ - .ddb_size = 896 + .ddb_size = 896, \ + .num_supported_dbuf_slices = 1 #define SKL_PLATFORM \ GEN9_FEATURES, \ @@ -650,6 +651,7 @@ static const struct intel_device_info skl_gt4_info = { #define GEN9_LP_FEATURES \ GEN(9), \ .is_lp = 1, \ + .num_supported_dbuf_slices = 1, \ .display.has_hotplug = 1, \ .engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0), \ .pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C), \ @@ -774,6 +776,7 @@ static const struct intel_device_info cnl_info = { }, \ GEN(11), \ .ddb_size = 2048, \ + .num_supported_dbuf_slices = 2, \ .has_logical_ring_elsq = 1, \ .color = { .degamma_lut_size = 33, .gamma_lut_size = 262145 } diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h index 2725cb7fc169..7d4d122d2182 100644 --- a/drivers/gpu/drm/i915/intel_device_info.h +++ b/drivers/gpu/drm/i915/intel_device_info.h @@ -180,6 +180,7 @@ struct intel_device_info { } display; u16 ddb_size; /* in blocks */ + u8 num_supported_dbuf_slices; /* number of DBuf slices */ /* Register offsets for the various display pipes and transcoders */ int pipe_offsets[I915_MAX_TRANSCODERS]; diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index f22509f8ac28..b4b291d4244b 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -3644,26 +3644,18 @@ bool ilk_disable_lp_wm(struct drm_i915_private *dev_priv) return _ilk_disable_lp_wm(dev_priv, WM_DIRTY_LP_ALL); } -u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv) +u8 intel_enabled_dbuf_slices_mask(struct drm_i915_private *dev_priv) { - u8 enabled_dbuf_slices_num; - - /* Slice 1 will always be enabled */ - enabled_dbuf_slices_num = 1; - - /* Gen prior to GEN11 have only one DBuf slice */ - if (INTEL_GEN(dev_priv) < 11) - return enabled_dbuf_slices_num; + int i; + int max_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices; + u8 enabled_slices_mask = 0; - /* - * FIXME: for now we'll only ever use 1 slice; pretend that we have - * only that 1 slice enabled until we have a proper way for on-demand - * toggling of the second slice. 
- */ - if (0 && I915_READ(DBUF_CTL_S(DBUF_S2)) & DBUF_POWER_STATE) - enabled_dbuf_slices_num++; + for (i = 0; i < max_slices; i++) { + if (I915_READ(DBUF_CTL_S(i)) & DBUF_POWER_STATE) + enabled_slices_mask |= BIT(i); + } - return enabled_dbuf_slices_num; + return enabled_slices_mask; } /* @@ -3871,8 +3863,6 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, { struct drm_atomic_state *state = crtc_state->uapi.state; struct intel_atomic_state *intel_state = to_intel_atomic_state(state); - const struct drm_display_mode *adjusted_mode; - u64 total_data_bw; u16 ddb_size = INTEL_INFO(dev_priv)->ddb_size; WARN_ON(ddb_size == 0); @@ -3880,23 +3870,8 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, if (INTEL_GEN(dev_priv) < 11) return ddb_size - 4; /* 4 blocks for bypass path allocation */ - adjusted_mode = &crtc_state->hw.adjusted_mode; - total_data_bw = total_data_rate * drm_mode_vrefresh(adjusted_mode); - - /* - * 12GB/s is maximum BW supported by single DBuf slice. - * - * FIXME dbuf slice code is broken: - * - must wait for planes to stop using the slice before powering it off - * - plane straddling both slices is illegal in multi-pipe scenarios - * - should validate we stay within the hw bandwidth limits - */ - if (0 && (num_active > 1 || total_data_bw >= GBps(12))) { - intel_state->enabled_dbuf_slices_num = 2; - } else { - intel_state->enabled_dbuf_slices_num = 1; - ddb_size /= 2; - } + intel_state->enabled_dbuf_slices_mask = BIT(DBUF_S1); + ddb_size /= 2; return ddb_size; } @@ -4093,8 +4068,8 @@ void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc, void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv) { - dev_priv->enabled_dbuf_slices_num = - intel_enabled_dbuf_slices_num(dev_priv); + dev_priv->enabled_dbuf_slices_mask = + intel_enabled_dbuf_slices_mask(dev_priv); } /* @@ -5236,7 +5211,7 @@ skl_compute_ddb(struct intel_atomic_state *state) struct intel_crtc *crtc; int ret, i; - state->enabled_dbuf_slices_num = dev_priv->enabled_dbuf_slices_num; + state->enabled_dbuf_slices_mask = dev_priv->enabled_dbuf_slices_mask; for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h index 22fd2daf608e..d60a85421c5a 100644 --- a/drivers/gpu/drm/i915/intel_pm.h +++ b/drivers/gpu/drm/i915/intel_pm.h @@ -32,7 +32,7 @@ void g4x_wm_get_hw_state(struct drm_i915_private *dev_priv); void vlv_wm_get_hw_state(struct drm_i915_private *dev_priv); void ilk_wm_get_hw_state(struct drm_i915_private *dev_priv); void skl_wm_get_hw_state(struct drm_i915_private *dev_priv); -u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv); +u8 intel_enabled_dbuf_slices_mask(struct drm_i915_private *dev_priv); void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc, struct skl_ddb_entry *ddb_y, struct skl_ddb_entry *ddb_uv); From patchwork Fri Jan 17 09:50:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Lisovskiy, Stanislav" X-Patchwork-Id: 11338663 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 00F7E13A0 for ; Fri, 17 Jan 2020 09:52:50 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DD8682082F 
for ; Fri, 17 Jan 2020 09:52:49 +0000 (UTC)
From: Stanislav Lisovskiy
To: intel-gfx@lists.freedesktop.org
Date: Fri, 17 Jan 2020 11:50:26 +0200
Message-Id: <20200117095026.1113-6-stanislav.lisovskiy@intel.com>
In-Reply-To: <20200117095026.1113-1-stanislav.lisovskiy@intel.com>
References: <20200117095026.1113-1-stanislav.lisovskiy@intel.com>
Subject: [Intel-gfx] [PATCH v13 5/5] drm/i915: Correctly map DBUF slices to pipes

Added proper DBuf slice mapping to the corresponding pipes, depending
on the pipe configuration, as stated in BSpec.

v2: - Remove unneeded braces
    - Stop using a macro for the DBuf assignments as it seems to reduce
      readability.

v3: Start using the enabled slices mask in dev_priv

v4: Renamed "enabled_slices" used in dev_priv to
    "enabled_dbuf_slices_mask" (Matt Roper)

v5: - Removed redundant parameters from the intel_get_ddb_size
      function (Matt Roper)
    - Made i915_possible_dbuf_slices static (Matt Roper)
    - Renamed total_width to total_width_in_range so that it now
      reflects that this is not the total pipe width but the one within
      the range allowed for the pipe by the current dbuf slices
      (Matt Roper)
    - Removed the 4th pipe for ICL in the DBuf assignment table
      (Matt Roper)
    - Fixed a wrong DBuf slice in the DBuf table for TGL (Matt Roper)
    - Added a comment explaining why we are currently not using the
      pipe ratio for DBuf assignment on ICL

v6: - Changed u32 to unsigned int in the icl_get_first_dbuf_slice_offset
      function signature (Ville Syrjälä)
    - Also changed u32 to u8 in the dbuf slice mask structure
      (Ville Syrjälä)
    - Switched from DBUF_S1_BIT to an enum + explicit BIT(DBUF_S1)
      access (Ville Syrjälä)
    - Switched to named initializers in the DBuf assignment arrays
      (Ville Syrjälä)
    - The DBuf assignment arrays are now autogenerated with the tool
      from https://patchwork.freedesktop.org/series/70493/ to avoid
      typos.
    - Renamed i915_find_pipe_conf to *_compute_dbuf_slices
      (Ville Syrjälä)
    - Changed the platform ordering in skl_compute_dbuf_slices to be
      from newest to oldest (Ville Syrjälä)

v7: - Now always ORing the assigned DBuf slice config with DBUF_S1
      because slice 1 has to be constantly powered on.
(Ville Syrjälä) Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/intel_pm.c | 395 ++++++++++++++++++++++++++++++-- 1 file changed, 377 insertions(+), 18 deletions(-) diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index b4b291d4244b..b9cfbbeaef8b 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -3856,13 +3856,29 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state) return true; } -static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, - const struct intel_crtc_state *crtc_state, - const u64 total_data_rate, - const int num_active) +/* + * Calculate initial DBuf slice offset, based on slice size + * and mask(i.e if slice size is 1024 and second slice is enabled + * offset would be 1024) + */ +static unsigned int +icl_get_first_dbuf_slice_offset(u32 dbuf_slice_mask, + u32 slice_size, + u32 ddb_size) +{ + unsigned int offset = 0; + + if (!dbuf_slice_mask) + return 0; + + offset = (ffs(dbuf_slice_mask) - 1) * slice_size; + + WARN_ON(offset >= ddb_size); + return offset; +} + +static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv) { - struct drm_atomic_state *state = crtc_state->uapi.state; - struct intel_atomic_state *intel_state = to_intel_atomic_state(state); u16 ddb_size = INTEL_INFO(dev_priv)->ddb_size; WARN_ON(ddb_size == 0); @@ -3870,12 +3886,12 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, if (INTEL_GEN(dev_priv) < 11) return ddb_size - 4; /* 4 blocks for bypass path allocation */ - intel_state->enabled_dbuf_slices_mask = BIT(DBUF_S1); - ddb_size /= 2; - return ddb_size; } +static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state, + u32 active_pipes); + static void skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, const struct intel_crtc_state *crtc_state, @@ -3887,10 +3903,17 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, struct intel_atomic_state *intel_state = to_intel_atomic_state(state); struct drm_crtc *for_crtc = crtc_state->uapi.crtc; const struct intel_crtc *crtc; - u32 pipe_width = 0, total_width = 0, width_before_pipe = 0; + u32 pipe_width = 0, total_width_in_range = 0, width_before_pipe = 0; enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe; u16 ddb_size; + u32 ddb_range_size; u32 i; + u32 dbuf_slice_mask; + u32 active_pipes; + u32 offset; + u32 slice_size; + u32 total_slice_mask; + u32 start, end; if (WARN_ON(!state) || !crtc_state->hw.active) { alloc->start = 0; @@ -3900,12 +3923,15 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, } if (intel_state->active_pipe_changes) - *num_active = hweight8(intel_state->active_pipes); + active_pipes = intel_state->active_pipes; else - *num_active = hweight8(dev_priv->active_pipes); + active_pipes = dev_priv->active_pipes; + + *num_active = hweight8(active_pipes); + + ddb_size = intel_get_ddb_size(dev_priv); - ddb_size = intel_get_ddb_size(dev_priv, crtc_state, total_data_rate, - *num_active); + slice_size = ddb_size / INTEL_INFO(dev_priv)->num_supported_dbuf_slices; /* * If the state doesn't change the active CRTC's or there is no @@ -3924,22 +3950,71 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, return; } + /* + * Get allowed DBuf slices for correspondent pipe and platform. 
+ */ + dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, active_pipes); + + DRM_DEBUG_KMS("DBuf slice mask %x pipe %d active pipes %x\n", + dbuf_slice_mask, + for_pipe, active_pipes); + + /* + * Figure out at which DBuf slice we start, i.e if we start at Dbuf S2 + * and slice size is 1024, the offset would be 1024 + */ + offset = icl_get_first_dbuf_slice_offset(dbuf_slice_mask, + slice_size, ddb_size); + + /* + * Figure out total size of allowed DBuf slices, which is basically + * a number of allowed slices for that pipe multiplied by slice size. + * Inside of this + * range ddb entries are still allocated in proportion to display width. + */ + ddb_range_size = hweight8(dbuf_slice_mask) * slice_size; + /* * Watermark/ddb requirement highly depends upon width of the * framebuffer, So instead of allocating DDB equally among pipes * distribute DDB based on resolution/width of the display. */ + total_slice_mask = dbuf_slice_mask; for_each_new_intel_crtc_in_state(intel_state, crtc, crtc_state, i) { const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; enum pipe pipe = crtc->pipe; int hdisplay, vdisplay; + u32 pipe_dbuf_slice_mask; - if (!crtc_state->hw.enable) + if (!crtc_state->hw.active) + continue; + + pipe_dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, + active_pipes); + + /* + * According to BSpec pipe can share one dbuf slice with another + * pipes or pipe can use multiple dbufs, in both cases we + * account for other pipes only if they have exactly same mask. + * However we need to account how many slices we should enable + * in total. + */ + total_slice_mask |= pipe_dbuf_slice_mask; + + /* + * Do not account pipes using other slice sets + * luckily as of current BSpec slice sets do not partially + * intersect(pipes share either same one slice or same slice set + * i.e no partial intersection), so it is enough to check for + * equality for now. + */ + if (dbuf_slice_mask != pipe_dbuf_slice_mask) continue; drm_mode_get_hv_timing(adjusted_mode, &hdisplay, &vdisplay); - total_width += hdisplay; + + total_width_in_range += hdisplay; if (pipe < for_pipe) width_before_pipe += hdisplay; @@ -3947,8 +4022,31 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, pipe_width = hdisplay; } - alloc->start = ddb_size * width_before_pipe / total_width; - alloc->end = ddb_size * (width_before_pipe + pipe_width) / total_width; + /* + * FIXME(how?): + * It looks ridiculous. It looks stupid. But currently we need to OR it + * with DBUF_S1 bit always, during core_init/uninit, we have to always + * heve powered on 1st DBuf slice, according to BSpec(!). + * So for this reason icl/skl_dbuf_enable always sets it on. + * But we now have a unified assertion in gen9_disable_dc_states, which + * is supposed to check if the actual state matches state in dev_priv. + * Thus we always going to have a mismatch! So currently we are going + * to OR it, keep DBUF_S1 constantly ON even when this is not needed.. 
@@ -3947,8 +4022,31 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 		pipe_width = hdisplay;
 	}
 
-	alloc->start = ddb_size * width_before_pipe / total_width;
-	alloc->end = ddb_size * (width_before_pipe + pipe_width) / total_width;
+	/*
+	 * FIXME(how?):
+	 * It looks ridiculous. It looks stupid. But currently we need to
+	 * always OR in the DBUF_S1 bit: during core init/uninit we always
+	 * have to have the first DBuf slice powered on, according to BSpec(!).
+	 * So for this reason icl/skl_dbuf_enable always sets it on.
+	 * But we now have a unified assertion in gen9_disable_dc_states, which
+	 * is supposed to check that the actual state matches the state in
+	 * dev_priv. Thus we are always going to have a mismatch! So for now
+	 * we OR it in and keep DBUF_S1 constantly ON, even when it is not
+	 * needed.
+	 */
+	intel_state->enabled_dbuf_slices_mask = total_slice_mask | BIT(DBUF_S1);
+
+	start = ddb_range_size * width_before_pipe / total_width_in_range;
+	end = ddb_range_size *
+		(width_before_pipe + pipe_width) / total_width_in_range;
+
+	alloc->start = offset + start;
+	alloc->end = offset + end;
+
+	DRM_DEBUG_KMS("Pipe %d ddb %d-%d\n", for_pipe,
+		      alloc->start, alloc->end);
+	DRM_DEBUG_KMS("Enabled ddb slices mask %x num supported %d\n",
+		      intel_state->enabled_dbuf_slices_mask,
+		      INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
 }
 
 static int skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
@@ -4119,6 +4217,267 @@ skl_plane_downscale_amount(const struct intel_crtc_state *crtc_state,
 	return mul_fixed16(downscale_w, downscale_h);
 }
 
+struct dbuf_slice_conf_entry {
+	u8 active_pipes;
+	u8 dbuf_mask[I915_MAX_PIPES];
+};
+
+/*
+ * Table taken from Bspec 12716
+ * Pipes do have some preferred DBuf slice affinity,
+ * plus there are some hardcoded requirements on how
+ * those should be distributed for multipipe scenarios.
+ * With more DBuf slices the algorithm can get even more
+ * messy and less readable, so we use a table almost as is
+ * from BSpec itself - that way it is at least easier
+ * to compare, change and check.
+ */
+static struct dbuf_slice_conf_entry icl_allowed_dbufs[] =
+/* Autogenerated with igt/tools/intel_dbuf_map tool: */
+{
+	{
+		.active_pipes = BIT(PIPE_A),
+		{
+			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_B),
+		{
+			[PIPE_B] = BIT(DBUF_S1) | BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_B] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_C),
+		{
+			[PIPE_C] = BIT(DBUF_S2) | BIT(DBUF_S1)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C),
+		{
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2)
+		}
+	},
+};
+
+/*
+ * Table taken from Bspec 49255
+ * Pipes do have some preferred DBuf slice affinity,
+ * plus there are some hardcoded requirements on how
+ * those should be distributed for multipipe scenarios.
+ * With more DBuf slices the algorithm can get even more
+ * messy and less readable, so we use a table almost as is
+ * from BSpec itself - that way it is at least easier
+ * to compare, change and check.
+ */
+static struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
+/* Autogenerated with igt/tools/intel_dbuf_map tool: */
+{
+	{
+		.active_pipes = BIT(PIPE_A),
+		{
+			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_B),
+		{
+			[PIPE_B] = BIT(DBUF_S1) | BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
+		{
+			[PIPE_A] = BIT(DBUF_S2),
+			[PIPE_B] = BIT(DBUF_S1)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_C),
+		{
+			[PIPE_C] = BIT(DBUF_S2) | BIT(DBUF_S1)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C),
+		{
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_D),
+		{
+			[PIPE_D] = BIT(DBUF_S2) | BIT(DBUF_S1)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_D),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_B) | BIT(PIPE_D),
+		{
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_D),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_C) | BIT(PIPE_D),
+		{
+			[PIPE_C] = BIT(DBUF_S1),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C) | BIT(PIPE_D),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
+		{
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+	{
+		.active_pipes =
+			BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
+		{
+			[PIPE_A] = BIT(DBUF_S1),
+			[PIPE_B] = BIT(DBUF_S1),
+			[PIPE_C] = BIT(DBUF_S2),
+			[PIPE_D] = BIT(DBUF_S2)
+		}
+	},
+};
+
+static u8 compute_dbuf_slices(enum pipe pipe,
+			      u32 active_pipes,
+			      const struct dbuf_slice_conf_entry *dbuf_slices,
+			      int size)
+{
+	int i;
+
+	for (i = 0; i < size; i++) {
+		if (dbuf_slices[i].active_pipes == active_pipes)
+			return dbuf_slices[i].dbuf_mask[pipe];
+	}
+	return 0;
+}
+
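As a usage sketch (hypothetical caller, not part of the patch), looking up the
ICL table above for the A+B configuration behaves as follows; any combination
that has no entry in the table, or a pipe not listed in the matching entry,
simply yields 0:

	u32 active = BIT(PIPE_A) | BIT(PIPE_B);

	compute_dbuf_slices(PIPE_A, active, icl_allowed_dbufs,
			    ARRAY_SIZE(icl_allowed_dbufs));	/* -> BIT(DBUF_S1) */
	compute_dbuf_slices(PIPE_B, active, icl_allowed_dbufs,
			    ARRAY_SIZE(icl_allowed_dbufs));	/* -> BIT(DBUF_S2) */
	compute_dbuf_slices(PIPE_C, active, icl_allowed_dbufs,
			    ARRAY_SIZE(icl_allowed_dbufs));	/* -> 0 */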
+/*
+ * This function finds an entry with the same enabled pipe configuration
+ * and returns the corresponding DBuf slice mask as stated in BSpec for
+ * the particular platform.
+ */
+static u32 icl_compute_dbuf_slices(enum pipe pipe,
+				   u32 active_pipes,
+				   const struct intel_crtc_state *crtc_state)
+{
+	/*
+	 * FIXME: For ICL this is still a bit unclear, as a previous BSpec
+	 * revision required calculating a "pipe ratio" in order to determine
+	 * whether one or two slices can be used for single pipe
+	 * configurations, as an additional constraint on top of the existing
+	 * table. However, based on more recent info, it should not be a
+	 * "pipe ratio" but rather the ratio between pixel_rate and cdclk with
+	 * some additional constants, so for now we use only the table until
+	 * this is clarified. This is also the reason why the crtc_state param
+	 * is still here - we will need it once those additional constraints
+	 * pop up.
+	 */
+	return compute_dbuf_slices(pipe, active_pipes,
+				   icl_allowed_dbufs,
+				   ARRAY_SIZE(icl_allowed_dbufs));
+}
+
+static u32 tgl_compute_dbuf_slices(enum pipe pipe,
+				   u32 active_pipes,
+				   const struct intel_crtc_state *crtc_state)
+{
+	return compute_dbuf_slices(pipe, active_pipes,
+				   tgl_allowed_dbufs,
+				   ARRAY_SIZE(tgl_allowed_dbufs));
+}
+
+static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
+				  u32 active_pipes)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	enum pipe pipe = crtc->pipe;
+
+	if (IS_GEN(dev_priv, 12))
+		return tgl_compute_dbuf_slices(pipe,
+					       active_pipes,
+					       crtc_state);
+	else if (IS_GEN(dev_priv, 11))
+		return icl_compute_dbuf_slices(pipe,
+					       active_pipes,
+					       crtc_state);
+	/*
+	 * For anything else just return one slice for now.
+	 * Should be extended for other platforms.
+	 */
+	return BIT(DBUF_S1);
+}
+
 static u64
 skl_plane_relative_data_rate(const struct intel_crtc_state *crtc_state,
			     const struct intel_plane_state *plane_state,