From patchwork Thu Feb 20 12:07:36 2020
X-Patchwork-Submitter: "Lisovskiy, Stanislav"
X-Patchwork-Id: 11393913
From: Stanislav Lisovskiy
To: intel-gfx@lists.freedesktop.org
Date: Thu, 20 Feb 2020 14:07:36 +0200
Message-Id: <20200220120741.6917-3-stanislav.lisovskiy@intel.com>
X-Mailer: git-send-email 2.24.1.485.gad05a3d8e5
In-Reply-To: <20200220120741.6917-1-stanislav.lisovskiy@intel.com>
References: <20200220120741.6917-1-stanislav.lisovskiy@intel.com>
Subject: [Intel-gfx] [PATCH v17 2/7] drm/i915: Introduce skl_plane_wm_level accessor.

For the future Gen12 SAGV implementation we need to seamlessly alter the
calculated wm levels, depending on whether we are allowed to enable SAGV
or not, so this accessor gives us the additional flexibility to do that.

Currently the accessor still works as a simple "pass-through" function;
this will change in upcoming patches in this series.
v2: - plane_id -> plane->id (Ville Syrjälä)
    - Moved wm_level var to have more local scope (Ville Syrjälä)
    - Renamed yuv to color_plane (Ville Syrjälä) in skl_plane_wm_level

Signed-off-by: Stanislav Lisovskiy
---
 drivers/gpu/drm/i915/intel_pm.c | 120 +++++++++++++++++++++-----------
 1 file changed, 81 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index d6933e382657..e1d167429489 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4548,6 +4548,18 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 	return total_data_rate;
 }
 
+static const struct skl_wm_level *
+skl_plane_wm_level(struct intel_plane *plane,
+		   const struct intel_crtc_state *crtc_state,
+		   int level,
+		   int color_plane)
+{
+	const struct skl_plane_wm *wm =
+		&crtc_state->wm.skl.optimal.planes[plane->id];
+
+	return color_plane ? &wm->uv_wm[level] : &wm->wm[level];
+}
+
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
@@ -4560,7 +4572,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	u16 total[I915_MAX_PLANES] = {};
 	u16 uv_total[I915_MAX_PLANES] = {};
 	u64 total_data_rate;
-	enum plane_id plane_id;
+	struct intel_plane *plane;
 	int num_active;
 	u64 plane_data_rate[I915_MAX_PLANES] = {};
 	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
@@ -4612,22 +4624,28 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 */
 	for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
 		blocks = 0;
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-			const struct skl_plane_wm *wm =
-				&crtc_state->wm.skl.optimal.planes[plane_id];
-			if (plane_id == PLANE_CURSOR) {
-				if (wm->wm[level].min_ddb_alloc > total[PLANE_CURSOR]) {
+		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+			const struct skl_wm_level *wm_level;
+			const struct skl_wm_level *wm_uv_level;
+
+			wm_level = skl_plane_wm_level(plane, crtc_state,
+						      level, false);
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+							 level, true);
+
+			if (plane->id == PLANE_CURSOR) {
+				if (wm_level->min_ddb_alloc > total[PLANE_CURSOR]) {
 					drm_WARN_ON(&dev_priv->drm,
-						    wm->wm[level].min_ddb_alloc != U16_MAX);
+						    wm_level->min_ddb_alloc != U16_MAX);
 					blocks = U32_MAX;
 					break;
 				}
 				continue;
 			}
 
-			blocks += wm->wm[level].min_ddb_alloc;
-			blocks += wm->uv_wm[level].min_ddb_alloc;
+			blocks += wm_level->min_ddb_alloc;
+			blocks += wm_uv_level->min_ddb_alloc;
 		}
 
 		if (blocks <= alloc_size) {
@@ -4649,13 +4667,18 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 * watermark level, plus an extra share of the leftover blocks
 	 * proportional to its relative data rate.
 	 */
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-		const struct skl_plane_wm *wm =
-			&crtc_state->wm.skl.optimal.planes[plane_id];
+	for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+		const struct skl_wm_level *wm_level;
+		const struct skl_wm_level *wm_uv_level;
 		u64 rate;
 		u16 extra;
 
-		if (plane_id == PLANE_CURSOR)
+		wm_level = skl_plane_wm_level(plane, crtc_state,
+					      level, false);
+		wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+						 level, true);
+
+		if (plane->id == PLANE_CURSOR)
 			continue;
 
 		/*
@@ -4665,22 +4688,22 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 		if (total_data_rate == 0)
 			break;
 
-		rate = plane_data_rate[plane_id];
-		extra = min_t(u16, alloc_size,
-			      DIV64_U64_ROUND_UP(alloc_size * rate,
-						 total_data_rate));
-		total[plane_id] = wm->wm[level].min_ddb_alloc + extra;
+		rate = plane_data_rate[plane->id];
+		extra = min_t(u16, alloc_size,
+			      DIV64_U64_ROUND_UP(alloc_size * rate,
+						 total_data_rate));
+		total[plane->id] = wm_level->min_ddb_alloc + extra;
 		alloc_size -= extra;
 		total_data_rate -= rate;
 
 		if (total_data_rate == 0)
 			break;
 
-		rate = uv_plane_data_rate[plane_id];
-		extra = min_t(u16, alloc_size,
-			      DIV64_U64_ROUND_UP(alloc_size * rate,
-						 total_data_rate));
-		uv_total[plane_id] = wm->uv_wm[level].min_ddb_alloc + extra;
+		rate = uv_plane_data_rate[plane->id];
+		extra = min_t(u16, alloc_size,
+			      DIV64_U64_ROUND_UP(alloc_size * rate,
+						 total_data_rate));
+		uv_total[plane->id] = wm_uv_level->min_ddb_alloc + extra;
 		alloc_size -= extra;
 		total_data_rate -= rate;
 	}
@@ -4688,29 +4711,29 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 
 	/* Set the actual DDB start/end points for each plane */
 	start = alloc->start;
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+	for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
 		struct skl_ddb_entry *plane_alloc =
-			&crtc_state->wm.skl.plane_ddb_y[plane_id];
+			&crtc_state->wm.skl.plane_ddb_y[plane->id];
 		struct skl_ddb_entry *uv_plane_alloc =
-			&crtc_state->wm.skl.plane_ddb_uv[plane_id];
+			&crtc_state->wm.skl.plane_ddb_uv[plane->id];
 
-		if (plane_id == PLANE_CURSOR)
+		if (plane->id == PLANE_CURSOR)
 			continue;
 
 		/* Gen11+ uses a separate plane for UV watermarks */
 		drm_WARN_ON(&dev_priv->drm,
-			    INTEL_GEN(dev_priv) >= 11 && uv_total[plane_id]);
+			    INTEL_GEN(dev_priv) >= 11 && uv_total[plane->id]);
 
 		/* Leave disabled planes at (0,0) */
-		if (total[plane_id]) {
+		if (total[plane->id]) {
 			plane_alloc->start = start;
-			start += total[plane_id];
+			start += total[plane->id];
 			plane_alloc->end = start;
 		}
 
-		if (uv_total[plane_id]) {
+		if (uv_total[plane->id]) {
 			uv_plane_alloc->start = start;
-			start += uv_total[plane_id];
+			start += uv_total[plane->id];
 			uv_plane_alloc->end = start;
 		}
 	}
@@ -4722,9 +4745,16 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 * that aren't actually possible.
 	 */
 	for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+			const struct skl_wm_level *wm_level;
+			const struct skl_wm_level *wm_uv_level;
 			struct skl_plane_wm *wm =
-				&crtc_state->wm.skl.optimal.planes[plane_id];
+				&crtc_state->wm.skl.optimal.planes[plane->id];
+
+			wm_level = skl_plane_wm_level(plane, crtc_state,
+						      level, false);
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+							 level, true);
 
 			/*
 			 * We only disable the watermarks for each plane if
@@ -4738,9 +4768,10 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 			 * planes must be enabled before the level will be used."
 			 * So this is actually safe to do.
 			 */
-			if (wm->wm[level].min_ddb_alloc > total[plane_id] ||
-			    wm->uv_wm[level].min_ddb_alloc > uv_total[plane_id])
-				memset(&wm->wm[level], 0, sizeof(wm->wm[level]));
+			if (wm_level->min_ddb_alloc > total[plane->id] ||
+			    wm_uv_level->min_ddb_alloc > uv_total[plane->id])
+				memset(&wm->wm[level], 0,
+				       sizeof(struct skl_wm_level));
 
 			/*
 			 * Wa_1408961008:icl, ehl
@@ -4748,9 +4779,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 			 */
 			if (IS_GEN(dev_priv, 11) &&
 			    level == 1 && wm->wm[0].plane_en) {
-				wm->wm[level].plane_res_b = wm->wm[0].plane_res_b;
-				wm->wm[level].plane_res_l = wm->wm[0].plane_res_l;
-				wm->wm[level].ignore_lines = wm->wm[0].ignore_lines;
+				wm_level = skl_plane_wm_level(plane, crtc_state,
+							      0, false);
+				wm->wm[level].plane_res_b =
+					wm_level->plane_res_b;
+				wm->wm[level].plane_res_l =
+					wm_level->plane_res_l;
+				wm->wm[level].ignore_lines =
+					wm_level->ignore_lines;
 			}
 		}
 	}
@@ -4759,11 +4795,11 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 * Go back and disable the transition watermark if it turns out we
 	 * don't have enough DDB blocks for it.
 	 */
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+	for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
 		struct skl_plane_wm *wm =
-			&crtc_state->wm.skl.optimal.planes[plane_id];
+			&crtc_state->wm.skl.optimal.planes[plane->id];
 
-		if (wm->trans_wm.plane_res_b >= total[plane_id])
+		if (wm->trans_wm.plane_res_b >= total[plane->id])
 			memset(&wm->trans_wm, 0, sizeof(wm->trans_wm));
 	}
 
@@ -5354,10 +5390,13 @@ void skl_write_plane_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
 	const struct skl_ddb_entry *ddb_uv =
 		&crtc_state->wm.skl.plane_ddb_uv[plane_id];
+	const struct skl_wm_level *wm_level;
 
 	for (level = 0; level <= max_level; level++) {
+		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
 		skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
 			   &wm->trans_wm);
@@ -5388,10 +5427,13 @@ void skl_write_cursor_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.optimal.planes[plane_id];
 	const struct skl_ddb_entry *ddb =
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
+	const struct skl_wm_level *wm_level;
 
 	for (level = 0; level <= max_level; level++) {
+		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
 		skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe),
 			   &wm->trans_wm);
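
[Not part of the patch: the commit message says the pass-through behaviour
will change in upcoming patches of this series, so a SAGV-aware version of
the accessor could plausibly look like the sketch below. The use_sagv_wm
flag and sagv_wm0 field are assumed names used only for this illustration;
they are not introduced by this patch.]

static const struct skl_wm_level *
skl_plane_wm_level(struct intel_plane *plane,
		   const struct intel_crtc_state *crtc_state,
		   int level,
		   int color_plane)
{
	const struct skl_plane_wm *wm =
		&crtc_state->wm.skl.optimal.planes[plane->id];

	/*
	 * Assumed for illustration only: when SAGV may be enabled, level 0
	 * comes from a dedicated SAGV watermark instead of wm[0].
	 */
	if (level == 0 && crtc_state->wm.skl.use_sagv_wm)
		return &wm->sagv_wm0;

	return color_plane ? &wm->uv_wm[level] : &wm->wm[level];
}

Funneling every wm->wm[level]/wm->uv_wm[level] read through this accessor
means such a switch only has to happen in one place rather than at every
call site, which is the flexibility the commit message is after.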