From patchwork Mon Feb 3 14:07:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Lisovskiy, Stanislav"
X-Patchwork-Id: 11362799
From: Stanislav Lisovskiy
To: intel-gfx@lists.freedesktop.org
Date: Mon, 3 Feb 2020 16:07:42 +0200
Message-Id: <20200203140747.22771-3-stanislav.lisovskiy@intel.com>
X-Mailer: git-send-email 2.24.1.485.gad05a3d8e5
In-Reply-To: <20200203140747.22771-1-stanislav.lisovskiy@intel.com>
References: <20200203140747.22771-1-stanislav.lisovskiy@intel.com>
Subject: [Intel-gfx] [PATCH v14 2/7] drm/i915: Introduce skl_plane_wm_level accessor.
List-Id: Intel graphics driver community testing & development
Sender: "Intel-gfx"

For the future Gen12 SAGV implementation we need to seamlessly alter the
calculated wm levels, depending on whether we are allowed to enable SAGV
or not, so this accessor will give us the additional flexibility to do
that. Currently the accessor still works as a simple "pass-through"
function; this will change in the next patches of this series.
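As an illustration of how callers use it, a minimal sketch (only names
that already appear in the hunks below; simplified fragment, not a
complete function):

    /* Pick the level-0 luma watermark for a plane via the accessor. */
    const struct skl_wm_level *wm_level =
            skl_plane_wm_level(plane, crtc_state, 0, false);

    /*
     * Today this is identical to reading wm->wm[0] directly, since the
     * accessor is a plain pass-through.
     */
    blocks += wm_level->min_ddb_alloc;

Because the accessor currently just forwards to wm->wm[level] or
wm->uv_wm[level], behaviour is unchanged; the indirection only becomes
meaningful once the SAGV-aware selection is added later in the series.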
v2: - plane_id -> plane->id (Ville Syrjälä)
    - Moved wm_level var to have more local scope (Ville Syrjälä)
    - Renamed yuv to color_plane (Ville Syrjälä) in skl_plane_wm_level

Signed-off-by: Stanislav Lisovskiy
---
 drivers/gpu/drm/i915/intel_pm.c | 120 +++++++++++++++++++++-----------
 1 file changed, 81 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 01a152dc9774..b5043ccd159b 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4226,6 +4226,18 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
         return total_data_rate;
 }

+static const struct skl_wm_level *
+skl_plane_wm_level(struct intel_plane *plane,
+                   const struct intel_crtc_state *crtc_state,
+                   int level,
+                   int color_plane)
+{
+        const struct skl_plane_wm *wm =
+                &crtc_state->wm.skl.optimal.planes[plane->id];
+
+        return color_plane ? &wm->uv_wm[level] : &wm->wm[level];
+}
+
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
                       struct skl_ddb_allocation *ddb /* out */)
@@ -4239,7 +4251,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
         u16 total[I915_MAX_PLANES] = {};
         u16 uv_total[I915_MAX_PLANES] = {};
         u64 total_data_rate;
-        enum plane_id plane_id;
+        struct intel_plane *plane;
         int num_active;
         u64 plane_data_rate[I915_MAX_PLANES] = {};
         u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
@@ -4291,22 +4303,28 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
          */
         for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
                 blocks = 0;
-                for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-                        const struct skl_plane_wm *wm =
-                                &crtc_state->wm.skl.optimal.planes[plane_id];
-                        if (plane_id == PLANE_CURSOR) {
-                                if (wm->wm[level].min_ddb_alloc > total[PLANE_CURSOR]) {
+                for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+                        const struct skl_wm_level *wm_level;
+                        const struct skl_wm_level *wm_uv_level;
+
+                        wm_level = skl_plane_wm_level(plane, crtc_state,
+                                                      level, false);
+                        wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+                                                         level, true);
+
+                        if (plane->id == PLANE_CURSOR) {
+                                if (wm_level->min_ddb_alloc > total[PLANE_CURSOR]) {
                                         drm_WARN_ON(&dev_priv->drm,
-                                                    wm->wm[level].min_ddb_alloc != U16_MAX);
+                                                    wm_level->min_ddb_alloc != U16_MAX);
                                         blocks = U32_MAX;
                                         break;
                                 }
                                 continue;
                         }

-                        blocks += wm->wm[level].min_ddb_alloc;
-                        blocks += wm->uv_wm[level].min_ddb_alloc;
+                        blocks += wm_level->min_ddb_alloc;
+                        blocks += wm_uv_level->min_ddb_alloc;
                 }

                 if (blocks <= alloc_size) {
@@ -4328,13 +4346,18 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
          * watermark level, plus an extra share of the leftover blocks
          * proportional to its relative data rate.
          */
-        for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-                const struct skl_plane_wm *wm =
-                        &crtc_state->wm.skl.optimal.planes[plane_id];
+        for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+                const struct skl_wm_level *wm_level;
+                const struct skl_wm_level *wm_uv_level;
                 u64 rate;
                 u16 extra;

-                if (plane_id == PLANE_CURSOR)
+                wm_level = skl_plane_wm_level(plane, crtc_state,
+                                              level, false);
+                wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+                                                 level, true);
+
+                if (plane->id == PLANE_CURSOR)
                         continue;

                 /*
@@ -4344,22 +4367,22 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
                 if (total_data_rate == 0)
                         break;

-                rate = plane_data_rate[plane_id];
+                rate = plane_data_rate[plane->id];
                 extra = min_t(u16, alloc_size,
                               DIV64_U64_ROUND_UP(alloc_size * rate,
                                                  total_data_rate));
-                total[plane_id] = wm->wm[level].min_ddb_alloc + extra;
+                total[plane->id] = wm_level->min_ddb_alloc + extra;
                 alloc_size -= extra;
                 total_data_rate -= rate;

                 if (total_data_rate == 0)
                         break;

-                rate = uv_plane_data_rate[plane_id];
+                rate = uv_plane_data_rate[plane->id];
                 extra = min_t(u16, alloc_size,
                               DIV64_U64_ROUND_UP(alloc_size * rate,
                                                  total_data_rate));
-                uv_total[plane_id] = wm->uv_wm[level].min_ddb_alloc + extra;
+                uv_total[plane->id] = wm_uv_level->min_ddb_alloc + extra;
                 alloc_size -= extra;
                 total_data_rate -= rate;
         }
@@ -4367,29 +4390,29 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,

         /* Set the actual DDB start/end points for each plane */
         start = alloc->start;
-        for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+        for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
                 struct skl_ddb_entry *plane_alloc =
-                        &crtc_state->wm.skl.plane_ddb_y[plane_id];
+                        &crtc_state->wm.skl.plane_ddb_y[plane->id];
                 struct skl_ddb_entry *uv_plane_alloc =
-                        &crtc_state->wm.skl.plane_ddb_uv[plane_id];
+                        &crtc_state->wm.skl.plane_ddb_uv[plane->id];

-                if (plane_id == PLANE_CURSOR)
+                if (plane->id == PLANE_CURSOR)
                         continue;

                 /* Gen11+ uses a separate plane for UV watermarks */
                 drm_WARN_ON(&dev_priv->drm,
-                            INTEL_GEN(dev_priv) >= 11 && uv_total[plane_id]);
+                            INTEL_GEN(dev_priv) >= 11 && uv_total[plane->id]);

                 /* Leave disabled planes at (0,0) */
-                if (total[plane_id]) {
+                if (total[plane->id]) {
                         plane_alloc->start = start;
-                        start += total[plane_id];
+                        start += total[plane->id];
                         plane_alloc->end = start;
                 }

-                if (uv_total[plane_id]) {
+                if (uv_total[plane->id]) {
                         uv_plane_alloc->start = start;
-                        start += uv_total[plane_id];
+                        start += uv_total[plane->id];
                         uv_plane_alloc->end = start;
                 }
         }
@@ -4401,9 +4424,16 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
          * that aren't actually possible.
          */
         for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
-                for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+                for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+                        const struct skl_wm_level *wm_level;
+                        const struct skl_wm_level *wm_uv_level;
                         struct skl_plane_wm *wm =
-                                &crtc_state->wm.skl.optimal.planes[plane_id];
+                                &crtc_state->wm.skl.optimal.planes[plane->id];
+
+                        wm_level = skl_plane_wm_level(plane, crtc_state,
+                                                      level, false);
+                        wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+                                                         level, true);

                         /*
                          * We only disable the watermarks for each plane if
@@ -4417,9 +4447,10 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
                          * planes must be enabled before the level will be used."
                          * So this is actually safe to do.
                          */
-                        if (wm->wm[level].min_ddb_alloc > total[plane_id] ||
-                            wm->uv_wm[level].min_ddb_alloc > uv_total[plane_id])
-                                memset(&wm->wm[level], 0, sizeof(wm->wm[level]));
+                        if (wm_level->min_ddb_alloc > total[plane->id] ||
+                            wm_uv_level->min_ddb_alloc > uv_total[plane->id])
+                                memset(&wm->wm[level], 0,
+                                       sizeof(struct skl_wm_level));

                         /*
                          * Wa_1408961008:icl, ehl
@@ -4427,9 +4458,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
                          */
                         if (IS_GEN(dev_priv, 11) &&
                             level == 1 && wm->wm[0].plane_en) {
-                                wm->wm[level].plane_res_b = wm->wm[0].plane_res_b;
-                                wm->wm[level].plane_res_l = wm->wm[0].plane_res_l;
-                                wm->wm[level].ignore_lines = wm->wm[0].ignore_lines;
+                                wm_level = skl_plane_wm_level(plane, crtc_state,
+                                                              0, false);
+                                wm->wm[level].plane_res_b =
+                                        wm_level->plane_res_b;
+                                wm->wm[level].plane_res_l =
+                                        wm_level->plane_res_l;
+                                wm->wm[level].ignore_lines =
+                                        wm_level->ignore_lines;
                         }
                 }
         }
@@ -4438,11 +4474,11 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
          * Go back and disable the transition watermark if it turns out we
          * don't have enough DDB blocks for it.
          */
-        for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+        for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
                 struct skl_plane_wm *wm =
-                        &crtc_state->wm.skl.optimal.planes[plane_id];
+                        &crtc_state->wm.skl.optimal.planes[plane->id];

-                if (wm->trans_wm.plane_res_b >= total[plane_id])
+                if (wm->trans_wm.plane_res_b >= total[plane->id])
                         memset(&wm->trans_wm, 0, sizeof(wm->trans_wm));
         }

@@ -5033,10 +5069,13 @@ void skl_write_plane_wm(struct intel_plane *plane,
                 &crtc_state->wm.skl.plane_ddb_y[plane_id];
         const struct skl_ddb_entry *ddb_uv =
                 &crtc_state->wm.skl.plane_ddb_uv[plane_id];
+        const struct skl_wm_level *wm_level;

         for (level = 0; level <= max_level; level++) {
+                wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
                 skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
-                                   &wm->wm[level]);
+                                   wm_level);
         }
         skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
                            &wm->trans_wm);
@@ -5067,10 +5106,13 @@ void skl_write_cursor_wm(struct intel_plane *plane,
                 &crtc_state->wm.skl.optimal.planes[plane_id];
         const struct skl_ddb_entry *ddb =
                 &crtc_state->wm.skl.plane_ddb_y[plane_id];
+        const struct skl_wm_level *wm_level;

         for (level = 0; level <= max_level; level++) {
+                wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
                 skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
-                                   &wm->wm[level]);
+                                   wm_level);
         }
         skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe), &wm->trans_wm);