From patchwork Tue Nov 3 16:55:37 2015
From: Matt Roper
To: intel-gfx@lists.freedesktop.org
Cc: jani.nikula@intel.com
Subject: [Intel-gfx] [PATCH 4/4] drm/i915: Add two-stage ILK-style watermark programming (v7)
Date: Tue, 3 Nov 2015 08:55:37 -0800
Message-Id: <1446569737-17479-1-git-send-email-matthew.d.roper@intel.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1446516892-25937-5-git-send-email-matthew.d.roper@intel.com>
References: <1446516892-25937-5-git-send-email-matthew.d.roper@intel.com>

In addition to calculating final watermarks, let's also pre-calculate a
set of intermediate watermark values at atomic check time. These
intermediate watermarks are a combination of the watermarks for the old
state and the new state; they should satisfy the requirements of both
states, which means they can be programmed immediately when we commit
the atomic state (without waiting for a vblank). Once the vblank does
happen, we can then re-program watermarks to the more optimal final
value.

v2: Significant rebasing/rewriting.

v3:
 - Move 'need_postvbl_update' flag to CRTC state (Daniel)
 - Don't forget to check intermediate watermark values for validity
   (Maarten)
 - Don't do async watermark optimization; just do it at the end of the
   atomic transaction, after waiting for vblanks.
   We do want it to be async eventually, but adding that now will cause
   more trouble for Maarten's in-progress work. (Maarten)
 - Don't allocate space in crtc_state for intermediate watermarks on
   platforms that don't need it (gen9+).
 - Move WaCxSRDisabledForSpriteScaling:ivb into intel_begin_crtc_commit
   now that ilk_update_wm is gone.

v4:
 - Add a wm_mutex to cover updates to intel_crtc->active and the
   need_postvbl_update flag. Since we don't have async yet, this isn't
   terribly important, but we might as well add it now.
 - Change interface to program watermarks. Platforms will now expose
   .initial_watermarks() and .optimize_watermarks() functions to do
   watermark programming. These should lock wm_mutex, copy the
   appropriate state values into intel_crtc->active, and then call the
   internal program watermarks function.

v5:
 - Skip intermediate watermark calculation/check during initial hardware
   readout since we don't trust the existing HW values (and don't have
   valid values of our own yet).
 - Don't try to call .optimize_watermarks() on platforms that don't have
   atomic watermarks yet. (Maarten)

v6:
 - Rebase

v7:
 - Further rebase

Signed-off-by: Matt Roper
Reviewed-by(v5): Maarten Lankhorst
---
 drivers/gpu/drm/i915/i915_drv.h      |   6 +-
 drivers/gpu/drm/i915/intel_atomic.c  |   1 +
 drivers/gpu/drm/i915/intel_display.c |  90 +++++++++++++++++++-
 drivers/gpu/drm/i915/intel_drv.h     |  31 ++++++-
 drivers/gpu/drm/i915/intel_pm.c      | 160 ++++++++++++++++++++++++-----------
 5 files changed, 232 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 09807c8..e9a07d8 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -629,7 +629,11 @@ struct drm_i915_display_funcs {
 			  struct dpll *best_clock);
 	int (*compute_pipe_wm)(struct intel_crtc *crtc,
 			       struct drm_atomic_state *state);
-	void (*program_watermarks)(struct intel_crtc_state *cstate);
+	int (*compute_intermediate_wm)(struct drm_device *dev,
+				       struct intel_crtc *intel_crtc,
+				       struct intel_crtc_state *newstate);
+	void (*initial_watermarks)(struct intel_crtc_state *cstate);
+	void (*optimize_watermarks)(struct intel_crtc_state *cstate);
 	void (*update_wm)(struct drm_crtc *crtc);
 	int (*modeset_calc_cdclk)(struct drm_atomic_state *state);
 	void (*modeset_commit_cdclk)(struct drm_atomic_state *state);
diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
index 643f342..b91e166 100644
--- a/drivers/gpu/drm/i915/intel_atomic.c
+++ b/drivers/gpu/drm/i915/intel_atomic.c
@@ -95,6 +95,7 @@ intel_crtc_duplicate_state(struct drm_crtc *crtc)
 
 	crtc_state->update_pipe = false;
 	crtc_state->disable_lp_wm = false;
+	crtc_state->wm.need_postvbl_update = false;
 
 	return &crtc_state->base;
 }
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index e289311..29ad378 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11659,6 +11659,12 @@ int intel_plane_atomic_calc_changes(struct drm_crtc_state *crtc_state,
 			intel_crtc->atomic.update_wm_pre = true;
 	}
 
+	/* Pre-gen9 platforms need two-step watermark updates */
+	if ((intel_crtc->atomic.update_wm_pre || intel_crtc->atomic.update_wm_post) &&
+	    INTEL_INFO(dev)->gen < 9 &&
+	    dev_priv->display.optimize_watermarks)
+		to_intel_crtc_state(crtc_state)->wm.need_postvbl_update = true;
+
 	if (visible || was_visible)
 		intel_crtc->atomic.fb_bits |=
 			to_intel_plane(plane)->frontbuffer_bit;
@@ -11815,8 +11821,29 @@ static int intel_crtc_atomic_check(struct drm_crtc *crtc,
 	ret = 0;
 	if (dev_priv->display.compute_pipe_wm) {
 		ret = dev_priv->display.compute_pipe_wm(intel_crtc, state);
-		if (ret)
+		if (ret) {
+			DRM_DEBUG_KMS("Target pipe watermarks are invalid\n");
 			return ret;
+		}
+	}
+
+	if (dev_priv->display.compute_intermediate_wm &&
+	    !to_intel_atomic_state(state)->skip_intermediate_wm) {
+		if (WARN_ON(!dev_priv->display.compute_pipe_wm))
+			return 0;
+
+		/*
+		 * Calculate 'intermediate' watermarks that satisfy both the
+		 * old state and the new state. We can program these
+		 * immediately.
+		 */
+		ret = dev_priv->display.compute_intermediate_wm(crtc->dev,
+								intel_crtc,
+								pipe_config);
+		if (ret) {
+			DRM_DEBUG_KMS("No valid intermediate pipe watermarks are possible\n");
+			return ret;
+		}
 	}
 
 	if (INTEL_INFO(dev)->gen >= 9) {
@@ -13247,6 +13274,7 @@ static int intel_atomic_commit(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_crtc_state *crtc_state;
 	struct drm_crtc *crtc;
+	struct intel_crtc_state *intel_cstate;
 	int ret = 0;
 	int i;
 	bool any_ms = false;
@@ -13325,6 +13353,20 @@ static int intel_atomic_commit(struct drm_device *dev,
 
 	drm_atomic_helper_wait_for_vblanks(dev, state);
 
+	/*
+	 * Now that the vblank has passed, we can go ahead and program the
+	 * optimal watermarks on platforms that need two-step watermark
+	 * programming.
+	 *
+	 * TODO: Move this (and other cleanup) to an async worker eventually.
+	 */
+	for_each_crtc_in_state(state, crtc, crtc_state, i) {
+		intel_cstate = to_intel_crtc_state(crtc->state);
+
+		if (dev_priv->display.optimize_watermarks)
+			dev_priv->display.optimize_watermarks(intel_cstate);
+	}
+
 	mutex_lock(&dev->struct_mutex);
 	drm_atomic_helper_cleanup_planes(dev, state);
 	mutex_unlock(&dev->struct_mutex);
@@ -13687,13 +13729,44 @@ static void intel_begin_crtc_commit(struct drm_crtc *crtc,
 				    struct drm_crtc_state *old_crtc_state)
 {
 	struct drm_device *dev = crtc->dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	struct intel_crtc_state *cstate = to_intel_crtc_state(crtc->state);
 	struct intel_crtc_state *old_intel_state =
 		to_intel_crtc_state(old_crtc_state);
 	bool modeset = needs_modeset(crtc->state);
 
-	if (intel_crtc->atomic.update_wm_pre)
+	/*
+	 * IVB workaround: must disable low power watermarks for at least
+	 * one frame before enabling scaling. LP watermarks can be re-enabled
+	 * when scaling is disabled.
+	 *
+	 * WaCxSRDisabledForSpriteScaling:ivb
+	 */
+	if (cstate->disable_lp_wm) {
+		ilk_disable_lp_wm(crtc->dev);
+		intel_wait_for_vblank(crtc->dev, intel_crtc->pipe);
+	}
+
+	/*
+	 * For platforms that support atomic watermarks, program the
+	 * 'intermediate' watermarks immediately. On pre-gen9 platforms, these
+	 * will be the intermediate values that are safe for both pre- and
+	 * post- vblank; when vblank happens, the 'active' values will be set
+	 * to the final 'target' values and we'll do this again to get the
+	 * optimal watermarks. For gen9+ platforms, the values we program here
+	 * will be the final target values which will get automatically latched
+	 * at vblank time; no further programming will be necessary.
+	 *
+	 * If a platform hasn't been transitioned to atomic watermarks yet,
+	 * we'll continue to update watermarks the old way, if flags tell
+	 * us to.
+	 */
+	if (dev_priv->display.initial_watermarks != NULL) {
+		dev_priv->display.initial_watermarks(cstate);
+	} else if (intel_crtc->atomic.update_wm_pre) {
 		intel_update_watermarks(crtc);
+	}
 
 	/* Perform vblank evasion around commit operation */
 	intel_pipe_update_start(intel_crtc);
@@ -14032,6 +14105,7 @@ static void intel_crtc_init(struct drm_device *dev, int pipe)
 	intel_crtc->cursor_size = ~0;
 
 	intel_crtc->wm.cxsr_allowed = true;
+	mutex_init(&intel_crtc->wm.wm_mutex);
 
 	BUG_ON(pipe >= ARRAY_SIZE(dev_priv->plane_to_crtc_mapping) ||
 	       dev_priv->plane_to_crtc_mapping[intel_crtc->plane] != NULL);
@@ -14951,7 +15025,7 @@ static void sanitize_watermarks(struct drm_device *dev)
 	int i;
 
 	/* Only supported on platforms that use atomic watermark design */
-	if (!dev_priv->display.program_watermarks)
+	if (!dev_priv->display.optimize_watermarks)
 		return;
 
 	/*
@@ -14963,6 +15037,13 @@ static void sanitize_watermarks(struct drm_device *dev)
 	if (WARN_ON(IS_ERR(state)))
 		return;
 
+	/*
+	 * Hardware readout is the only time we don't want to calculate
+	 * intermediate watermarks (since we don't trust the current
+	 * watermarks).
+	 */
+	to_intel_atomic_state(state)->skip_intermediate_wm = true;
+
 	ret = intel_atomic_check(dev, state);
 	if (ret) {
 		/*
@@ -14978,7 +15059,8 @@ static void sanitize_watermarks(struct drm_device *dev)
 	for_each_crtc_in_state(state, crtc, cstate, i) {
 		struct intel_crtc_state *cs = to_intel_crtc_state(cstate);
 
-		dev_priv->display.program_watermarks(cs);
+		cs->wm.need_postvbl_update = true;
+		dev_priv->display.optimize_watermarks(cs);
 	}
 
 	drm_atomic_state_free(state);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index d1a6071..8392d57 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -251,6 +251,12 @@ struct intel_atomic_state {
 	bool dpll_set;
 	struct intel_shared_dpll_config shared_dpll[I915_NUM_PLLS];
 	struct intel_wm_config wm_config;
+
+	/*
+	 * Current watermarks can't be trusted during hardware readout, so
+	 * don't bother calculating intermediate watermarks.
+	 */
+	bool skip_intermediate_wm;
 };
 
 struct intel_plane_state {
@@ -493,13 +499,29 @@ struct intel_crtc_state {
 
 	struct {
 		/*
-		 * optimal watermarks, programmed post-vblank when this state
-		 * is committed
+		 * Optimal watermarks, programmed post-vblank when this state
+		 * is committed.
 		 */
 		union {
 			struct intel_pipe_wm ilk;
 			struct skl_pipe_wm skl;
 		} optimal;
+
+		/*
+		 * Intermediate watermarks; these can be programmed immediately
+		 * since they satisfy both the current configuration we're
+		 * switching away from and the new configuration we're switching
+		 * to.
+		 */
+		struct intel_pipe_wm intermediate;
+
+		/*
+		 * Platforms with two-step watermark programming will need to
+		 * update watermark programming post-vblank to switch from the
+		 * safe intermediate watermarks to the optimal final
+		 * watermarks.
+		 */
+		bool need_postvbl_update;
 	} wm;
 };
 
@@ -589,8 +611,12 @@ struct intel_crtc {
 			struct intel_pipe_wm ilk;
 			struct skl_pipe_wm skl;
 		} active;
+		/* allow CxSR on this pipe */
 
 		bool cxsr_allowed;
+
+		/* Protects active and need_postvbl_update */
+		struct mutex wm_mutex;
 	} wm;
 
 	int scanline_offset;
@@ -1439,6 +1465,7 @@ void skl_wm_get_hw_state(struct drm_device *dev);
 void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv,
 			  struct skl_ddb_allocation *ddb /* out */);
 uint32_t ilk_pipe_pixel_rate(const struct intel_crtc_state *pipe_config);
+bool ilk_disable_lp_wm(struct drm_device *dev);
 
 /* intel_sdvo.c */
 bool intel_sdvo_init(struct drm_device *dev, uint32_t sdvo_reg, bool is_sdvob);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index fbcb072..ac4cddb 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -2265,6 +2265,29 @@ static void skl_setup_wm_latency(struct drm_device *dev)
 	intel_print_wm_latency(dev, "Gen9 Plane", dev_priv->wm.skl_latency);
 }
 
+static bool ilk_validate_pipe_wm(struct drm_device *dev,
+				 struct intel_pipe_wm *pipe_wm)
+{
+	/* LP0 watermark maximums depend on this pipe alone */
+	const struct intel_wm_config config = {
+		.num_pipes_active = 1,
+		.sprites_enabled = pipe_wm->sprites_enabled,
+		.sprites_scaled = pipe_wm->sprites_scaled,
+	};
+	struct ilk_wm_maximums max;
+
+	/* LP0 watermarks always use 1/2 DDB partitioning */
+	ilk_compute_wm_maximums(dev, 0, &config, INTEL_DDB_PART_1_2, &max);
+
+	/* At least LP0 must be valid */
+	if (!ilk_validate_wm_level(0, &max, &pipe_wm->wm[0])) {
+		DRM_DEBUG_KMS("LP0 watermark invalid\n");
+		return false;
+	}
+
+	return true;
+}
+
 /* Compute new watermarks for the pipe */
 static int ilk_compute_pipe_wm(struct intel_crtc *intel_crtc,
 			       struct drm_atomic_state *state)
@@ -2279,10 +2302,6 @@ static int ilk_compute_pipe_wm(struct intel_crtc *intel_crtc,
 	struct intel_plane_state *sprstate = NULL;
 	struct intel_plane_state *curstate = NULL;
 	int level, max_level = ilk_wm_max_level(dev);
-	/* LP0 watermark maximums depend on this pipe alone */
-	struct intel_wm_config config = {
-		.num_pipes_active = 1,
-	};
 	struct ilk_wm_maximums max;
 
 	cstate = intel_atomic_get_crtc_state(state, intel_crtc);
@@ -2305,21 +2324,18 @@ static int ilk_compute_pipe_wm(struct intel_crtc *intel_crtc,
 		curstate = to_intel_plane_state(ps);
 	}
 
-	config.sprites_enabled = sprstate->visible;
-	config.sprites_scaled = sprstate->visible &&
+	pipe_wm->pipe_enabled = cstate->base.active;
+	pipe_wm->sprites_enabled = sprstate->visible;
+	pipe_wm->sprites_scaled = sprstate->visible &&
 		(drm_rect_width(&sprstate->dst) != drm_rect_width(&sprstate->src) >> 16 ||
 		 drm_rect_height(&sprstate->dst) != drm_rect_height(&sprstate->src) >> 16);
 
-	pipe_wm->pipe_enabled = cstate->base.active;
-	pipe_wm->sprites_enabled = config.sprites_enabled;
-	pipe_wm->sprites_scaled = config.sprites_scaled;
-
 	/* ILK/SNB: LP2+ watermarks only w/o sprites */
 	if (INTEL_INFO(dev)->gen <= 6 && sprstate->visible)
 		max_level = 1;
 
 	/* ILK/SNB/IVB: LP1+ watermarks only w/o scaling */
-	if (config.sprites_scaled)
+	if (pipe_wm->sprites_scaled)
 		max_level = 0;
 
 	ilk_compute_wm_level(dev_priv, intel_crtc, 0, cstate,
@@ -2328,12 +2344,8 @@ static int ilk_compute_pipe_wm(struct intel_crtc *intel_crtc,
 	if (IS_HASWELL(dev) || IS_BROADWELL(dev))
 		pipe_wm->linetime = hsw_compute_linetime_wm(dev, cstate);
 
-	/* LP0 watermarks always use 1/2 DDB partitioning */
-	ilk_compute_wm_maximums(dev, 0, &config, INTEL_DDB_PART_1_2, &max);
-
-	/* At least LP0 must be valid */
-	if (!ilk_validate_wm_level(0, &max, &pipe_wm->wm[0]))
-		return -EINVAL;
+	if (!ilk_validate_pipe_wm(dev, pipe_wm))
+		return -EINVAL;
 
 	ilk_compute_wm_reg_maximums(dev, 1, &max);
 
@@ -2358,6 +2370,59 @@ static int ilk_compute_pipe_wm(struct intel_crtc *intel_crtc,
 }
 
 /*
+ * Build a set of 'intermediate' watermark values that satisfy both the old
+ * state and the new state. These can be programmed to the hardware
+ * immediately.
+ */
+static int ilk_compute_intermediate_wm(struct drm_device *dev,
+				       struct intel_crtc *intel_crtc,
+				       struct intel_crtc_state *newstate)
+{
+	struct intel_pipe_wm *a = &newstate->wm.intermediate;
+	struct intel_pipe_wm *b = &intel_crtc->wm.active.ilk;
+	int level, max_level = ilk_wm_max_level(dev);
+
+	/*
+	 * Start with the final, target watermarks, then combine with the
+	 * currently active watermarks to get values that are safe both before
+	 * and after the vblank.
+	 */
+	*a = newstate->wm.optimal.ilk;
+	a->pipe_enabled |= b->pipe_enabled;
+	a->sprites_enabled |= b->sprites_enabled;
+	a->sprites_scaled |= b->sprites_scaled;
+
+	for (level = 0; level <= max_level; level++) {
+		struct intel_wm_level *a_wm = &a->wm[level];
+		const struct intel_wm_level *b_wm = &b->wm[level];
+
+		a_wm->enable &= b_wm->enable;
+		a_wm->pri_val = max(a_wm->pri_val, b_wm->pri_val);
+		a_wm->spr_val = max(a_wm->spr_val, b_wm->spr_val);
+		a_wm->cur_val = max(a_wm->cur_val, b_wm->cur_val);
+		a_wm->fbc_val = max(a_wm->fbc_val, b_wm->fbc_val);
+	}
+
+	/*
+	 * We need to make sure that these merged watermark values are
+	 * actually a valid configuration themselves. If they're not,
+	 * there's no safe way to transition from the old state to
+	 * the new state, so we need to fail the atomic transaction.
+	 */
+	if (!ilk_validate_pipe_wm(dev, a))
+		return -EINVAL;
+
+	/*
+	 * If our intermediate WM are identical to the final WM, then we can
+	 * omit the post-vblank programming; only update if it's different.
+	 */
+	if (memcmp(a, &newstate->wm.optimal.ilk, sizeof *a) == 0)
+		newstate->wm.need_postvbl_update = false;
+
+	return 0;
+}
+
+/*
 * Merge the watermarks from all active pipes for a specific level.
  */
 static void ilk_merge_wm_level(struct drm_device *dev,
@@ -2369,9 +2434,7 @@
 	ret_wm->enable = true;
 
 	for_each_intel_crtc(dev, intel_crtc) {
-		const struct intel_crtc_state *cstate =
-			to_intel_crtc_state(intel_crtc->base.state);
-		const struct intel_pipe_wm *active = &cstate->wm.optimal.ilk;
+		const struct intel_pipe_wm *active = &intel_crtc->wm.active.ilk;
 		const struct intel_wm_level *wm = &active->wm[level];
 
 		if (!active->pipe_enabled)
@@ -2519,15 +2582,13 @@ static void ilk_compute_wm_results(struct drm_device *dev,
 
 	/* LP0 register values */
 	for_each_intel_crtc(dev, intel_crtc) {
-		const struct intel_crtc_state *cstate =
-			to_intel_crtc_state(intel_crtc->base.state);
 		enum pipe pipe = intel_crtc->pipe;
-		const struct intel_wm_level *r = &cstate->wm.optimal.ilk.wm[0];
+		const struct intel_wm_level *r = &intel_crtc->wm.active.ilk.wm[0];
 
 		if (WARN_ON(!r->enable))
 			continue;
 
-		results->wm_linetime[pipe] = cstate->wm.optimal.ilk.linetime;
+		results->wm_linetime[pipe] = intel_crtc->wm.active.ilk.linetime;
 
 		results->wm_pipe[pipe] =
 			(r->pri_val << WM0_PIPE_PLANE_SHIFT) |
@@ -2734,7 +2795,7 @@ static void ilk_write_wm_values(struct drm_i915_private *dev_priv,
 	dev_priv->wm.hw = *results;
 }
 
-static bool ilk_disable_lp_wm(struct drm_device *dev)
+bool ilk_disable_lp_wm(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
@@ -3611,19 +3672,15 @@ static void skl_update_wm(struct drm_crtc *crtc)
 	dev_priv->wm.skl_hw = *results;
 }
 
-static void ilk_program_watermarks(struct intel_crtc_state *cstate)
+static void ilk_program_watermarks(struct drm_i915_private *dev_priv)
 {
-	struct drm_crtc *crtc = cstate->base.crtc;
-	struct drm_device *dev = crtc->dev;
-	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_device *dev = dev_priv->dev;
 	struct intel_pipe_wm lp_wm_1_2 = {}, lp_wm_5_6 = {}, *best_lp_wm;
 	struct ilk_wm_maximums max;
 	struct intel_wm_config *config = &dev_priv->wm.config;
 	struct ilk_wm_values results = {};
 	enum intel_ddb_partitioning partitioning;
 
-	to_intel_crtc(crtc)->wm.active.ilk = cstate->wm.optimal.ilk;
-
 	ilk_compute_wm_maximums(dev, 1, config, INTEL_DDB_PART_1_2, &max);
 
 	ilk_wm_merge(dev, config, &max, &lp_wm_1_2);
@@ -3646,26 +3703,29 @@ static void ilk_program_watermarks(struct intel_crtc_state *cstate)
 	ilk_write_wm_values(dev_priv, &results);
 }
 
-static void ilk_update_wm(struct drm_crtc *crtc)
+static void ilk_initial_watermarks(struct intel_crtc_state *cstate)
 {
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	struct intel_crtc_state *cstate = to_intel_crtc_state(crtc->state);
+	struct drm_i915_private *dev_priv = to_i915(cstate->base.crtc->dev);
+	struct intel_crtc *intel_crtc = to_intel_crtc(cstate->base.crtc);
 
-	WARN_ON(cstate->base.active != intel_crtc->active);
+	mutex_lock(&intel_crtc->wm.wm_mutex);
+	intel_crtc->wm.active.ilk = cstate->wm.intermediate;
+	ilk_program_watermarks(dev_priv);
+	mutex_unlock(&intel_crtc->wm.wm_mutex);
+}
 
-	/*
-	 * IVB workaround: must disable low power watermarks for at least
-	 * one frame before enabling scaling. LP watermarks can be re-enabled
-	 * when scaling is disabled.
-	 *
-	 * WaCxSRDisabledForSpriteScaling:ivb
-	 */
-	if (cstate->disable_lp_wm) {
-		ilk_disable_lp_wm(crtc->dev);
-		intel_wait_for_vblank(crtc->dev, intel_crtc->pipe);
-	}
+static void ilk_optimize_watermarks(struct intel_crtc_state *cstate)
+{
+	struct drm_i915_private *dev_priv = to_i915(cstate->base.crtc->dev);
+	struct intel_crtc *intel_crtc = to_intel_crtc(cstate->base.crtc);
 
-	ilk_program_watermarks(cstate);
+	mutex_lock(&intel_crtc->wm.wm_mutex);
+	if (cstate->wm.need_postvbl_update) {
+		intel_crtc->wm.active.ilk = cstate->wm.optimal.ilk;
+		ilk_program_watermarks(dev_priv);
+		cstate->wm.need_postvbl_update = false;
+	}
+	mutex_unlock(&intel_crtc->wm.wm_mutex);
 }
 
 static void skl_pipe_wm_active_state(uint32_t val,
@@ -6971,9 +7031,11 @@ void intel_init_pm(struct drm_device *dev)
 		    dev_priv->wm.spr_latency[1] && dev_priv->wm.cur_latency[1]) ||
 		    (!IS_GEN5(dev) && dev_priv->wm.pri_latency[0] &&
 		     dev_priv->wm.spr_latency[0] && dev_priv->wm.cur_latency[0])) {
-			dev_priv->display.update_wm = ilk_update_wm;
 			dev_priv->display.compute_pipe_wm = ilk_compute_pipe_wm;
-			dev_priv->display.program_watermarks = ilk_program_watermarks;
+			dev_priv->display.compute_intermediate_wm =
+				ilk_compute_intermediate_wm;
+			dev_priv->display.initial_watermarks = ilk_initial_watermarks;
+			dev_priv->display.optimize_watermarks = ilk_optimize_watermarks;
 		} else {
 			DRM_DEBUG_KMS("Failed to read display plane latency. "
 				      "Disable CxSR\n");
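
As an illustrative aside (not part of the patch): the heart of the two-stage
scheme is a per-level, per-field worst-case merge of the currently active
watermarks and the target watermarks, which is validated and programmed
before the vblank, with the optimal values written only afterwards. The
standalone sketch below mirrors that merge logic using simplified,
hypothetical types (pipe_wm, compute_intermediate are stand-ins, not the
driver's real structures), just to show the idea in isolation.

/*
 * Illustrative sketch only: a simplified model of the two-stage watermark
 * update. Types and values here are hypothetical stand-ins, not i915's.
 */
#include <stdbool.h>
#include <stdio.h>

#define WM_LEVELS 3

struct pipe_wm {
	bool enable[WM_LEVELS];
	unsigned int pri_val[WM_LEVELS];	/* primary plane watermark */
};

static unsigned int max_u(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

/* Worst case of the currently active and the target watermarks. */
static struct pipe_wm compute_intermediate(const struct pipe_wm *active,
					   const struct pipe_wm *target)
{
	struct pipe_wm mid = *target;
	int level;

	for (level = 0; level < WM_LEVELS; level++) {
		/* a level stays enabled only if both states allow it */
		mid.enable[level] = active->enable[level] && target->enable[level];
		/* keep the larger (more conservative) watermark value */
		mid.pri_val[level] = max_u(active->pri_val[level],
					   target->pri_val[level]);
	}
	return mid;
}

int main(void)
{
	struct pipe_wm active = { { true, true, false }, { 4, 8, 0 } };
	struct pipe_wm target = { { true, true, true },  { 6, 5, 9 } };

	/* Programmed at commit time, before the vblank: safe for both states. */
	struct pipe_wm intermediate = compute_intermediate(&active, &target);
	int level;

	for (level = 0; level < WM_LEVELS; level++)
		printf("LP%d: intermediate=%u (enabled=%d), optimal=%u\n",
		       level, intermediate.pri_val[level],
		       intermediate.enable[level], target.pri_val[level]);

	/* After the vblank, the optimal (target) values take over. */
	active = target;
	return 0;
}

Taking the per-field maximum corresponds to the patch's use of max() in
ilk_compute_intermediate_wm: the merged configuration is at least as
conservative as either endpoint, so it can be programmed immediately at
commit time, and the optimal values are only written once the vblank has
passed.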