From patchwork Wed May 9 07:13:21 2018
X-Patchwork-Submitter: Jani Nikula
X-Patchwork-Id: 10388241
From: Jani Nikula
To: intel-gfx@lists.freedesktop.org
Cc: jani.nikula@intel.com, Rodrigo Vivi
Date: Wed, 9 May 2018 10:13:21 +0300
Message-Id: <20180509071321.28563-1-jani.nikula@intel.com>
Subject: [Intel-gfx] [RFC] drm/i915/dp: optimize eDP 1.4+ link config fast and narrow

We've opted to use the maximum link rate and lane count for eDP panels,
because typically the maximum supported configuration reported by the
panel has matched the native resolution requirements of the panel, and
optimizing the link has led to problems. With the eDP 1.4 rate select
method and DSC features, this is decreasingly the case. There's a need
to optimize the link parameters. Moreover, eDP 1.3 already states that a
fast link with fewer lanes is preferred over a wide and slow one. (Wide
and slow should still be more reliable for longer cable lengths.)
Additionally, there have been reports of panels failing on arbitrary
link configurations, although arguably all the configurations they claim
to support should work.

Optimize eDP 1.4+ link config fast and narrow.

Side note: The implementation has a near duplicate of the link config
function, with just the two inner for loops turned inside out. Perhaps
there'd be a way to make this, say, more table driven to reduce the
duplication, but it seems like that would just lead to duplication in
the table generation. We'll also have to see how the link config
optimization for DSC turns out.

Cc: Ville Syrjälä
Cc: Manasi Navare
Cc: Rodrigo Vivi
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105267
Signed-off-by: Jani Nikula
Acked-by: Rodrigo Vivi
---
Untested. It's possible this helps the referenced bug. The downside is
that this patch has a bunch of dependencies that are too much to
backport to stable kernels. If the patch works, we may need to consider
hacking together an uglier backport.
---
 drivers/gpu/drm/i915/intel_dp.c | 73 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 62 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index dde92e4af5d3..1ec62965ece3 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1768,6 +1768,42 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 	return false;
 }
 
+/* Optimize link config in order: max bpp, min lanes, min clock */
+static bool
+intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
+				  struct intel_crtc_state *pipe_config,
+				  const struct link_config_limits *limits)
+{
+	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+	int bpp, clock, lane_count;
+	int mode_rate, link_clock, link_avail;
+
+	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+						   bpp);
+
+		for (lane_count = limits->min_lane_count;
+		     lane_count <= limits->max_lane_count;
+		     lane_count <<= 1) {
+			for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
+				link_clock = intel_dp->common_rates[clock];
+				link_avail = intel_dp_max_data_rate(link_clock,
+								    lane_count);
+
+				if (mode_rate <= link_avail) {
+					pipe_config->lane_count = lane_count;
+					pipe_config->pipe_bpp = bpp;
+					pipe_config->port_clock = link_clock;
+
+					return true;
+				}
+			}
+		}
+	}
+
+	return false;
+}
+
 static bool
 intel_dp_compute_link_config(struct intel_encoder *encoder,
 			     struct intel_crtc_state *pipe_config)
@@ -1792,13 +1828,15 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 	limits.min_bpp = 6 * 3;
 	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
 
-	if (intel_dp_is_edp(intel_dp)) {
+	if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
 		/*
 		 * Use the maximum clock and number of lanes the eDP panel
-		 * advertizes being capable of. The panels are generally
-		 * designed to support only a single clock and lane
-		 * configuration, and typically these values correspond to the
-		 * native resolution of the panel.
+		 * advertizes being capable of. The eDP 1.3 and earlier panels
+		 * are generally designed to support only a single clock and
+		 * lane configuration, and typically these values correspond to
+		 * the native resolution of the panel. With eDP 1.4 rate select
+		 * and DSC, this is decreasingly the case, and we need to be
+		 * able to select less than maximum link config.
		 */
 		limits.min_lane_count = limits.max_lane_count;
 		limits.min_clock = limits.max_clock;
@@ -1812,12 +1850,25 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 		      intel_dp->common_rates[limits.max_clock],
 		      limits.max_bpp, adjusted_mode->crtc_clock);
 
-	/*
-	 * Optimize for slow and wide. This is the place to add alternative
-	 * optimization policy.
-	 */
-	if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits))
-		return false;
+	if (intel_dp_is_edp(intel_dp)) {
+		/*
+		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
+		 * section A.1: "It is recommended that the minimum number of
+		 * lanes be used, using the minimum link rate allowed for that
+		 * lane configuration."
+		 *
+		 * Note that we use the max clock and lane count for eDP 1.3 and
+		 * earlier, and fast vs. wide is irrelevant.
+		 */
+		if (!intel_dp_compute_link_config_fast(intel_dp, pipe_config,
+						       &limits))
+			return false;
+	} else {
+		/* Optimize for slow and wide. */
+		if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config,
+						       &limits))
+			return false;
+	}
 
 	DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
 		      pipe_config->lane_count, pipe_config->port_clock,
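
For illustration, here is a standalone sketch (not i915 code, and not
part of the patch) contrasting the two search orders. The link rate
table, lane counts, bpp range, and sample mode are made-up values; only
the loop ordering and the bandwidth arithmetic follow the
intel_dp_link_required() and intel_dp_max_data_rate() helpers used
above (pixel clock in kHz times bpp with bit-to-byte conversion, and
link rate times lanes with 8b/10b coding overhead):

#include <stdio.h>

static const int rates[] = /* kHz; illustrative mix of eDP 1.4 rates */
	{ 162000, 216000, 270000, 324000, 432000, 540000 };
#define NUM_RATES ((int)(sizeof(rates) / sizeof(rates[0])))

/* Data rate the mode needs: pixel clock (kHz) * bpp / 8, rounded up. */
static int link_required(int pixel_clock, int bpp)
{
	return (pixel_clock * bpp + 7) / 8;
}

/* Link capacity after 8b/10b coding: rate (kHz) * lanes * 8 / 10. */
static int max_data_rate(int link_clock, int lane_count)
{
	return link_clock * lane_count * 8 / 10;
}

/* Slow and wide: max bpp first, then min clock, then min lane count. */
static int config_wide(int pixel_clock, int *bpp, int *clock, int *lanes)
{
	int b, c, l;

	for (b = 30; b >= 18; b -= 2 * 3)
		for (c = 0; c < NUM_RATES; c++)
			for (l = 1; l <= 4; l <<= 1)
				if (link_required(pixel_clock, b) <=
				    max_data_rate(rates[c], l)) {
					*bpp = b;
					*clock = rates[c];
					*lanes = l;
					return 1;
				}
	return 0;
}

/* Fast and narrow: max bpp first, then min lane count, then min clock. */
static int config_fast(int pixel_clock, int *bpp, int *clock, int *lanes)
{
	int b, c, l;

	for (b = 30; b >= 18; b -= 2 * 3)
		for (l = 1; l <= 4; l <<= 1)
			for (c = 0; c < NUM_RATES; c++)
				if (link_required(pixel_clock, b) <=
				    max_data_rate(rates[c], l)) {
					*bpp = b;
					*clock = rates[c];
					*lanes = l;
					return 1;
				}
	return 0;
}

int main(void)
{
	int pixel_clock = 138500; /* kHz, a common 1920x1080@60 eDP mode */
	int bpp, clock, lanes;

	if (config_wide(pixel_clock, &bpp, &clock, &lanes))
		printf("wide: %d bpp, %d kHz x %d lanes\n", bpp, clock, lanes);
	if (config_fast(pixel_clock, &bpp, &clock, &lanes))
		printf("fast: %d bpp, %d kHz x %d lanes\n", bpp, clock, lanes);

	return 0;
}

For this sample mode both policies keep 30 bpp, but the wide policy
settles on 216000 kHz x 4 lanes while the fast policy picks 432000 kHz
x 2 lanes, which is the kind of behavioral difference the patch
introduces for eDP 1.4+ panels.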