From patchwork Thu Apr 6 13:02:14 2023
X-Patchwork-Submitter: "Kahola, Mika"
X-Patchwork-Id: 13203354
From: Mika Kahola
To: intel-gfx@lists.freedesktop.org
Cc: Lucas De Marchi
Date: Thu, 6 Apr 2023 16:02:14 +0300
Message-Id: <20230406130221.2998457-2-mika.kahola@intel.com>
In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com>
References: <20230406130221.2998457-1-mika.kahola@intel.com>
Subject: [Intel-gfx] [PATCH v3 1/8] drm/i915/mtl: Initial DDI port setup

From: Clint Taylor

The initialization sequences and the C10 phy are in place, which is enough
to enable the first two ports of MTL. The remaining ports use a C20 phy
that still needs to be properly added. Enable the first ports for now,
keeping a TODO comment about the others.
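For context on the C10/C20 split mentioned above: patch 4 of this series adds an intel_is_c10phy() helper which, on MTL, treats only the first two phys (A and B) as C10; the phys behind the Type-C ports are C20. A minimal standalone sketch of that split follows; is_c10_phy() and the enum values here are illustrative stand-ins, not the driver's real declarations.

#include <stdbool.h>
#include <stdio.h>

enum phy { PHY_A, PHY_B, PHY_C, PHY_D, PHY_E };

/* Stand-in for the intel_is_c10phy() check added later in this series:
 * on MTL only phys A and B are C10, the rest are C20. */
static bool is_c10_phy(enum phy phy)
{
        return phy < PHY_C;
}

int main(void)
{
        for (int phy = PHY_A; phy <= PHY_E; phy++)
                printf("PHY %c: %s\n", 'A' + phy,
                       is_c10_phy(phy) ? "C10 (handled by this series)"
                                       : "C20 (still TODO)");
        return 0;
}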
Cc: Radhakrishna Sripada
Reviewed-by: Lucas De Marchi
Signed-off-by: Clint Taylor
---
 drivers/gpu/drm/i915/display/intel_display.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 010ee793e1ff..fa7807d04ac2 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -7729,7 +7729,11 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv)
 	if (!HAS_DISPLAY(dev_priv))
 		return;
 
-	if (IS_DG2(dev_priv)) {
+	if (IS_METEORLAKE(dev_priv)) {
+		/* TODO: initialize TC ports as well */
+		intel_ddi_init(dev_priv, PORT_A);
+		intel_ddi_init(dev_priv, PORT_B);
+	} else if (IS_DG2(dev_priv)) {
 		intel_ddi_init(dev_priv, PORT_A);
 		intel_ddi_init(dev_priv, PORT_B);
 		intel_ddi_init(dev_priv, PORT_C);
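The TODO in the hunk above is about coverage, not mechanism: the Type-C ports would go through the same intel_ddi_init() path once their C20 phy programming exists. Purely as an illustration of where that TODO points (this is not part of the series, and real Type-C bring-up needs more than these calls), the branch might eventually grow along these lines:

	if (IS_METEORLAKE(dev_priv)) {
		intel_ddi_init(dev_priv, PORT_A);
		intel_ddi_init(dev_priv, PORT_B);
		/* Type-C ports, once C20 phy support is in place: */
		intel_ddi_init(dev_priv, PORT_TC1);
		intel_ddi_init(dev_priv, PORT_TC2);
		intel_ddi_init(dev_priv, PORT_TC3);
		intel_ddi_init(dev_priv, PORT_TC4);
	}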
From patchwork Thu Apr 6 13:02:15 2023
X-Patchwork-Submitter: "Kahola, Mika"
X-Patchwork-Id: 13203359
From: Mika Kahola
To: intel-gfx@lists.freedesktop.org
Date: Thu, 6 Apr 2023 16:02:15 +0300
Message-Id: <20230406130221.2998457-3-mika.kahola@intel.com>
In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com>
References: <20230406130221.2998457-1-mika.kahola@intel.com>
Subject: [Intel-gfx] [PATCH v3 2/8] drm/i915/mtl: Add DP rates

Add DP rates for Meteorlake.

Reviewed-by: Vinod Govindapillai
Signed-off-by: Radhakrishna Sripada
Signed-off-by: Mika Kahola
---
 drivers/gpu/drm/i915/display/intel_dp.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index f0bace9d98a1..3367ae98c2bd 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -421,6 +421,11 @@ static int ehl_max_source_rate(struct intel_dp *intel_dp)
 	return 810000;
 }
 
+static int mtl_max_source_rate(struct intel_dp *intel_dp)
+{
+	return intel_dp_is_edp(intel_dp) ? 675000 : 810000;
+}
+
 static int vbt_max_link_rate(struct intel_dp *intel_dp)
 {
 	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
@@ -445,6 +450,10 @@ static void
 intel_dp_set_source_rates(struct intel_dp *intel_dp)
 {
 	/* The values must be in increasing order */
+	static const int mtl_rates[] = {
+		162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000,
+		810000,
+	};
 	static const int icl_rates[] = {
 		162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000,
 		1000000, 1350000,
@@ -470,7 +479,11 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
 	drm_WARN_ON(&dev_priv->drm, intel_dp->source_rates ||
 		    intel_dp->num_source_rates);
 
-	if (DISPLAY_VER(dev_priv) >= 11) {
+	if (DISPLAY_VER(dev_priv) >= 14) {
+		source_rates = mtl_rates;
+		size = ARRAY_SIZE(mtl_rates);
+		max_rate = mtl_max_source_rate(intel_dp);
+	} else if (DISPLAY_VER(dev_priv) >= 11) {
 		source_rates = icl_rates;
 		size = ARRAY_SIZE(icl_rates);
 		if (IS_DG2(dev_priv))
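The new mtl_rates[] table covers the standard DP link rates plus the eDP intermediate rates (216000, 243000, 324000, 432000, 675000), and mtl_max_source_rate() caps eDP at 675000 (6.75 Gbps per lane) while external DP can go up to 810000 (HBR3). A small standalone sketch of the resulting usable-rate sets; the helper that actually trims the table in intel_dp.c is not part of this hunk, so the filtering loop below is only illustrative.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Source rates from the mtl_rates[] table above, in kHz of link clock. */
static const int mtl_rates[] = {
        162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000,
        810000,
};

/* Mirrors mtl_max_source_rate(): eDP tops out at 6.75 Gbps per lane,
 * external DP at 8.1 Gbps (HBR3). */
static int mtl_max_source_rate(bool is_edp)
{
        return is_edp ? 675000 : 810000;
}

int main(void)
{
        const bool edp_cases[] = { true, false };

        for (size_t c = 0; c < 2; c++) {
                int max = mtl_max_source_rate(edp_cases[c]);

                printf("%s, max %d kHz:", edp_cases[c] ? "eDP" : "DP", max);
                /* The driver keeps only entries <= the platform max. */
                for (size_t i = 0; i < sizeof(mtl_rates) / sizeof(mtl_rates[0]); i++)
                        if (mtl_rates[i] <= max)
                                printf(" %d", mtl_rates[i]);
                printf("\n");
        }
        return 0;
}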
From patchwork Thu Apr 6 13:02:16 2023
X-Patchwork-Submitter: "Kahola, Mika"
X-Patchwork-Id: 13203356
From: Mika Kahola
To: intel-gfx@lists.freedesktop.org
Date: Thu, 6 Apr 2023 16:02:16 +0300
Message-Id: <20230406130221.2998457-4-mika.kahola@intel.com>
In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com>
References: <20230406130221.2998457-1-mika.kahola@intel.com>
Subject: [Intel-gfx] [PATCH v3 3/8] drm/i915/mtl: Create separate reg file for PICA registers

Create a separate file to store registers for PICA chips C10 and C20.

v2: Rename file (Jani)
v3: Use _PICK_EVEN_2RANGES() macro (Lucas)
    Coding style fixed (Lucas)
v4: Redefine macros (Imre)

Reviewed-by: Vinod Govindapillai (v3)
Signed-off-by: Radhakrishna Sripada
Signed-off-by: Mika Kahola
---
 .../gpu/drm/i915/display/intel_cx0_phy_regs.h | 133 ++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
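Before the header itself, it may help to see the address layout these definitions encode: the DDI A/B bases are 0x100 apart, the USBC bases 0x200 apart and in a different MMIO range (which is why the real macros use _PICK_EVEN_2RANGES() to pick between the two ranges), each message-bus lane adds 4 bytes, and a lane's P2M status register sits 8 bytes above its M2P control register. A small standalone sketch of that arithmetic, with hypothetical helper names; only the base offsets are taken from the header below.

#include <stdio.h>
#include <stddef.h>

/* Base offsets copied from the new header (lane 0, M2P message bus control). */
#define MSGBUS_CTL_LN0_A     0x64040u
#define MSGBUS_CTL_LN0_B     0x64140u
#define MSGBUS_CTL_LN0_USBC1 0x16F240u
#define MSGBUS_CTL_LN0_USBC2 0x16F440u

/* Hypothetical helpers mirroring the arithmetic the macros imply: 4 bytes
 * per lane, and the P2M status register 8 bytes above the M2P control. */
static unsigned m2p_ctl(unsigned base, int lane)    { return base + 4 * lane; }
static unsigned p2m_status(unsigned base, int lane) { return base + 4 * lane + 8; }

int main(void)
{
        static const struct { const char *name; unsigned base; } ports[] = {
                { "DDI A", MSGBUS_CTL_LN0_A },     { "DDI B", MSGBUS_CTL_LN0_B },
                { "USBC1", MSGBUS_CTL_LN0_USBC1 }, { "USBC2", MSGBUS_CTL_LN0_USBC2 },
        };

        for (size_t i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
                for (int lane = 0; lane < 2; lane++)
                        printf("%s lane %d: M2P_CTL=0x%06X P2M_STATUS=0x%06X\n",
                               ports[i].name, lane,
                               m2p_ctl(ports[i].base, lane),
                               p2m_status(ports[i].base, lane));
        return 0;
}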
diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
new file mode 100644
index 000000000000..27723c1a93d9
--- /dev/null
+++ b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2022 Intel Corporation
+ */
+
+#ifndef __INTEL_CX0_PHY_REGS_H__
+#define __INTEL_CX0_PHY_REGS_H__
+
+#include "i915_reg_defs.h"
+
+#define _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_A 0x64040
+#define _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_B 0x64140
+#define _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_USBC1 0x16F240
+#define _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_USBC2 0x16F440
+#define XELPDP_PORT_M2P_MSGBUS_CTL(port, lane) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_A, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_B, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_USBC1, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_USBC2) + (lane) * 4)
+#define XELPDP_PORT_M2P_TRANSACTION_PENDING REG_BIT(31)
+#define XELPDP_PORT_M2P_COMMAND_TYPE_MASK REG_GENMASK(30, 27)
+#define XELPDP_PORT_M2P_COMMAND_WRITE_UNCOMMITTED REG_FIELD_PREP(XELPDP_PORT_M2P_COMMAND_TYPE_MASK, 0x1)
+#define XELPDP_PORT_M2P_COMMAND_WRITE_COMMITTED REG_FIELD_PREP(XELPDP_PORT_M2P_COMMAND_TYPE_MASK, 0x2)
+#define XELPDP_PORT_M2P_COMMAND_READ REG_FIELD_PREP(XELPDP_PORT_M2P_COMMAND_TYPE_MASK, 0x3)
+#define XELPDP_PORT_M2P_DATA_MASK REG_GENMASK(23, 16)
+#define XELPDP_PORT_M2P_DATA(val) REG_FIELD_PREP(XELPDP_PORT_M2P_DATA_MASK, val)
+#define XELPDP_PORT_M2P_TRANSACTION_RESET REG_BIT(15)
+#define XELPDP_PORT_M2P_ADDRESS_MASK REG_GENMASK(11, 0)
+#define XELPDP_PORT_M2P_ADDRESS(val) REG_FIELD_PREP(XELPDP_PORT_M2P_ADDRESS_MASK, val)
+#define XELPDP_PORT_P2M_MSGBUS_STATUS(port, lane) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_A, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_B, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_USBC1, \
+        _XELPDP_PORT_M2P_MSGBUS_CTL_LN0_USBC2) + (lane) * 4 + 8)
+#define XELPDP_PORT_P2M_RESPONSE_READY REG_BIT(31)
+#define XELPDP_PORT_P2M_COMMAND_TYPE_MASK REG_GENMASK(30, 27)
+#define XELPDP_PORT_P2M_COMMAND_READ_ACK 0x4
+#define XELPDP_PORT_P2M_COMMAND_WRITE_ACK 0x5
+#define XELPDP_PORT_P2M_DATA_MASK REG_GENMASK(23, 16)
+#define XELPDP_PORT_P2M_DATA(val) REG_FIELD_PREP(XELPDP_PORT_P2M_DATA_MASK, val)
+#define XELPDP_PORT_P2M_ERROR_SET REG_BIT(15)
+
+#define XELPDP_MSGBUS_TIMEOUT_SLOW 1
+#define XELPDP_MSGBUS_TIMEOUT_FAST_US 2
+#define XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US 3200
+#define XELPDP_PCLK_PLL_DISABLE_TIMEOUT_US 20
+#define XELPDP_PORT_BUF_SOC_READY_TIMEOUT_US 100
+#define XELPDP_PORT_RESET_START_TIMEOUT_US 5
+#define XELPDP_PORT_POWERDOWN_UPDATE_TIMEOUT_US 100
+#define XELPDP_PORT_RESET_END_TIMEOUT 15
+#define XELPDP_REFCLK_ENABLE_TIMEOUT_US 1
+
+#define _XELPDP_PORT_BUF_CTL1_LN0_A 0x64004
+#define _XELPDP_PORT_BUF_CTL1_LN0_B 0x64104
+#define _XELPDP_PORT_BUF_CTL1_LN0_USBC1 0x16F200
+#define _XELPDP_PORT_BUF_CTL1_LN0_USBC2 0x16F400
+#define XELPDP_PORT_BUF_CTL1(port) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \
+        _XELPDP_PORT_BUF_CTL1_LN0_A, \
+        _XELPDP_PORT_BUF_CTL1_LN0_B, \
+        _XELPDP_PORT_BUF_CTL1_LN0_USBC1, \
+        _XELPDP_PORT_BUF_CTL1_LN0_USBC2))
+#define XELPDP_PORT_BUF_SOC_PHY_READY REG_BIT(24)
+#define XELPDP_PORT_REVERSAL REG_BIT(16)
+#define XELPDP_TC_PHY_OWNERSHIP REG_BIT(6)
+#define XELPDP_TCSS_POWER_REQUEST REG_BIT(5)
+#define XELPDP_TCSS_POWER_STATE REG_BIT(4)
+#define XELPDP_PORT_WIDTH_MASK REG_GENMASK(3, 1)
+#define XELPDP_PORT_WIDTH(val) REG_FIELD_PREP(XELPDP_PORT_WIDTH_MASK, val)
+
+#define XELPDP_PORT_BUF_CTL2(port) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \
+        _XELPDP_PORT_BUF_CTL1_LN0_A, \
+        _XELPDP_PORT_BUF_CTL1_LN0_B, \
+        _XELPDP_PORT_BUF_CTL1_LN0_USBC1, \
+        _XELPDP_PORT_BUF_CTL1_LN0_USBC2) + 4)
+
+#define XELPDP_LANE_PIPE_RESET(lane) _PICK(lane, REG_BIT(31), REG_BIT(30))
+#define XELPDP_LANE_PHY_CURRENT_STATUS(lane) _PICK(lane, REG_BIT(29), REG_BIT(28))
+#define XELPDP_LANE_POWERDOWN_UPDATE(lane) _PICK(lane, REG_BIT(25), REG_BIT(24))
+#define _XELPDP_LANE0_POWERDOWN_NEW_STATE_MASK REG_GENMASK(23, 20)
+#define _XELPDP_LANE0_POWERDOWN_NEW_STATE(val) REG_FIELD_PREP(_XELPDP_LANE0_POWERDOWN_NEW_STATE_MASK, val)
+#define _XELPDP_LANE1_POWERDOWN_NEW_STATE_MASK REG_GENMASK(19, 16)
+#define _XELPDP_LANE1_POWERDOWN_NEW_STATE(val) REG_FIELD_PREP(_XELPDP_LANE1_POWERDOWN_NEW_STATE_MASK, val)
+#define XELPDP_LANE_POWERDOWN_NEW_STATE(lane, val) _PICK(lane, \
+        _XELPDP_LANE0_POWERDOWN_NEW_STATE(val), \
+        _XELPDP_LANE1_POWERDOWN_NEW_STATE(val))
+#define XELPDP_LANE_POWERDOWN_NEW_STATE_MASK REG_GENMASK(3, 0)
+#define XELPDP_POWER_STATE_READY_MASK REG_GENMASK(7, 4)
+#define XELPDP_POWER_STATE_READY(val) REG_FIELD_PREP(XELPDP_POWER_STATE_READY_MASK, val)
+
+#define XELPDP_PORT_BUF_CTL3(port) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \
+        _XELPDP_PORT_BUF_CTL1_LN0_A, \
+        _XELPDP_PORT_BUF_CTL1_LN0_B, \
+        _XELPDP_PORT_BUF_CTL1_LN0_USBC1, \
+        _XELPDP_PORT_BUF_CTL1_LN0_USBC2) + 8)
+#define XELPDP_PLL_LANE_STAGGERING_DELAY_MASK REG_GENMASK(15, 8)
+#define XELPDP_PLL_LANE_STAGGERING_DELAY(val) REG_FIELD_PREP(XELPDP_PLL_LANE_STAGGERING_DELAY_MASK, val)
+#define XELPDP_POWER_STATE_ACTIVE_MASK REG_GENMASK(3, 0)
+#define XELPDP_POWER_STATE_ACTIVE(val) REG_FIELD_PREP(XELPDP_POWER_STATE_ACTIVE_MASK, val)
+
+#define _XELPDP_PORT_CLOCK_CTL_A 0x640E0
+#define _XELPDP_PORT_CLOCK_CTL_B 0x641E0
+#define _XELPDP_PORT_CLOCK_CTL_USBC1 0x16F260
+#define _XELPDP_PORT_CLOCK_CTL_USBC2 0x16F460
+#define XELPDP_PORT_CLOCK_CTL(port) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \
+        _XELPDP_PORT_CLOCK_CTL_A, \
+        _XELPDP_PORT_CLOCK_CTL_B, \
+        _XELPDP_PORT_CLOCK_CTL_USBC1, \
+        _XELPDP_PORT_CLOCK_CTL_USBC2))
+#define XELPDP_LANE0_PCLK_PLL_REQUEST REG_BIT(31)
+#define XELPDP_LANE0_PCLK_PLL_ACK REG_BIT(30)
+#define XELPDP_LANE0_PCLK_REFCLK_REQUEST REG_BIT(29)
+#define XELPDP_LANE0_PCLK_REFCLK_ACK REG_BIT(28)
+#define XELPDP_LANE1_PCLK_PLL_REQUEST REG_BIT(27)
+#define XELPDP_LANE1_PCLK_PLL_ACK REG_BIT(26)
+#define XELPDP_LANE1_PCLK_REFCLK_REQUEST REG_BIT(25)
+#define XELPDP_LANE1_PCLK_REFCLK_ACK REG_BIT(24)
+#define XELPDP_TBT_CLOCK_REQUEST REG_BIT(19)
+#define XELPDP_TBT_CLOCK_ACK REG_BIT(18)
+#define XELPDP_DDI_CLOCK_SELECT_MASK REG_GENMASK(15, 12)
+#define XELPDP_DDI_CLOCK_SELECT(val) REG_FIELD_PREP(XELPDP_DDI_CLOCK_SELECT_MASK, val)
+#define XELPDP_DDI_CLOCK_SELECT_NONE 0x0
+#define XELPDP_DDI_CLOCK_SELECT_MAXPCLK 0x8
+#define XELPDP_DDI_CLOCK_SELECT_DIV18CLK 0x9
+#define XELPDP_DDI_CLOCK_SELECT_TBT_162 0xc
+#define XELPDP_DDI_CLOCK_SELECT_TBT_270 0xd
+#define XELPDP_DDI_CLOCK_SELECT_TBT_540 0xe
+#define XELPDP_DDI_CLOCK_SELECT_TBT_810 0xf
+#define XELPDP_FORWARD_CLOCK_UNGATE REG_BIT(10)
+#define XELPDP_LANE1_PHY_CLOCK_SELECT REG_BIT(8)
+#define XELPDP_SSC_ENABLE_PLLA REG_BIT(1)
+#define XELPDP_SSC_ENABLE_PLLB REG_BIT(0)
+
+#endif /* __INTEL_CX0_PHY_REGS_H__ */

From patchwork Thu Apr 6 13:02:17 2023
X-Patchwork-Submitter: "Kahola, Mika"
X-Patchwork-Id: 13203358
From: Mika Kahola
To: intel-gfx@lists.freedesktop.org
Date: Thu, 6 Apr 2023 16:02:17 +0300
Message-Id: <20230406130221.2998457-5-mika.kahola@intel.com>
In-Reply-To:
<20230406130221.2998457-1-mika.kahola@intel.com> References: <20230406130221.2998457-1-mika.kahola@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v3 4/8] drm/i915/mtl: Add Support for C10 PHY message bus and pll programming X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Radhakrishna Sripada XELPDP has C10 and C20 phys from Synopsys to drive displays. Each phy has a dedicated PIPE 5.2 Message bus for configuration. This message bus is used to configure the phy internal registers. XELPDP has C10 phys to drive output to the EDP and the native output from the display engine. Add structures, programming hardware state readout logic. Port clock calculations are similar to DG2. Use the DG2 formulae to calculate the port clock but use the relevant pll signals. Note: PHY lane 0 is always used for PLL programming. Add sequences for C10 phy enable/disable phy lane reset, powerdown change sequence and phy lane programming. Bspec: 64539, 64568, 64599, 65100, 65101, 65450, 65451, 67610, 67636 v2: Squash patches related to C10 phy message bus and pll programming support (Jani) Move register definitions to a new file i.e. intel_cx0_reg_defs.h (Jani) Move macro definitions (Jani) DP rates as separate patch (Jani) Spin out xelpdp register definitions into a separate file (Jani) Replace macro to select registers based on phy lane with function calls (Jani) Fix styling issues (Jani) Call XELPDP_PORT_P2M_MSGBUS_STATUS() with port instead of phy (Lucas) v3: Move clear request flag into try-loop v4: On PHY idle change drm_err_once() as drm_dbg_kms() (Jani) use __intel_de_wait_for_register() instead of __intel_wait_for_register and uncomment intel_uncore.h (Jani) Add DP-alt support for PHY lane programming (Khaled) v4: Add tx and cmn on c10mpllb_state (Imre) Add missing waits for pending transactions between two message bus writes (Imre) General cleanups and simplifications (Imre) Cc: Mika Kahola Cc: Imre Deak Cc: Uma Shankar Cc: Gustavo Sousa Signed-off-by: Radhakrishna Sripada Signed-off-by: Mika Kahola Reviewed-by: Imre Deak --- drivers/gpu/drm/i915/Makefile | 1 + drivers/gpu/drm/i915/display/intel_cx0_phy.c | 1206 +++++++++++++++++ drivers/gpu/drm/i915/display/intel_cx0_phy.h | 44 + .../gpu/drm/i915/display/intel_cx0_phy_regs.h | 53 +- drivers/gpu/drm/i915/display/intel_ddi.c | 22 +- .../drm/i915/display/intel_display_types.h | 13 + drivers/gpu/drm/i915/display/intel_dpll.c | 26 +- drivers/gpu/drm/i915/display/intel_dpll_mgr.c | 2 +- .../drm/i915/display/intel_modeset_verify.c | 2 + drivers/gpu/drm/i915/i915_reg.h | 5 + drivers/gpu/drm/i915/i915_reg_defs.h | 57 + 11 files changed, 1419 insertions(+), 12 deletions(-) create mode 100644 drivers/gpu/drm/i915/display/intel_cx0_phy.c create mode 100644 drivers/gpu/drm/i915/display/intel_cx0_phy.h diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 97b0d4ae221a..4ee3b5850dd0 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -298,6 +298,7 @@ i915-y += \ display/icl_dsi.o \ display/intel_backlight.o \ display/intel_crt.o \ + display/intel_cx0_phy.o \ display/intel_ddi.o \ display/intel_ddi_buf_trans.o \ display/intel_display_trace.o \ diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.c b/drivers/gpu/drm/i915/display/intel_cx0_phy.c new file 
mode 100644 index 000000000000..f3e13edd27ba --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.c @@ -0,0 +1,1206 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2021 Intel Corporation + */ + +#include "i915_reg.h" +#include "intel_cx0_phy.h" +#include "intel_cx0_phy_regs.h" +#include "intel_de.h" +#include "intel_display_types.h" +#include "intel_dp.h" +#include "intel_panel.h" +#include "intel_psr.h" +#include "intel_tc.h" + +#define MB_WRITE_COMMITTED true +#define MB_WRITE_UNCOMMITTED false + +bool intel_is_c10phy(struct drm_i915_private *dev_priv, enum phy phy) +{ + if (IS_METEORLAKE(dev_priv) && (phy < PHY_C)) + return true; + + return false; +} + +static int lane_mask_to_lane(u8 lane_mask) +{ + if (WARN_ON((lane_mask & ~INTEL_CX0_BOTH_LANES) || + hweight8(lane_mask) != 1)) + return 0; + + return ilog2(lane_mask); +} + +static void +assert_dc_off(struct drm_i915_private *i915) +{ + bool enabled; + + enabled = intel_display_power_is_enabled(i915, POWER_DOMAIN_DC_OFF); + drm_WARN_ON(&i915->drm, !enabled); +} + +/* + * Prepare HW for CX0 phy transactions. + * + * It is required that PSR and DC5/6 are disabled before any CX0 message + * bus transaction is executed. + */ +static intel_wakeref_t intel_cx0_phy_transaction_begin(struct intel_encoder *encoder) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + + intel_psr_pause(intel_dp); + return intel_display_power_get(i915, POWER_DOMAIN_DC_OFF); +} + +static void intel_cx0_phy_transaction_end(struct intel_encoder *encoder, intel_wakeref_t wakeref) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + + intel_psr_resume(intel_dp); + intel_display_power_put(i915, POWER_DOMAIN_DC_OFF, wakeref); +} + +static void intel_clear_response_ready_flag(struct drm_i915_private *i915, + enum port port, int lane) +{ + intel_de_rmw(i915, XELPDP_PORT_P2M_MSGBUS_STATUS(port, lane), + 0, XELPDP_PORT_P2M_RESPONSE_READY | XELPDP_PORT_P2M_ERROR_SET); +} + +static void intel_cx0_bus_reset(struct drm_i915_private *i915, enum port port, int lane) +{ + enum phy phy = intel_port_to_phy(i915, port); + + /* Bring the phy to idle. */ + intel_de_write(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_RESET); + + /* Wait for Idle Clear. */ + if (intel_de_wait_for_clear(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_RESET, + XELPDP_MSGBUS_TIMEOUT_SLOW)) { + drm_err_once(&i915->drm, "Failed to bring PHY %c to idle.\n", phy_name(phy)); + return; + } + + intel_clear_response_ready_flag(i915, port, lane); +} + +static int intel_cx0_wait_for_ack(struct drm_i915_private *i915, enum port port, + int command, int lane, u32 *val) +{ + enum phy phy = intel_port_to_phy(i915, port); + + if (__intel_de_wait_for_register(i915, + XELPDP_PORT_P2M_MSGBUS_STATUS(port, lane), + XELPDP_PORT_P2M_RESPONSE_READY, + XELPDP_PORT_P2M_RESPONSE_READY, + XELPDP_MSGBUS_TIMEOUT_FAST_US, + XELPDP_MSGBUS_TIMEOUT_SLOW, val)) { + drm_dbg_kms(&i915->drm, "PHY %c Timeout waiting for message ACK. Status: 0x%x\n", phy_name(phy), *val); + return -ETIMEDOUT; + } + + /* Check for error. */ + if (*val & XELPDP_PORT_P2M_ERROR_SET) { + drm_dbg_kms(&i915->drm, "PHY %c Error occurred during %s command. Status: 0x%x\n", + phy_name(phy), command == XELPDP_PORT_P2M_COMMAND_READ_ACK ? "read" : "write", *val); + intel_cx0_bus_reset(i915, port, lane); + return -EINVAL; + } + + /* Check for Read/Write Ack. 
*/ + if (REG_FIELD_GET(XELPDP_PORT_P2M_COMMAND_TYPE_MASK, *val) != command) { + drm_dbg_kms(&i915->drm, "PHY %c Not a %s response. MSGBUS Status: 0x%x.\n", + phy_name(phy), command == XELPDP_PORT_P2M_COMMAND_READ_ACK ? "read" : "write", *val); + intel_cx0_bus_reset(i915, port, lane); + return -EINVAL; + } + + return 0; +} + +static int __intel_cx0_read_once(struct drm_i915_private *i915, enum port port, + int lane, u16 addr) +{ + enum phy phy = intel_port_to_phy(i915, port); + int ack; + u32 val; + + /* Wait for pending transactions.*/ + if (intel_de_wait_for_clear(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_PENDING, + XELPDP_MSGBUS_TIMEOUT_SLOW)) { + drm_dbg_kms(&i915->drm, "PHY %c Timeout waiting for previous transaction to complete. Reset the bus and retry.\n", phy_name(phy)); + intel_cx0_bus_reset(i915, port, lane); + return -ETIMEDOUT; + } + + /* Issue the read command. */ + intel_de_write(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_PENDING | + XELPDP_PORT_M2P_COMMAND_READ | + XELPDP_PORT_M2P_ADDRESS(addr)); + + /* Wait for response ready. And read response.*/ + ack = intel_cx0_wait_for_ack(i915, port, XELPDP_PORT_P2M_COMMAND_READ_ACK, lane, &val); + if (ack < 0) { + intel_cx0_bus_reset(i915, port, lane); + return ack; + } + + /* Clear Response Ready flag.*/ + intel_clear_response_ready_flag(i915, port, lane); + + return REG_FIELD_GET(XELPDP_PORT_P2M_DATA_MASK, val); +} + +static u8 __intel_cx0_read(struct drm_i915_private *i915, enum port port, + int lane, u16 addr) +{ + enum phy phy = intel_port_to_phy(i915, port); + int i, status; + + assert_dc_off(i915); + + /* 3 tries is assumed to be enough to read successfully */ + for (i = 0; i < 3; i++) { + status = __intel_cx0_read_once(i915, port, lane, addr); + + if (status >= 0) + return status; + } + + if (i == 3) + drm_err_once(&i915->drm, "PHY %c Read %04x failed after %d retries.\n", phy_name(phy), addr, i); + + return 0; +} + +static u8 intel_cx0_read(struct drm_i915_private *i915, enum port port, + u8 lane_mask, u16 addr) +{ + int lane = lane_mask_to_lane(lane_mask); + + return __intel_cx0_read(i915, port, lane, addr); +} + +static int __intel_cx0_write_once(struct drm_i915_private *i915, enum port port, + int lane, u16 addr, u8 data, bool committed) +{ + enum phy phy = intel_port_to_phy(i915, port); + u32 val; + + /* Wait for pending transactions */ + if (intel_de_wait_for_clear(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_PENDING, + XELPDP_MSGBUS_TIMEOUT_SLOW)) { + drm_dbg_kms(&i915->drm, "PHY %c Timeout waiting for previous transaction to complete. Resetting the bus.\n", phy_name(phy)); + intel_cx0_bus_reset(i915, port, lane); + return -ETIMEDOUT; + } + + /* Issue the write command. */ + intel_de_write(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_PENDING | + (committed ? XELPDP_PORT_M2P_COMMAND_WRITE_COMMITTED : + XELPDP_PORT_M2P_COMMAND_WRITE_UNCOMMITTED) | + XELPDP_PORT_M2P_DATA(data) | + XELPDP_PORT_M2P_ADDRESS(addr)); + + /* Wait for pending transactions.*/ + if (intel_de_wait_for_clear(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_PENDING, + XELPDP_MSGBUS_TIMEOUT_SLOW)) { + drm_dbg_kms(&i915->drm, "PHY %c Timeout waiting for write to complete. Resetting the bus.\n", phy_name(phy)); + intel_cx0_bus_reset(i915, port, lane); + return -ETIMEDOUT; + } + + /* Check for error. 
*/ + if (committed) { + if (intel_cx0_wait_for_ack(i915, port, XELPDP_PORT_P2M_COMMAND_WRITE_ACK, lane, &val) < 0) { + intel_cx0_bus_reset(i915, port, lane); + return -EINVAL; + } + } else if ((intel_de_read(i915, XELPDP_PORT_P2M_MSGBUS_STATUS(port, lane)) & + XELPDP_PORT_P2M_ERROR_SET)) { + drm_dbg_kms(&i915->drm, "PHY %c Error occurred during write command.\n", phy_name(phy)); + intel_cx0_bus_reset(i915, port, lane); + return -EINVAL; + } + + /* Clear Response Ready flag.*/ + intel_clear_response_ready_flag(i915, port, lane); + + return 0; +} + +static void __intel_cx0_write(struct drm_i915_private *i915, enum port port, + int lane, u16 addr, u8 data, bool committed) +{ + enum phy phy = intel_port_to_phy(i915, port); + int i, status; + + assert_dc_off(i915); + + /* 3 tries is assumed to be enough to write successfully */ + for (i = 0; i < 3; i++) { + status = __intel_cx0_write_once(i915, port, lane, addr, data, committed); + + if (status == 0) + return; + } + + if (i == 3) + drm_err_once(&i915->drm, "PHY %c Write %04x failed after %d retries.\n", phy_name(phy), addr, i); +} + +static void intel_cx0_write(struct drm_i915_private *i915, enum port port, + u8 lane_mask, u16 addr, u8 data, bool committed) +{ + int lane; + + for_each_cx0_lane_in_mask(lane_mask, lane) + __intel_cx0_write(i915, port, lane, addr, data, committed); +} + +static void __intel_cx0_rmw(struct drm_i915_private *i915, enum port port, + int lane, u16 addr, u8 clear, u8 set, bool committed) +{ + u8 old, val; + + old = __intel_cx0_read(i915, port, lane, addr); + val = (old & ~clear) | set; + + if (val != old) + __intel_cx0_write(i915, port, lane, addr, val, committed); +} + +static void intel_cx0_rmw(struct drm_i915_private *i915, enum port port, + u8 lane_mask, u16 addr, u8 clear, u8 set, bool committed) +{ + u8 lane; + + for_each_cx0_lane_in_mask(lane_mask, lane) + __intel_cx0_rmw(i915, port, lane, addr, clear, set, committed); +} + +/* + * Basic DP link rates with 38.4 MHz reference clock. + * Note: The tables below are with SSC. In non-ssc + * registers 0xC04 to 0xC08(pll[4] to pll[8]) will be + * programmed 0. 
+ */ + +static const struct intel_c10mpllb_state mtl_c10_dp_rbr = { + .clock = 162000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0xB4, + .pll[1] = 0, + .pll[2] = 0x30, + .pll[3] = 0x1, + .pll[4] = 0x26, + .pll[5] = 0x0C, + .pll[6] = 0x98, + .pll[7] = 0x46, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0xC0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0x2, + .pll[16] = 0x84, + .pll[17] = 0x4F, + .pll[18] = 0xE5, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_edp_r216 = { + .clock = 216000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0x4, + .pll[1] = 0, + .pll[2] = 0xA2, + .pll[3] = 0x1, + .pll[4] = 0x33, + .pll[5] = 0x10, + .pll[6] = 0x75, + .pll[7] = 0xB3, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0x2, + .pll[16] = 0x85, + .pll[17] = 0x0F, + .pll[18] = 0xE6, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_edp_r243 = { + .clock = 243000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0x34, + .pll[1] = 0, + .pll[2] = 0xDA, + .pll[3] = 0x1, + .pll[4] = 0x39, + .pll[5] = 0x12, + .pll[6] = 0xE3, + .pll[7] = 0xE9, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0x20, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0x2, + .pll[16] = 0x85, + .pll[17] = 0x8F, + .pll[18] = 0xE6, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_dp_hbr1 = { + .clock = 270000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0xF4, + .pll[1] = 0, + .pll[2] = 0xF8, + .pll[3] = 0x0, + .pll[4] = 0x20, + .pll[5] = 0x0A, + .pll[6] = 0x29, + .pll[7] = 0x10, + .pll[8] = 0x1, /* Verify */ + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0xA0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0x1, + .pll[16] = 0x84, + .pll[17] = 0x4F, + .pll[18] = 0xE5, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_edp_r324 = { + .clock = 324000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0xB4, + .pll[1] = 0, + .pll[2] = 0x30, + .pll[3] = 0x1, + .pll[4] = 0x26, + .pll[5] = 0x0C, + .pll[6] = 0x98, + .pll[7] = 0x46, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0xC0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0x1, + .pll[16] = 0x85, + .pll[17] = 0x4F, + .pll[18] = 0xE6, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_edp_r432 = { + .clock = 432000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0x4, + .pll[1] = 0, + .pll[2] = 0xA2, + .pll[3] = 0x1, + .pll[4] = 0x33, + .pll[5] = 0x10, + .pll[6] = 0x75, + .pll[7] = 0xB3, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0x1, + .pll[16] = 0x85, + .pll[17] = 0x0F, + .pll[18] = 0xE6, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_dp_hbr2 = { + .clock = 540000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0xF4, + .pll[1] = 0, + .pll[2] = 0xF8, + .pll[3] = 0, + .pll[4] = 0x20, + .pll[5] = 0x0A, + .pll[6] = 0x29, + .pll[7] = 0x10, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0xA0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0, + .pll[16] = 0x84, + .pll[17] = 0x4F, + .pll[18] = 0xE5, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_edp_r675 = { + .clock = 675000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0xB4, + .pll[1] = 0, + .pll[2] = 0x3E, + .pll[3] = 0x1, + .pll[4] = 0xA8, + .pll[5] = 0x0C, + .pll[6] = 0x33, + .pll[7] = 0x54, + .pll[8] = 0x1, + 
.pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0xC8, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0, + .pll[16] = 0x85, + .pll[17] = 0x8F, + .pll[18] = 0xE6, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state mtl_c10_dp_hbr3 = { + .clock = 810000, + .tx = 0x10, + .cmn = 0x21, + .pll[0] = 0x34, + .pll[1] = 0, + .pll[2] = 0x84, + .pll[3] = 0x1, + .pll[4] = 0x30, + .pll[5] = 0x0F, + .pll[6] = 0x3D, + .pll[7] = 0x98, + .pll[8] = 0x1, + .pll[9] = 0x1, + .pll[10] = 0, + .pll[11] = 0, + .pll[12] = 0xF0, + .pll[13] = 0, + .pll[14] = 0, + .pll[15] = 0, + .pll[16] = 0x84, + .pll[17] = 0x0F, + .pll[18] = 0xE5, + .pll[19] = 0x23, +}; + +static const struct intel_c10mpllb_state * const mtl_c10_dp_tables[] = { + &mtl_c10_dp_rbr, + &mtl_c10_dp_hbr1, + &mtl_c10_dp_hbr2, + &mtl_c10_dp_hbr3, + NULL, +}; + +static const struct intel_c10mpllb_state * const mtl_c10_edp_tables[] = { + &mtl_c10_dp_rbr, + &mtl_c10_edp_r216, + &mtl_c10_edp_r243, + &mtl_c10_dp_hbr1, + &mtl_c10_edp_r324, + &mtl_c10_edp_r432, + &mtl_c10_dp_hbr2, + &mtl_c10_edp_r675, + &mtl_c10_dp_hbr3, + NULL, +}; + +static const struct intel_c10mpllb_state * const * +intel_c10_mpllb_tables_get(struct intel_crtc_state *crtc_state, + struct intel_encoder *encoder) +{ + if (intel_crtc_has_dp_encoder(crtc_state)) { + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP)) + return mtl_c10_edp_tables; + else + return mtl_c10_dp_tables; + } + + /* TODO: Add HDMI Support */ + MISSING_CASE(encoder->type); + return NULL; +} + +static void intel_c10mpllb_update_pll(struct intel_crtc_state *crtc_state, + struct intel_encoder *encoder) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_cx0pll_state *pll_state = &crtc_state->cx0pll_state; + int i; + + if (intel_crtc_has_dp_encoder(crtc_state)) { + if (intel_panel_use_ssc(i915)) { + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + + pll_state->ssc_enabled = + (intel_dp->dpcd[DP_MAX_DOWNSPREAD] & DP_MAX_DOWNSPREAD_0_5); + } + } + + if (pll_state->ssc_enabled) + return; + + drm_WARN_ON(&i915->drm, ARRAY_SIZE(pll_state->c10.pll) < 9); + for (i = 4; i < 9; i++) + pll_state->c10.pll[i] = 0; +} + +static int intel_c10mpllb_calc_state(struct intel_crtc_state *crtc_state, + struct intel_encoder *encoder) +{ + const struct intel_c10mpllb_state * const *tables; + int i; + + tables = intel_c10_mpllb_tables_get(crtc_state, encoder); + if (!tables) + return -EINVAL; + + for (i = 0; tables[i]; i++) { + if (crtc_state->port_clock == tables[i]->clock) { + crtc_state->cx0pll_state.c10 = *tables[i]; + intel_c10mpllb_update_pll(crtc_state, encoder); + + return 0; + } + } + + return -EINVAL; +} + +int intel_cx0mpllb_calc_state(struct intel_crtc_state *crtc_state, + struct intel_encoder *encoder) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + enum phy phy = intel_port_to_phy(i915, encoder->port); + + drm_WARN_ON(&i915->drm, !intel_is_c10phy(i915, phy)); + + return intel_c10mpllb_calc_state(crtc_state, encoder); +} + +void intel_c10mpllb_readout_hw_state(struct intel_encoder *encoder, + struct intel_c10mpllb_state *pll_state) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + u8 lane = INTEL_CX0_LANE0; + intel_wakeref_t wakeref; + int i; + + wakeref = intel_cx0_phy_transaction_begin(encoder); + + /* + * According to C10 VDR Register programming Sequence we need + * to do this to read PHY internal registers from MsgBus. 
+ */ + intel_cx0_rmw(i915, encoder->port, lane, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_MSGBUS_ACCESS, + MB_WRITE_COMMITTED); + + for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++) + pll_state->pll[i] = intel_cx0_read(i915, encoder->port, lane, + PHY_C10_VDR_PLL(i)); + + pll_state->cmn = intel_cx0_read(i915, encoder->port, lane, PHY_C10_VDR_CMN(0)); + pll_state->tx = intel_cx0_read(i915, encoder->port, lane, PHY_C10_VDR_TX(0)); + + intel_cx0_phy_transaction_end(encoder, wakeref); +} + +static void intel_c10_pll_program(struct drm_i915_private *i915, + const struct intel_crtc_state *crtc_state, + struct intel_encoder *encoder) +{ + const struct intel_c10mpllb_state *pll_state = &crtc_state->cx0pll_state.c10; + int i; + + intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_MSGBUS_ACCESS, + MB_WRITE_COMMITTED); + /* Custom width needs to be programmed to 0 for both the phy lanes */ + intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, + C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10, + MB_WRITE_COMMITTED); + intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_UPDATE_CFG, + MB_WRITE_COMMITTED); + + /* Program the pll values only for the master lane */ + for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++) + intel_cx0_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i), + pll_state->pll[i], + (i % 4) ? MB_WRITE_UNCOMMITTED : MB_WRITE_COMMITTED); + + intel_cx0_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED); + intel_cx0_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED); + + intel_cx0_rmw(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG, + MB_WRITE_COMMITTED); +} + +void intel_c10mpllb_dump_hw_state(struct drm_i915_private *dev_priv, + const struct intel_c10mpllb_state *hw_state) +{ + bool fracen; + int i; + unsigned int frac_quot = 0, frac_rem = 0, frac_den = 1; + unsigned int multiplier, tx_clk_div; + + fracen = hw_state->pll[0] & C10_PLL0_FRACEN; + drm_dbg_kms(&dev_priv->drm, "c10pll_hw_state: fracen: %s, ", + str_yes_no(fracen)); + + if (fracen) { + frac_quot = hw_state->pll[12] << 8 | hw_state->pll[11]; + frac_rem = hw_state->pll[14] << 8 | hw_state->pll[13]; + frac_den = hw_state->pll[10] << 8 | hw_state->pll[9]; + drm_dbg_kms(&dev_priv->drm, "quot: %u, rem: %u, den: %u,\n", + frac_quot, frac_rem, frac_den); + } + + multiplier = (REG_FIELD_GET8(C10_PLL3_MULTIPLIERH_MASK, hw_state->pll[3]) << 8 | + hw_state->pll[2]) / 2 + 16; + tx_clk_div = REG_FIELD_GET8(C10_PLL15_TXCLKDIV_MASK, hw_state->pll[15]); + drm_dbg_kms(&dev_priv->drm, + "multiplier: %u, tx_clk_div: %u.\n", multiplier, tx_clk_div); + + drm_dbg_kms(&dev_priv->drm, "c10pll_rawhw_state:"); + drm_dbg_kms(&dev_priv->drm, "tx: 0x%x, cmn: 0x%x\n", hw_state->tx, hw_state->cmn); + + for (i = 0; i < ARRAY_SIZE(hw_state->pll); i = i + 4) + drm_dbg_kms(&dev_priv->drm, "pll[%d] = 0x%x, pll[%d] = 0x%x, pll[%d] = 0x%x, pll[%d] = 0x%x\n", + i, hw_state->pll[i], i + 1, hw_state->pll[i + 1], + i + 2, hw_state->pll[i + 2], i + 3, hw_state->pll[i + 3]); +} + +int intel_c10mpllb_calc_port_clock(struct intel_encoder *encoder, + const struct intel_c10mpllb_state *pll_state) +{ + unsigned int frac_quot = 0, frac_rem = 0, frac_den = 1; + unsigned int multiplier, tx_clk_div, refclk = 38400; + + if (pll_state->pll[0] & C10_PLL0_FRACEN) { + frac_quot = 
pll_state->pll[12] << 8 | pll_state->pll[11]; + frac_rem = pll_state->pll[14] << 8 | pll_state->pll[13]; + frac_den = pll_state->pll[10] << 8 | pll_state->pll[9]; + } + + multiplier = (REG_FIELD_GET8(C10_PLL3_MULTIPLIERH_MASK, pll_state->pll[3]) << 8 | + pll_state->pll[2]) / 2 + 16; + + tx_clk_div = REG_FIELD_GET8(C10_PLL15_TXCLKDIV_MASK, pll_state->pll[15]); + + return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(refclk, (multiplier << 16) + frac_quot) + + DIV_ROUND_CLOSEST(refclk * frac_rem, frac_den), + 10 << (tx_clk_div + 16)); +} + +static void intel_program_port_clock_ctl(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state, + bool lane_reversal) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + u32 val = 0; + + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL1(encoder->port), XELPDP_PORT_REVERSAL, + lane_reversal ? XELPDP_PORT_REVERSAL : 0); + + if (lane_reversal) + val |= XELPDP_LANE1_PHY_CLOCK_SELECT; + + val |= XELPDP_FORWARD_CLOCK_UNGATE; + val |= XELPDP_DDI_CLOCK_SELECT(XELPDP_DDI_CLOCK_SELECT_MAXPCLK); + + /* TODO: HDMI FRL */ + /* TODO: DP2.0 10G and 20G rates enable MPLLA*/ + val |= crtc_state->cx0pll_state.ssc_enabled ? XELPDP_SSC_ENABLE_PLLB : 0; + + intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), + XELPDP_LANE1_PHY_CLOCK_SELECT | + XELPDP_FORWARD_CLOCK_UNGATE | + XELPDP_DDI_CLOCK_SELECT_MASK | + XELPDP_SSC_ENABLE_PLLB, val); +} + +static u32 intel_cx0_get_powerdown_update(u8 lane_mask) +{ + u32 val = 0; + int lane = 0; + + for_each_cx0_lane_in_mask(lane_mask, lane) + val |= XELPDP_LANE_POWERDOWN_UPDATE(lane); + + return val; +} + +static u32 intel_cx0_get_powerdown_state(u8 lane_mask, u8 state) +{ + u32 val = 0; + int lane = 0; + + for_each_cx0_lane_in_mask(lane_mask, lane) + val |= XELPDP_LANE_POWERDOWN_NEW_STATE(lane, state); + + return val; +} + +static void intel_cx0_powerdown_change_sequence(struct drm_i915_private *i915, + enum port port, + u8 lane_mask, u8 state) +{ + enum phy phy = intel_port_to_phy(i915, port); + int lane; + + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(port), + intel_cx0_get_powerdown_state(INTEL_CX0_BOTH_LANES, XELPDP_LANE_POWERDOWN_NEW_STATE_MASK), + intel_cx0_get_powerdown_state(lane_mask, state)); + + /* Wait for pending transactions.*/ + for_each_cx0_lane_in_mask(lane_mask, lane) + if (intel_de_wait_for_clear(i915, XELPDP_PORT_M2P_MSGBUS_CTL(port, lane), + XELPDP_PORT_M2P_TRANSACTION_PENDING, + XELPDP_MSGBUS_TIMEOUT_SLOW)) { + drm_dbg_kms(&i915->drm, + "PHY %c Timeout waiting for previous transaction to complete. 
Reset the bus.\n", + phy_name(phy)); + intel_cx0_bus_reset(i915, port, lane); + } + + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(port), + intel_cx0_get_powerdown_update(INTEL_CX0_BOTH_LANES), + intel_cx0_get_powerdown_update(lane_mask)); + + /* Update Timeout Value */ + if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL2(port), + intel_cx0_get_powerdown_update(lane_mask), 0, + XELPDP_PORT_POWERDOWN_UPDATE_TIMEOUT_US, 0, NULL)) + drm_warn(&i915->drm, "PHY %c failed to bring out of Lane reset after %dus.\n", + phy_name(phy), XELPDP_PORT_RESET_START_TIMEOUT_US); +} + +static void intel_cx0_setup_powerdown(struct drm_i915_private *i915, enum port port) +{ + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(port), + XELPDP_POWER_STATE_READY_MASK, + XELPDP_POWER_STATE_READY(CX0_P2_STATE_READY)); + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL3(port), + XELPDP_POWER_STATE_ACTIVE_MASK | + XELPDP_PLL_LANE_STAGGERING_DELAY_MASK, + XELPDP_POWER_STATE_ACTIVE(CX0_P0_STATE_ACTIVE) | + XELPDP_PLL_LANE_STAGGERING_DELAY(0)); +} + +static u32 intel_cx0_get_pclk_refclk_request(u8 lane_mask) +{ + u32 val = 0; + int lane = 0; + + for_each_cx0_lane_in_mask(lane_mask, lane) + val |= XELPDP_LANE_PCLK_REFCLK_REQUEST(lane); + + return val; +} + +static u32 intel_cx0_get_pclk_refclk_ack(u8 lane_mask) +{ + u32 val = 0; + int lane = 0; + + for_each_cx0_lane_in_mask(lane_mask, lane) + val |= XELPDP_LANE_PCLK_REFCLK_ACK(lane); + + return val; +} + +/* FIXME: Some Type-C cases need not reset both the lanes. Handle those cases. */ +static void intel_cx0_phy_lane_reset(struct drm_i915_private *i915, enum port port, + bool lane_reversal) +{ + enum phy phy = intel_port_to_phy(i915, port); + u8 lane_mask = lane_reversal ? INTEL_CX0_LANE1 : + INTEL_CX0_LANE0; + + if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL1(port), + XELPDP_PORT_BUF_SOC_PHY_READY, + XELPDP_PORT_BUF_SOC_PHY_READY, + XELPDP_PORT_BUF_SOC_READY_TIMEOUT_US, 0, NULL)) + drm_warn(&i915->drm, "PHY %c failed to bring out of SOC reset after %dus.\n", + phy_name(phy), XELPDP_PORT_BUF_SOC_READY_TIMEOUT_US); + + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(port), + XELPDP_LANE_PIPE_RESET(0) | XELPDP_LANE_PIPE_RESET(1), + XELPDP_LANE_PIPE_RESET(0) | XELPDP_LANE_PIPE_RESET(1)); + + if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL2(port), + XELPDP_LANE_PHY_CURRENT_STATUS(0) | + XELPDP_LANE_PHY_CURRENT_STATUS(1), + XELPDP_LANE_PHY_CURRENT_STATUS(0) | + XELPDP_LANE_PHY_CURRENT_STATUS(1), + XELPDP_PORT_RESET_START_TIMEOUT_US, 0, NULL)) + drm_warn(&i915->drm, "PHY %c failed to bring out of Lane reset after %dus.\n", + phy_name(phy), XELPDP_PORT_RESET_START_TIMEOUT_US); + + intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(port), + intel_cx0_get_pclk_refclk_request(INTEL_CX0_BOTH_LANES), + intel_cx0_get_pclk_refclk_request(lane_mask)); + + if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(port), + intel_cx0_get_pclk_refclk_ack(INTEL_CX0_BOTH_LANES), + intel_cx0_get_pclk_refclk_ack(lane_mask), + XELPDP_REFCLK_ENABLE_TIMEOUT_US, 0, NULL)) + drm_warn(&i915->drm, "PHY %c failed to request refclk after %dus.\n", + phy_name(phy), XELPDP_REFCLK_ENABLE_TIMEOUT_US); + + intel_cx0_powerdown_change_sequence(i915, port, INTEL_CX0_BOTH_LANES, + CX0_P2_STATE_RESET); + intel_cx0_setup_powerdown(i915, port); + + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(port), + XELPDP_LANE_PIPE_RESET(0) | XELPDP_LANE_PIPE_RESET(1), + 0); + + if (intel_de_wait_for_clear(i915, XELPDP_PORT_BUF_CTL2(port), + XELPDP_LANE_PHY_CURRENT_STATUS(0) | + XELPDP_LANE_PHY_CURRENT_STATUS(1), + 
XELPDP_PORT_RESET_END_TIMEOUT)) + drm_warn(&i915->drm, "PHY %c failed to bring out of Lane reset after %dms.\n", + phy_name(phy), XELPDP_PORT_RESET_END_TIMEOUT); +} + +static void intel_c10_program_phy_lane(struct drm_i915_private *i915, + struct intel_encoder *encoder, int lane_count, + bool lane_reversal) +{ + u8 l0t1, l0t2, l1t1, l1t2; + bool dp_alt_mode = intel_tc_port_in_dp_alt_mode(enc_to_dig_port(encoder)); + enum port port = encoder->port; + + intel_cx0_rmw(i915, port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_MSGBUS_ACCESS, + MB_WRITE_COMMITTED); + + /* TODO: DP-alt MFD case where only one PHY lane should be programmed. */ + l0t1 = intel_cx0_read(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(1, 2)); + l0t2 = intel_cx0_read(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(2, 2)); + l1t1 = intel_cx0_read(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(1, 2)); + l1t2 = intel_cx0_read(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(2, 2)); + + l0t1 |= CONTROL2_DISABLE_SINGLE_TX; + l0t2 |= CONTROL2_DISABLE_SINGLE_TX; + l1t1 |= CONTROL2_DISABLE_SINGLE_TX; + l1t2 |= CONTROL2_DISABLE_SINGLE_TX; + + if (lane_reversal) { + switch (lane_count) { + case 4: + l0t1 &= ~CONTROL2_DISABLE_SINGLE_TX; + fallthrough; + case 3: + l0t2 &= ~CONTROL2_DISABLE_SINGLE_TX; + fallthrough; + case 2: + l1t1 &= ~CONTROL2_DISABLE_SINGLE_TX; + fallthrough; + case 1: + l1t2 &= ~CONTROL2_DISABLE_SINGLE_TX; + break; + default: + MISSING_CASE(lane_count); + } + } else { + switch (lane_count) { + case 4: + l1t2 &= ~CONTROL2_DISABLE_SINGLE_TX; + fallthrough; + case 3: + l1t1 &= ~CONTROL2_DISABLE_SINGLE_TX; + fallthrough; + case 2: + l0t2 &= ~CONTROL2_DISABLE_SINGLE_TX; + l0t1 &= ~CONTROL2_DISABLE_SINGLE_TX; + break; + case 1: + if (dp_alt_mode) + l0t2 &= ~CONTROL2_DISABLE_SINGLE_TX; + else + l0t1 &= ~CONTROL2_DISABLE_SINGLE_TX; + break; + default: + MISSING_CASE(lane_count); + } + } + + /* disable MLs */ + intel_cx0_write(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(1, 2), + l0t1, MB_WRITE_COMMITTED); + intel_cx0_write(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(2, 2), + l0t2, MB_WRITE_COMMITTED); + intel_cx0_write(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(1, 2), + l1t1, MB_WRITE_COMMITTED); + intel_cx0_write(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(2, 2), + l1t2, MB_WRITE_COMMITTED); + + intel_cx0_rmw(i915, port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_UPDATE_CFG, + MB_WRITE_COMMITTED); +} + +static u32 intel_cx0_get_pclk_pll_request(u8 lane_mask) +{ + u32 val = 0; + int lane = 0; + + for_each_cx0_lane_in_mask(lane_mask, lane) + val |= XELPDP_LANE_PCLK_PLL_REQUEST(lane); + + return val; +} + +static u32 intel_cx0_get_pclk_pll_ack(u8 lane_mask) +{ + u32 val = 0; + int lane = 0; + + for_each_cx0_lane_in_mask(lane_mask, lane) + val |= XELPDP_LANE_PCLK_PLL_ACK(lane); + + return val; +} + +static void intel_c10pll_enable(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + enum phy phy = intel_port_to_phy(i915, encoder->port); + struct intel_digital_port *dig_port = enc_to_dig_port(encoder); + bool lane_reversal = dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL; + u8 maxpclk_lane = lane_reversal ? INTEL_CX0_LANE1 : + INTEL_CX0_LANE0; + + /* + * 1. Program PORT_CLOCK_CTL REGISTER to configure + * clock muxes, gating and SSC + */ + intel_program_port_clock_ctl(encoder, crtc_state, lane_reversal); + + /* 2. Bring PHY out of reset. 
*/ + intel_cx0_phy_lane_reset(i915, encoder->port, lane_reversal); + + /* + * 3. Change Phy power state to Ready. + * TODO: For DP alt mode use only one lane. + */ + intel_cx0_powerdown_change_sequence(i915, encoder->port, INTEL_CX0_BOTH_LANES, + CX0_P2_STATE_READY); + + /* 4. Program PHY internal PLL internal registers. */ + intel_c10_pll_program(i915, crtc_state, encoder); + + /* + * 5. Program the enabled and disabled owned PHY lane + * transmitters over message bus + */ + intel_c10_program_phy_lane(i915, encoder, crtc_state->lane_count, lane_reversal); + + /* + * 6. Follow the Display Voltage Frequency Switching - Sequence + * Before Frequency Change. We handle this step in bxt_set_cdclk(). + */ + + /* + * 7. Program DDI_CLK_VALFREQ to match intended DDI + * clock frequency. + */ + intel_de_write(i915, DDI_CLK_VALFREQ(encoder->port), + crtc_state->port_clock); + + /* + * 8. Set PORT_CLOCK_CTL register PCLK PLL Request + * LN to "1" to enable PLL. + */ + intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), + intel_cx0_get_pclk_pll_request(INTEL_CX0_BOTH_LANES), + intel_cx0_get_pclk_pll_request(maxpclk_lane)); + + /* 9. Poll on PORT_CLOCK_CTL PCLK PLL Ack LN == "1". */ + if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), + intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES), + intel_cx0_get_pclk_pll_ack(maxpclk_lane), + XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US, 0, NULL)) + drm_warn(&i915->drm, "Port %c PLL not locked after %dus.\n", + phy_name(phy), XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US); + + /* + * 10. Follow the Display Voltage Frequency Switching Sequence After + * Frequency Change. We handle this step in bxt_set_cdclk(). + */ +} + +void intel_cx0pll_enable(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + enum phy phy = intel_port_to_phy(i915, encoder->port); + intel_wakeref_t wakeref; + + wakeref = intel_cx0_phy_transaction_begin(encoder); + + drm_WARN_ON(&i915->drm, !intel_is_c10phy(i915, phy)); + intel_c10pll_enable(encoder, crtc_state); + + /* TODO: enable TBT-ALT mode */ + intel_cx0_phy_transaction_end(encoder, wakeref); +} + +static void intel_c10pll_disable(struct intel_encoder *encoder) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + enum phy phy = intel_port_to_phy(i915, encoder->port); + + /* 1. Change owned PHY lane power to Disable state. */ + intel_cx0_powerdown_change_sequence(i915, encoder->port, INTEL_CX0_BOTH_LANES, + CX0_P2PG_STATE_DISABLE); + + /* + * 2. Follow the Display Voltage Frequency Switching Sequence Before + * Frequency Change. We handle this step in bxt_set_cdclk(). + */ + + /* + * 3. Set PORT_CLOCK_CTL register PCLK PLL Request LN + * to "0" to disable PLL. + */ + intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), + intel_cx0_get_pclk_pll_request(INTEL_CX0_BOTH_LANES) | + intel_cx0_get_pclk_refclk_request(INTEL_CX0_BOTH_LANES), 0); + + /* 4. Program DDI_CLK_VALFREQ to 0. */ + intel_de_write(i915, DDI_CLK_VALFREQ(encoder->port), 0); + + /* + * 5. Poll on PORT_CLOCK_CTL PCLK PLL Ack LN == "0". + */ + if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), + intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES) | + intel_cx0_get_pclk_refclk_ack(INTEL_CX0_BOTH_LANES), 0, + XELPDP_PCLK_PLL_DISABLE_TIMEOUT_US, 0, NULL)) + drm_warn(&i915->drm, "Port %c PLL not unlocked after %dus.\n", + phy_name(phy), XELPDP_PCLK_PLL_DISABLE_TIMEOUT_US); + + /* + * 6. 
Follow the Display Voltage Frequency Switching Sequence After + * Frequency Change. We handle this step in bxt_set_cdclk(). + */ + + /* 7. Program PORT_CLOCK_CTL register to disable and gate clocks. */ + intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), + XELPDP_DDI_CLOCK_SELECT_MASK | + XELPDP_FORWARD_CLOCK_UNGATE, 0); +} + +void intel_cx0pll_disable(struct intel_encoder *encoder) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + enum phy phy = intel_port_to_phy(i915, encoder->port); + intel_wakeref_t wakeref; + + wakeref = intel_cx0_phy_transaction_begin(encoder); + + drm_WARN_ON(&i915->drm, !intel_is_c10phy(i915, phy)); + intel_c10pll_disable(encoder); + intel_cx0_phy_transaction_end(encoder, wakeref); +} + +void intel_c10mpllb_state_verify(struct intel_atomic_state *state, + struct intel_crtc_state *new_crtc_state) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + struct intel_c10mpllb_state mpllb_hw_state = { 0 }; + struct intel_c10mpllb_state *mpllb_sw_state = &new_crtc_state->cx0pll_state.c10; + struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc); + struct intel_encoder *encoder; + enum phy phy; + int i; + + if (DISPLAY_VER(i915) < 14) + return; + + if (!new_crtc_state->hw.active) + return; + + encoder = intel_get_crtc_new_encoder(state, new_crtc_state); + phy = intel_port_to_phy(i915, encoder->port); + + if (!intel_is_c10phy(i915, phy)) + return; + + intel_c10mpllb_readout_hw_state(encoder, &mpllb_hw_state); + + for (i = 0; i < ARRAY_SIZE(mpllb_sw_state->pll); i++) { + u8 expected = mpllb_sw_state->pll[i]; + + I915_STATE_WARN(mpllb_hw_state.pll[i] != expected, + "[CRTC:%d:%s] mismatch in C10MPLLB: Register[%d] (expected 0x%02x, found 0x%02x)", + crtc->base.base.id, crtc->base.name, + i, expected, mpllb_hw_state.pll[i]); + + I915_STATE_WARN(mpllb_hw_state.tx != mpllb_sw_state->tx, + "[CRTC:%d:%s] mismatch in C10MPLLB: Register TX0 (expected 0x%02x, found 0x%02x)", + crtc->base.base.id, crtc->base.name, + mpllb_sw_state->tx, mpllb_hw_state.tx); + + I915_STATE_WARN(mpllb_hw_state.cmn != mpllb_sw_state->cmn, + "[CRTC:%d:%s] mismatch in C10MPLLB: Register CMN0 (expected 0x%02x, found 0x%02x)", + crtc->base.base.id, crtc->base.name, + mpllb_sw_state->cmn, mpllb_hw_state.cmn); + } +} diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.h b/drivers/gpu/drm/i915/display/intel_cx0_phy.h new file mode 100644 index 000000000000..30b1b11b2176 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.h @@ -0,0 +1,44 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2021 Intel Corporation + */ + +#ifndef __INTEL_CX0_PHY_H__ +#define __INTEL_CX0_PHY_H__ + +#include +#include +#include + +#include "i915_drv.h" +#include "intel_display_types.h" + +struct drm_i915_private; +struct intel_encoder; +struct intel_crtc_state; +enum phy; + +#define for_each_cx0_lane_in_mask(__lane_mask, __lane) \ + for ((__lane) = 0; (__lane) < 2; (__lane)++) \ + for_each_if((__lane_mask) & BIT(__lane)) + +#define INTEL_CX0_LANE0 BIT(0) +#define INTEL_CX0_LANE1 BIT(1) +#define INTEL_CX0_BOTH_LANES (INTEL_CX0_LANE1 | INTEL_CX0_LANE0) + +bool intel_is_c10phy(struct drm_i915_private *dev_priv, enum phy phy); +void intel_cx0pll_enable(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state); +void intel_cx0pll_disable(struct intel_encoder *encoder); +void intel_c10mpllb_readout_hw_state(struct intel_encoder *encoder, + struct intel_c10mpllb_state *pll_state); +int intel_cx0mpllb_calc_state(struct intel_crtc_state *crtc_state, + struct 
intel_encoder *encoder); +void intel_c10mpllb_dump_hw_state(struct drm_i915_private *dev_priv, + const struct intel_c10mpllb_state *hw_state); +int intel_c10mpllb_calc_port_clock(struct intel_encoder *encoder, + const struct intel_c10mpllb_state *pll_state); +void intel_c10mpllb_state_verify(struct intel_atomic_state *state, + struct intel_crtc_state *new_crtc_state); + +#endif /* __INTEL_CX0_PHY_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h index 27723c1a93d9..16061a6e52f6 100644 --- a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h @@ -96,6 +96,11 @@ #define XELPDP_PLL_LANE_STAGGERING_DELAY(val) REG_FIELD_PREP(XELPDP_PLL_LANE_STAGGERING_DELAY_MASK, val) #define XELPDP_POWER_STATE_ACTIVE_MASK REG_GENMASK(3, 0) #define XELPDP_POWER_STATE_ACTIVE(val) REG_FIELD_PREP(XELPDP_POWER_STATE_ACTIVE_MASK, val) +#define CX0_P0_STATE_ACTIVE 0x0 +#define CX0_P2_STATE_READY 0x2 +#define CX0_P2PG_STATE_DISABLE 0x9 +#define CX0_P4PG_STATE_DISABLE 0xC +#define CX0_P2_STATE_RESET 0x2 #define _XELPDP_PORT_CLOCK_CTL_A 0x640E0 #define _XELPDP_PORT_CLOCK_CTL_B 0x641E0 @@ -106,14 +111,11 @@ _XELPDP_PORT_CLOCK_CTL_B, \ _XELPDP_PORT_CLOCK_CTL_USBC1, \ _XELPDP_PORT_CLOCK_CTL_USBC2)) -#define XELPDP_LANE0_PCLK_PLL_REQUEST REG_BIT(31) -#define XELPDP_LANE0_PCLK_PLL_ACK REG_BIT(30) -#define XELPDP_LANE0_PCLK_REFCLK_REQUEST REG_BIT(29) -#define XELPDP_LANE0_PCLK_REFCLK_ACK REG_BIT(28) -#define XELPDP_LANE1_PCLK_PLL_REQUEST REG_BIT(27) -#define XELPDP_LANE1_PCLK_PLL_ACK REG_BIT(26) -#define XELPDP_LANE1_PCLK_REFCLK_REQUEST REG_BIT(25) -#define XELPDP_LANE1_PCLK_REFCLK_ACK REG_BIT(24) +#define XELPDP_LANE_PCLK_PLL_REQUEST(lane) _PICK(lane, REG_BIT(31), REG_BIT(27)) +#define XELPDP_LANE_PCLK_PLL_ACK(lane) _PICK(lane, REG_BIT(30), REG_BIT(26)) +#define XELPDP_LANE_PCLK_REFCLK_REQUEST(lane) _PICK(lane, REG_BIT(29), REG_BIT(25)) +#define XELPDP_LANE_PCLK_REFCLK_ACK(lane) _PICK(lane, REG_BIT(28), REG_BIT(24)) + #define XELPDP_TBT_CLOCK_REQUEST REG_BIT(19) #define XELPDP_TBT_CLOCK_ACK REG_BIT(18) #define XELPDP_DDI_CLOCK_SELECT_MASK REG_GENMASK(15, 12) @@ -130,4 +132,37 @@ #define XELPDP_SSC_ENABLE_PLLA REG_BIT(1) #define XELPDP_SSC_ENABLE_PLLB REG_BIT(0) -#endif /* __INTEL_CX0_PHY_REGS_H__ */ +/* C10 Vendor Registers */ +#define PHY_C10_VDR_PLL(idx) (0xC00 + (idx)) +#define C10_PLL0_FRACEN REG_BIT8(4) +#define C10_PLL3_MULTIPLIERH_MASK REG_GENMASK8(3, 0) +#define C10_PLL15_TXCLKDIV_MASK REG_GENMASK8(2, 0) +#define PHY_C10_VDR_CMN(idx) (0xC20 + (idx)) +#define C10_CMN0_REF_RANGE REG_FIELD_PREP(REG_GENMASK(4, 0), 1) +#define C10_CMN0_REF_CLK_MPLLB_DIV REG_FIELD_PREP(REG_GENMASK(7, 5), 1) +#define C10_CMN3_TXVBOOST_MASK REG_GENMASK8(7, 5) +#define C10_CMN3_TXVBOOST(val) REG_FIELD_PREP8(C10_CMN3_TXVBOOST_MASK, val) +#define PHY_C10_VDR_TX(idx) (0xC30 + (idx)) +#define C10_TX0_TX_MPLLB_SEL REG_BIT(4) +#define PHY_C10_VDR_CONTROL(idx) (0xC70 + (idx) - 1) +#define C10_VDR_CTRL_MSGBUS_ACCESS REG_BIT8(2) +#define C10_VDR_CTRL_MASTER_LANE REG_BIT8(1) +#define C10_VDR_CTRL_UPDATE_CFG REG_BIT8(0) +#define PHY_C10_VDR_CUSTOM_WIDTH 0xD02 +#define C10_VDR_CUSTOM_WIDTH_MASK REG_GENMASK(1, 0) +#define C10_VDR_CUSTOM_WIDTH_8_10 REG_FIELD_PREP(C10_VDR_CUSTOM_WIDTH_MASK, 0) + +#define CX0_P0_STATE_ACTIVE 0x0 +#define CX0_P2_STATE_READY 0x2 +#define CX0_P2PG_STATE_DISABLE 0x9 +#define CX0_P4PG_STATE_DISABLE 0xC +#define CX0_P2_STATE_RESET 0x2 + +/* PHY_C10_VDR_PLL0 */ +#define PLL_C10_MPLL_SSC_EN REG_BIT8(0) + +/* 
PIPE SPEC Defined Registers */ +#define PHY_CX0_TX_CONTROL(tx, control) (0x400 + ((tx) - 1) * 0x200 + (control)) +#define CONTROL2_DISABLE_SINGLE_TX REG_BIT(6) + +#endif /* __INTEL_CX0_REG_DEFS_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index d0bb3a52ae5c..089854f70cac 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -39,6 +39,7 @@ #include "intel_combo_phy_regs.h" #include "intel_connector.h" #include "intel_crtc.h" +#include "intel_cx0_phy.h" #include "intel_ddi.h" #include "intel_ddi_buf_trans.h" #include "intel_de.h" @@ -3493,6 +3494,21 @@ void intel_ddi_get_clock(struct intel_encoder *encoder, &crtc_state->dpll_hw_state); } +static void mtl_ddi_get_config(struct intel_encoder *encoder, + struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + enum phy phy = intel_port_to_phy(i915, encoder->port); + + drm_WARN_ON(&i915->drm, !intel_is_c10phy(i915, phy)); + + intel_c10mpllb_readout_hw_state(encoder, &crtc_state->cx0pll_state.c10); + intel_c10mpllb_dump_hw_state(i915, &crtc_state->cx0pll_state.c10); + crtc_state->port_clock = intel_c10mpllb_calc_port_clock(encoder, &crtc_state->cx0pll_state.c10); + + intel_ddi_get_config(encoder, crtc_state); +} + static void dg2_ddi_get_config(struct intel_encoder *encoder, struct intel_crtc_state *crtc_state) { @@ -4400,7 +4416,11 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port) encoder->cloneable = 0; encoder->pipe_mask = ~0; - if (IS_DG2(dev_priv)) { + if (DISPLAY_VER(dev_priv) >= 14) { + encoder->enable_clock = intel_cx0pll_enable; + encoder->disable_clock = intel_cx0pll_disable; + encoder->get_config = mtl_ddi_get_config; + } else if (IS_DG2(dev_priv)) { encoder->enable_clock = intel_mpllb_enable; encoder->disable_clock = intel_mpllb_disable; encoder->get_config = dg2_ddi_get_config; diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index 47395b39c8f4..116b4d2526de 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -980,6 +980,18 @@ struct intel_link_m_n { u32 link_n; }; +struct intel_c10mpllb_state { + u32 clock; /* in KHz */ + u8 tx; + u8 cmn; + u8 pll[20]; +}; + +struct intel_cx0pll_state { + struct intel_c10mpllb_state c10; + bool ssc_enabled; +}; + struct intel_crtc_state { /* * uapi (drm) state. This is the software state shown to userspace. 
@@ -1123,6 +1135,7 @@ struct intel_crtc_state { union { struct intel_dpll_hw_state dpll_hw_state; struct intel_mpllb_state mpllb_state; + struct intel_cx0pll_state cx0pll_state; }; /* diff --git a/drivers/gpu/drm/i915/display/intel_dpll.c b/drivers/gpu/drm/i915/display/intel_dpll.c index 4e9c18be7e1f..b497cc409d00 100644 --- a/drivers/gpu/drm/i915/display/intel_dpll.c +++ b/drivers/gpu/drm/i915/display/intel_dpll.c @@ -8,6 +8,7 @@ #include "i915_reg.h" #include "intel_crtc.h" +#include "intel_cx0_phy.h" #include "intel_de.h" #include "intel_display.h" #include "intel_display_types.h" @@ -980,21 +981,38 @@ static int hsw_crtc_get_shared_dpll(struct intel_atomic_state *state, static int dg2_crtc_compute_clock(struct intel_atomic_state *state, struct intel_crtc *crtc) { + struct drm_i915_private *i915 = to_i915(state->base.dev); struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); struct intel_encoder *encoder = intel_get_crtc_new_encoder(state, crtc_state); + enum phy phy = intel_port_to_phy(i915, encoder->port); int ret; ret = intel_mpllb_calc_state(crtc_state, encoder); if (ret) return ret; + /* TODO: Do the readback via intel_compute_shared_dplls() */ + if (intel_is_c10phy(i915, phy)) + crtc_state->port_clock = intel_c10mpllb_calc_port_clock(encoder, &crtc_state->cx0pll_state.c10); + crtc_state->hw.adjusted_mode.crtc_clock = intel_crtc_dotclock(crtc_state); return 0; } +static int mtl_crtc_compute_clock(struct intel_atomic_state *state, + struct intel_crtc *crtc) +{ + struct intel_crtc_state *crtc_state = + intel_atomic_get_new_crtc_state(state, crtc); + struct intel_encoder *encoder = + intel_get_crtc_new_encoder(state, crtc_state); + + return intel_cx0mpllb_calc_state(crtc_state, encoder); +} + static bool ilk_needs_fb_cb_tune(const struct dpll *dpll, int factor) { return dpll->m < factor * dpll->n; @@ -1423,6 +1441,10 @@ static int i8xx_crtc_compute_clock(struct intel_atomic_state *state, return 0; } +static const struct intel_dpll_funcs mtl_dpll_funcs = { + .crtc_compute_clock = mtl_crtc_compute_clock, +}; + static const struct intel_dpll_funcs dg2_dpll_funcs = { .crtc_compute_clock = dg2_crtc_compute_clock, }; @@ -1517,7 +1539,9 @@ int intel_dpll_crtc_get_shared_dpll(struct intel_atomic_state *state, void intel_dpll_init_clock_hook(struct drm_i915_private *dev_priv) { - if (IS_DG2(dev_priv)) + if (DISPLAY_VER(dev_priv) >= 14) + dev_priv->display.funcs.dpll = &mtl_dpll_funcs; + else if (IS_DG2(dev_priv)) dev_priv->display.funcs.dpll = &dg2_dpll_funcs; else if (DISPLAY_VER(dev_priv) >= 9 || HAS_DDI(dev_priv)) dev_priv->display.funcs.dpll = &hsw_dpll_funcs; diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c index 22fc908b7e5d..ed372d227aa7 100644 --- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c +++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c @@ -4104,7 +4104,7 @@ void intel_shared_dpll_init(struct drm_i915_private *dev_priv) mutex_init(&dev_priv->display.dpll.lock); - if (IS_DG2(dev_priv)) + if (DISPLAY_VER(dev_priv) >= 14 || IS_DG2(dev_priv)) /* No shared DPLLs on DG2; port PLLs are part of the PHY */ dpll_mgr = NULL; else if (IS_ALDERLAKE_P(dev_priv)) diff --git a/drivers/gpu/drm/i915/display/intel_modeset_verify.c b/drivers/gpu/drm/i915/display/intel_modeset_verify.c index 842d70f0dfd2..ec504470c2f0 100644 --- a/drivers/gpu/drm/i915/display/intel_modeset_verify.c +++ b/drivers/gpu/drm/i915/display/intel_modeset_verify.c @@ -11,6 +11,7 @@ #include "intel_atomic.h" #include "intel_crtc.h" 
#include "intel_crtc_state_dump.h" +#include "intel_cx0_phy.h" #include "intel_display.h" #include "intel_display_types.h" #include "intel_fdi.h" @@ -236,6 +237,7 @@ void intel_modeset_verify_crtc(struct intel_crtc *crtc, verify_crtc_state(crtc, old_crtc_state, new_crtc_state); intel_shared_dpll_state_verify(crtc, old_crtc_state, new_crtc_state); intel_mpllb_state_verify(state, new_crtc_state); + intel_c10mpllb_state_verify(state, new_crtc_state); } void intel_modeset_verify_disabled(struct drm_i915_private *dev_priv, diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 736a8dcf777b..4bd047be51df 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -1809,6 +1809,11 @@ #define CLKGATE_DIS_PSL_EXT(pipe) \ _MMIO_PIPE(pipe, _CLKGATE_DIS_PSL_EXT_A, _CLKGATE_DIS_PSL_EXT_B) +/* DDI Buffer Control */ +#define _DDI_CLK_VALFREQ_A 0x64030 +#define _DDI_CLK_VALFREQ_B 0x64130 +#define DDI_CLK_VALFREQ(port) _MMIO_PORT(port, _DDI_CLK_VALFREQ_A, _DDI_CLK_VALFREQ_B) + /* * Display engine regs */ diff --git a/drivers/gpu/drm/i915/i915_reg_defs.h b/drivers/gpu/drm/i915/i915_reg_defs.h index db26de6b57bc..f9d7c03e95d6 100644 --- a/drivers/gpu/drm/i915/i915_reg_defs.h +++ b/drivers/gpu/drm/i915/i915_reg_defs.h @@ -22,6 +22,19 @@ BUILD_BUG_ON_ZERO(__is_constexpr(__n) && \ ((__n) < 0 || (__n) > 31)))) +/** + * REG_BIT8() - Prepare a u8 bit value + * @__n: 0-based bit number + * + * Local wrapper for BIT() to force u8, with compile time checks. + * + * @return: Value with bit @__n set. + */ +#define REG_BIT8(__n) \ + ((u8)(BIT(__n) + \ + BUILD_BUG_ON_ZERO(__is_constexpr(__n) && \ + ((__n) < 0 || (__n) > 7)))) + /** * REG_GENMASK() - Prepare a continuous u32 bitmask * @__high: 0-based high bit @@ -52,6 +65,21 @@ __is_constexpr(__low) && \ ((__low) < 0 || (__high) > 63 || (__low) > (__high))))) +/** + * REG_GENMASK8() - Prepare a continuous u8 bitmask + * @__high: 0-based high bit + * @__low: 0-based low bit + * + * Local wrapper for GENMASK() to force u8, with compile time checks. + * + * @return: Continuous bitmask from @__high to @__low, inclusive. + */ +#define REG_GENMASK8(__high, __low) \ + ((u8)(GENMASK(__high, __low) + \ + BUILD_BUG_ON_ZERO(__is_constexpr(__high) && \ + __is_constexpr(__low) && \ + ((__low) < 0 || (__high) > 7 || (__low) > (__high))))) + /* * Local integer constant expression version of is_power_of_2(). */ @@ -74,6 +102,23 @@ BUILD_BUG_ON_ZERO(!IS_POWER_OF_2((__mask) + (1ULL << __bf_shf(__mask)))) + \ BUILD_BUG_ON_ZERO(__builtin_choose_expr(__is_constexpr(__val), (~((__mask) >> __bf_shf(__mask)) & (__val)), 0)))) +/** + * REG_FIELD_PREP8() - Prepare a u8 bitfield value + * @__mask: shifted mask defining the field's length and position + * @__val: value to put in the field + * + * Local copy of FIELD_PREP8() to generate an integer constant expression, force + * u8 and for consistency with REG_FIELD_GET8(), REG_BIT8() and REG_GENMASK8(). + * + * @return: @__val masked and shifted into the field defined by @__mask. 
+ */ +#define REG_FIELD_PREP8(__mask, __val) \ + ((u8)((((typeof(__mask))(__val) << __bf_shf(__mask)) & (__mask)) + \ + BUILD_BUG_ON_ZERO(!__is_constexpr(__mask)) + \ + BUILD_BUG_ON_ZERO((__mask) == 0 || (__mask) > U8_MAX) + \ + BUILD_BUG_ON_ZERO(!IS_POWER_OF_2((__mask) + (1ULL << __bf_shf(__mask)))) + \ + BUILD_BUG_ON_ZERO(__builtin_choose_expr(__is_constexpr(__val), (~((__mask) >> __bf_shf(__mask)) & (__val)), 0)))) + /** * REG_FIELD_GET() - Extract a u32 bitfield value * @__mask: shifted mask defining the field's length and position @@ -155,6 +200,18 @@ */ #define _PICK(__index, ...) (((const u32 []){ __VA_ARGS__ })[__index]) +/** + * REG_FIELD_GET8() - Extract a u8 bitfield value + * @__mask: shifted mask defining the field's length and position + * @__val: value to extract the bitfield value from + * + * Local wrapper for FIELD_GET() to force u8 and for consistency with + * REG_FIELD_PREP(), REG_BIT() and REG_GENMASK(). + * + * @return: Masked and shifted value of the field defined by @__mask in @__val. + */ +#define REG_FIELD_GET8(__mask, __val) ((u8)FIELD_GET(__mask, __val)) + typedef struct { u32 reg; } i915_reg_t; From patchwork Thu Apr 6 13:02:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kahola, Mika" X-Patchwork-Id: 13203357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 94044C761A6 for ; Thu, 6 Apr 2023 13:32:57 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2C66E10EBB9; Thu, 6 Apr 2023 13:32:51 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 848C910EBB9 for ; Thu, 6 Apr 2023 13:32:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680787967; x=1712323967; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=A/+bGDuaJOU8fVbfzRUFdbc/nikFsqMkHhH3VTD3VMg=; b=LcJ86HeJGQUP3I9mQEVnX0wRgZe00+akLttUt/kkFk48O7uHEhV96VJu lnxmsuSl9VIr3iP+toM92i5mHcNho42xrx65pi53C9uXrySlM0hsM+LvW a8lRLMfILHoH8DAIvFx7DkFNhtSpR3ORqrRrpbTOM9glNaAYLfDa4Wesm 91aVWEFFVo20VckxamymP/GpEgazHCPbTgSb2Eikj9UWxmHae803DrrAn fDYItfX1c86Yx14n8ztfa+fNhUH6lX5mUGCQqqpT8XDt+oJR8CcNoMZwX hz0gPTwwFkBELSxZmr1awke2h0xFQtE7SPgJVEVgKeZNGNSZX2wAjEbyG A==; X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="345352332" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="345352332" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2023 06:07:40 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="751641707" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="751641707" Received: from sorvi2.fi.intel.com ([10.237.72.194]) by fmsmga008.fm.intel.com with ESMTP; 06 Apr 2023 06:07:37 -0700 From: Mika Kahola To: intel-gfx@lists.freedesktop.org Date: Thu, 6 Apr 2023 16:02:18 +0300 Message-Id: <20230406130221.2998457-6-mika.kahola@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com> References: 
<20230406130221.2998457-1-mika.kahola@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v3 5/8] drm/i915/mtl: Add vswing programming for C10 phys X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" C10 phys uses direct mapping internally for voltage and pre-emphasis levels. Program the levels directly to the fields in the VDR Registers. Bspec: 65449 v2: From table "C10: Tx EQ settings for DP 1.4x" it shows level 1 and preemphasis 1 instead of two times of level 1 preemphasis 0. Fix this in the driver code as well. v3: VSwing update (Clint) Cc: Imre Deak Cc: Uma Shankar Signed-off-by: Clint Taylor Signed-off-by: Radhakrishna Sripada Signed-off-by: Mika Kahola Reviewed-by: Imre Deak --- drivers/gpu/drm/i915/display/intel_cx0_phy.c | 61 +++++++++++++++++-- drivers/gpu/drm/i915/display/intel_cx0_phy.h | 3 + .../gpu/drm/i915/display/intel_cx0_phy_regs.h | 14 +++++ drivers/gpu/drm/i915/display/intel_ddi.c | 4 +- .../drm/i915/display/intel_ddi_buf_trans.c | 31 +++++++++- .../drm/i915/display/intel_ddi_buf_trans.h | 6 ++ .../i915/display/intel_display_power_map.c | 1 + 7 files changed, 114 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.c b/drivers/gpu/drm/i915/display/intel_cx0_phy.c index f3e13edd27ba..81910d55bb63 100644 --- a/drivers/gpu/drm/i915/display/intel_cx0_phy.c +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.c @@ -6,6 +6,8 @@ #include "i915_reg.h" #include "intel_cx0_phy.h" #include "intel_cx0_phy_regs.h" +#include "intel_ddi.h" +#include "intel_ddi_buf_trans.h" #include "intel_de.h" #include "intel_display_types.h" #include "intel_dp.h" @@ -292,6 +294,57 @@ static void intel_cx0_rmw(struct drm_i915_private *i915, enum port port, __intel_cx0_rmw(i915, port, lane, addr, clear, set, committed); } +void intel_cx0_phy_set_signal_levels(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + const struct intel_ddi_buf_trans *trans; + intel_wakeref_t wakeref; + int n_entries, ln; + + wakeref = intel_cx0_phy_transaction_begin(encoder); + + trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries); + if (drm_WARN_ON_ONCE(&i915->drm, !trans)) + return; + + intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_MSGBUS_ACCESS, + MB_WRITE_COMMITTED); + + for (ln = 0; ln < 4; ln++) { + int level = intel_ddi_level(encoder, crtc_state, ln); + int lane, tx; + + lane = ln / 2; + tx = ln % 2 + 1; + + intel_cx0_rmw(i915, encoder->port, lane + 1, PHY_CX0_VDR_OVRD_CONTROL(lane, tx, 0), + C10_PHY_OVRD_LEVEL_MASK, + C10_PHY_OVRD_LEVEL(trans->entries[level].snps.pre_cursor), + MB_WRITE_COMMITTED); + intel_cx0_rmw(i915, encoder->port, lane + 1, PHY_CX0_VDR_OVRD_CONTROL(lane, tx, 1), + C10_PHY_OVRD_LEVEL_MASK, + C10_PHY_OVRD_LEVEL(trans->entries[level].snps.vswing), + MB_WRITE_COMMITTED); + intel_cx0_rmw(i915, encoder->port, lane + 1, PHY_CX0_VDR_OVRD_CONTROL(lane, tx, 2), + C10_PHY_OVRD_LEVEL_MASK, + C10_PHY_OVRD_LEVEL(trans->entries[level].snps.post_cursor), + MB_WRITE_COMMITTED); + } + + /* Write Override enables in 0xD71 */ + intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_OVRD, + 0, PHY_C10_VDR_OVRD_TX1 | PHY_C10_VDR_OVRD_TX2, + MB_WRITE_COMMITTED); + + 
intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), + 0, C10_VDR_CTRL_UPDATE_CFG, + MB_WRITE_COMMITTED); + + intel_cx0_phy_transaction_end(encoder, wakeref); +} + /* * Basic DP link rates with 38.4 MHz reference clock. * Note: The tables below are with SSC. In non-ssc @@ -765,10 +818,9 @@ static void intel_program_port_clock_ctl(struct intel_encoder *encoder, val |= crtc_state->cx0pll_state.ssc_enabled ? XELPDP_SSC_ENABLE_PLLB : 0; intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), - XELPDP_LANE1_PHY_CLOCK_SELECT | - XELPDP_FORWARD_CLOCK_UNGATE | + XELPDP_LANE1_PHY_CLOCK_SELECT | XELPDP_FORWARD_CLOCK_UNGATE | XELPDP_DDI_CLOCK_SELECT_MASK | - XELPDP_SSC_ENABLE_PLLB, val); + XELPDP_SSC_ENABLE_PLLA | XELPDP_SSC_ENABLE_PLLB, val); } static u32 intel_cx0_get_powerdown_update(u8 lane_mask) @@ -1143,7 +1195,8 @@ static void intel_c10pll_disable(struct intel_encoder *encoder) /* 7. Program PORT_CLOCK_CTL register to disable and gate clocks. */ intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), - XELPDP_DDI_CLOCK_SELECT_MASK | + XELPDP_DDI_CLOCK_SELECT_MASK, 0); + intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), XELPDP_FORWARD_CLOCK_UNGATE, 0); } diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.h b/drivers/gpu/drm/i915/display/intel_cx0_phy.h index 30b1b11b2176..cc4f174947ad 100644 --- a/drivers/gpu/drm/i915/display/intel_cx0_phy.h +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.h @@ -40,5 +40,8 @@ int intel_c10mpllb_calc_port_clock(struct intel_encoder *encoder, const struct intel_c10mpllb_state *pll_state); void intel_c10mpllb_state_verify(struct intel_atomic_state *state, struct intel_crtc_state *new_crtc_state); +int intel_c10_phy_check_hdmi_link_rate(int clock); +void intel_cx0_phy_set_signal_levels(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state); #endif /* __INTEL_CX0_PHY_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h index 16061a6e52f6..a72f79ea5e6c 100644 --- a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h @@ -158,6 +158,14 @@ #define CX0_P4PG_STATE_DISABLE 0xC #define CX0_P2_STATE_RESET 0x2 +#define PHY_C10_VDR_OVRD 0xD71 +#define PHY_C10_VDR_OVRD_TX1 REG_BIT8(0) +#define PHY_C10_VDR_OVRD_TX2 REG_BIT8(2) +#define PHY_C10_VDR_PRE_OVRD_TX1 0xD80 +#define C10_PHY_OVRD_LEVEL_MASK REG_GENMASK8(5, 0) +#define C10_PHY_OVRD_LEVEL(val) REG_FIELD_PREP8(C10_PHY_OVRD_LEVEL_MASK, val) +#define PHY_CX0_VDR_OVRD_CONTROL(lane, tx, control) (PHY_C10_VDR_PRE_OVRD_TX1 + ((lane) ^ ((tx) - 1)) * 0x10 + (control)) + /* PHY_C10_VDR_PLL0 */ #define PLL_C10_MPLL_SSC_EN REG_BIT8(0) @@ -165,4 +173,10 @@ #define PHY_CX0_TX_CONTROL(tx, control) (0x400 + ((tx) - 1) * 0x200 + (control)) #define CONTROL2_DISABLE_SINGLE_TX REG_BIT(6) +/* C10 Phy VSWING Masks */ +#define C10_PHY_VSWING_LEVEL_MASK REG_GENMASK8(2, 0) +#define C10_PHY_VSWING_LEVEL(val) REG_FIELD_PREP8(C10_PHY_VSWING_LEVEL_MASK, val) +#define C10_PHY_VSWING_PREEMPH_MASK REG_GENMASK8(1, 0) +#define C10_PHY_VSWING_PREEMPH(val) REG_FIELD_PREP8(C10_PHY_VSWING_PREEMPH_MASK, val) + #endif /* __INTEL_CX0_REG_DEFS_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index 089854f70cac..e97d7627d9d1 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -4480,7 +4480,9 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port) 
encoder->get_config = hsw_ddi_get_config; } - if (IS_DG2(dev_priv)) { + if (DISPLAY_VER(dev_priv) >= 14) { + encoder->set_signal_levels = intel_cx0_phy_set_signal_levels; + } else if (IS_DG2(dev_priv)) { encoder->set_signal_levels = intel_snps_phy_set_signal_levels; } else if (DISPLAY_VER(dev_priv) >= 12) { if (intel_phy_is_combo(dev_priv, phy)) diff --git a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c index 006a2e979000..cd4becbae098 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c +++ b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c @@ -1035,6 +1035,25 @@ static const struct intel_ddi_buf_trans dg2_snps_trans_uhbr = { .num_entries = ARRAY_SIZE(_dg2_snps_trans_uhbr), }; +static const union intel_ddi_buf_trans_entry _mtl_c10_trans_dp14[] = { + { .snps = { 26, 0, 0 } }, /* preset 0 */ + { .snps = { 33, 0, 6 } }, /* preset 1 */ + { .snps = { 38, 0, 11 } }, /* preset 2 */ + { .snps = { 43, 0, 19 } }, /* preset 3 */ + { .snps = { 39, 0, 0 } }, /* preset 4 */ + { .snps = { 45, 0, 7 } }, /* preset 5 */ + { .snps = { 46, 0, 13 } }, /* preset 6 */ + { .snps = { 46, 0, 0 } }, /* preset 7 */ + { .snps = { 55, 0, 7 } }, /* preset 8 */ + { .snps = { 62, 0, 0 } }, /* preset 9 */ +}; + +static const struct intel_ddi_buf_trans mtl_cx0c10_trans = { + .entries = _mtl_c10_trans_dp14, + .num_entries = ARRAY_SIZE(_mtl_c10_trans_dp14), + .hdmi_default_entry = ARRAY_SIZE(_mtl_c10_trans_dp14) - 1, +}; + bool is_hobl_buf_trans(const struct intel_ddi_buf_trans *table) { return table == &tgl_combo_phy_trans_edp_hbr2_hobl; @@ -1606,12 +1625,22 @@ dg2_get_snps_buf_trans(struct intel_encoder *encoder, return intel_get_buf_trans(&dg2_snps_trans, n_entries); } +static const struct intel_ddi_buf_trans * +mtl_get_cx0_buf_trans(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state, + int *n_entries) +{ + return intel_get_buf_trans(&mtl_cx0c10_trans, n_entries); +} + void intel_ddi_buf_trans_init(struct intel_encoder *encoder) { struct drm_i915_private *i915 = to_i915(encoder->base.dev); enum phy phy = intel_port_to_phy(i915, encoder->port); - if (IS_DG2(i915)) { + if (DISPLAY_VER(i915) >= 14) { + encoder->get_buf_trans = mtl_get_cx0_buf_trans; + } else if (IS_DG2(i915)) { encoder->get_buf_trans = dg2_get_snps_buf_trans; } else if (IS_ALDERLAKE_P(i915)) { if (intel_phy_is_combo(i915, phy)) diff --git a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.h b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.h index 2133984a572b..e4a857b9829d 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.h +++ b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.h @@ -51,6 +51,11 @@ struct dg2_snps_phy_buf_trans { u8 post_cursor; }; +struct direct_phy_buf_trans { + u8 level; + u8 preemph; +}; + union intel_ddi_buf_trans_entry { struct hsw_ddi_buf_trans hsw; struct bxt_ddi_buf_trans bxt; @@ -58,6 +63,7 @@ union intel_ddi_buf_trans_entry { struct icl_mg_phy_ddi_buf_trans mg; struct tgl_dkl_phy_ddi_buf_trans dkl; struct dg2_snps_phy_buf_trans snps; + struct direct_phy_buf_trans direct; }; struct intel_ddi_buf_trans { diff --git a/drivers/gpu/drm/i915/display/intel_display_power_map.c b/drivers/gpu/drm/i915/display/intel_display_power_map.c index 6645eb1911d8..5ec2b9a109ae 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power_map.c +++ b/drivers/gpu/drm/i915/display/intel_display_power_map.c @@ -1427,6 +1427,7 @@ I915_DECL_PW_DOMAINS(xelpdp_pwdoms_dc_off, XELPDP_PW_2_POWER_DOMAINS, POWER_DOMAIN_AUDIO_MMIO, POWER_DOMAIN_MODESET, + 
POWER_DOMAIN_DC_OFF, POWER_DOMAIN_AUX_A, POWER_DOMAIN_AUX_B, POWER_DOMAIN_DC_OFF, From patchwork Thu Apr 6 13:02:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kahola, Mika" X-Patchwork-Id: 13203361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6C83FC77B71 for ; Thu, 6 Apr 2023 13:33:01 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1EA5B10EBC6; Thu, 6 Apr 2023 13:32:53 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id BE6E310EBBD for ; Thu, 6 Apr 2023 13:32:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680787967; x=1712323967; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Oy+aQMZSxIfFIayYQmbs5uy9XWrDWJpGy/Ld+mP7wZw=; b=enzfnEN9BZ/go0rXAdbvwkQmLYVHWewrY4jLb6/3tRo84DiwkpnFz2Lm Iqy4taWBRz9bLamWRN8+puakOwAue1e8YjS2ohObccprgTJnXRheV0jm1 hPyTzODgELj5YRFY//rTdb7xA2F3jk7379qReUNToG36jmAKKqK/6e+2/ YIaCaP9TID+Dy+A+Pa33jcxehsBHR5phyoKeee1tOWy5mVdBvzXCyY+QY IxWxJ2ocGUzl3zuBSpfxXdY75DH/rY+rcZw8Hh87cQuxfzVhqulsQSKU3 InF0AhxJQnPNW9LSaThgB7jX2F2ZWWLEFJ2dH8zkeGlCGj2U9dzuN8vWw w==; X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="345352336" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="345352336" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2023 06:07:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="751641715" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="751641715" Received: from sorvi2.fi.intel.com ([10.237.72.194]) by fmsmga008.fm.intel.com with ESMTP; 06 Apr 2023 06:07:40 -0700 From: Mika Kahola To: intel-gfx@lists.freedesktop.org Date: Thu, 6 Apr 2023 16:02:19 +0300 Message-Id: <20230406130221.2998457-7-mika.kahola@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com> References: <20230406130221.2998457-1-mika.kahola@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v3 6/8] drm/i915/mtl: MTL PICA hotplug detection X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhumitha Tolakanahalli Pradeep Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" PICA is used for DP alt mode and TBT modes. Hotplug interruption is routed from PICA chip to south display engine and from there to north display engine. This patch adds functionality to enable hotplug detection for all Type-C ports (4 ports available). Differently from HPD in south display, PICA provides a dedicated HPD control register for each supported port, so we loop over ports ourselves instead of using intel_hpd_hotplug_enables() or intel_get_hpd_pins(). 
BSpec: 49305, 55726, 65107, 65300 Signed-off-by: Madhumitha Tolakanahalli Pradeep Signed-off-by: Gustavo Sousa Signed-off-by: Imre Deak Signed-off-by: Mika Kahola Reviewed-by: Imre Deak --- drivers/gpu/drm/i915/i915_irq.c | 249 +++++++++++++++++++++++++++++++- drivers/gpu/drm/i915/i915_reg.h | 31 +++- 2 files changed, 273 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index d24bdea65a3d..54ca5b65493f 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -162,6 +162,13 @@ static const u32 hpd_gen11[HPD_NUM_PINS] = { [HPD_PORT_TC6] = GEN11_TC_HOTPLUG(HPD_PORT_TC6) | GEN11_TBT_HOTPLUG(HPD_PORT_TC6), }; +static const u32 hpd_xelpdp[HPD_NUM_PINS] = { + [HPD_PORT_TC1] = XELPDP_TBT_HOTPLUG(HPD_PORT_TC1) | XELPDP_DP_ALT_HOTPLUG(HPD_PORT_TC1), + [HPD_PORT_TC2] = XELPDP_TBT_HOTPLUG(HPD_PORT_TC2) | XELPDP_DP_ALT_HOTPLUG(HPD_PORT_TC2), + [HPD_PORT_TC3] = XELPDP_TBT_HOTPLUG(HPD_PORT_TC3) | XELPDP_DP_ALT_HOTPLUG(HPD_PORT_TC3), + [HPD_PORT_TC4] = XELPDP_TBT_HOTPLUG(HPD_PORT_TC4) | XELPDP_DP_ALT_HOTPLUG(HPD_PORT_TC4), +}; + static const u32 hpd_icp[HPD_NUM_PINS] = { [HPD_PORT_A] = SDE_DDI_HOTPLUG_ICP(HPD_PORT_A), [HPD_PORT_B] = SDE_DDI_HOTPLUG_ICP(HPD_PORT_B), @@ -182,6 +189,16 @@ static const u32 hpd_sde_dg1[HPD_NUM_PINS] = { [HPD_PORT_TC1] = SDE_TC_HOTPLUG_DG2(HPD_PORT_TC1), }; +static const u32 hpd_mtp[HPD_NUM_PINS] = { + [HPD_PORT_A] = SDE_DDI_HOTPLUG_ICP(HPD_PORT_A), + [HPD_PORT_B] = SDE_DDI_HOTPLUG_ICP(HPD_PORT_B), + [HPD_PORT_TC1] = SDE_TC_HOTPLUG_ICP(HPD_PORT_TC1), + [HPD_PORT_TC2] = SDE_TC_HOTPLUG_ICP(HPD_PORT_TC2), + [HPD_PORT_TC3] = SDE_TC_HOTPLUG_ICP(HPD_PORT_TC3), + [HPD_PORT_TC4] = SDE_TC_HOTPLUG_ICP(HPD_PORT_TC4), +}; + + static void intel_hpd_init_pins(struct drm_i915_private *dev_priv) { struct intel_hotplug *hpd = &dev_priv->display.hotplug; @@ -195,7 +212,9 @@ static void intel_hpd_init_pins(struct drm_i915_private *dev_priv) return; } - if (DISPLAY_VER(dev_priv) >= 11) + if (DISPLAY_VER(dev_priv) >= 14) + hpd->hpd = hpd_xelpdp; + else if (DISPLAY_VER(dev_priv) >= 11) hpd->hpd = hpd_gen11; else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) hpd->hpd = hpd_bxt; @@ -214,6 +233,8 @@ static void intel_hpd_init_pins(struct drm_i915_private *dev_priv) if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1) hpd->pch_hpd = hpd_sde_dg1; + else if (INTEL_PCH_TYPE(dev_priv) >= PCH_MTP) + hpd->pch_hpd = hpd_mtp; else if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) hpd->pch_hpd = hpd_icp; else if (HAS_PCH_CNP(dev_priv) || HAS_PCH_SPT(dev_priv)) @@ -1559,6 +1580,44 @@ static void cpt_irq_handler(struct drm_i915_private *dev_priv, u32 pch_iir) cpt_serr_int_handler(dev_priv); } +static void xelpdp_pica_irq_handler(struct drm_i915_private *dev_priv, u32 iir) +{ + enum hpd_pin pin; + u32 hotplug_trigger = iir & (XELPDP_DP_ALT_HOTPLUG_MASK | XELPDP_TBT_HOTPLUG_MASK); + u32 trigger_aux = iir & XELPDP_AUX_TC_MASK; + u32 pin_mask = 0, long_mask = 0; + + for (pin = HPD_PORT_TC1; pin <= HPD_PORT_TC4; pin++) { + u32 val; + + if (!(dev_priv->display.hotplug.hpd[pin] & hotplug_trigger)) + continue; + + pin_mask |= BIT(pin); + + val = intel_uncore_read(&dev_priv->uncore, XELPDP_PORT_HOTPLUG_CTL(pin)); + intel_uncore_write(&dev_priv->uncore, XELPDP_PORT_HOTPLUG_CTL(pin), val); + + if (val & (XELPDP_DP_ALT_HPD_LONG_DETECT | XELPDP_TBT_HPD_LONG_DETECT)) + long_mask |= BIT(pin); + } + + if (pin_mask) { + drm_dbg(&dev_priv->drm, + "pica hotplug event received, stat 0x%08x, pins 0x%08x, long 0x%08x\n", + hotplug_trigger, pin_mask, long_mask); 
+ + intel_hpd_irq_handler(dev_priv, pin_mask, long_mask); + } + + if (trigger_aux) + dp_aux_irq_handler(dev_priv); + + if (!pin_mask && !trigger_aux) + drm_err(&dev_priv->drm, + "Unexpected DE HPD/AUX interrupt 0x%08x\n", iir); +} + static void icp_irq_handler(struct drm_i915_private *dev_priv, u32 pch_iir) { u32 ddi_hotplug_trigger = pch_iir & SDE_DDI_HOTPLUG_MASK_ICP; @@ -2029,6 +2088,34 @@ u32 gen8_de_pipe_underrun_mask(struct drm_i915_private *dev_priv) return mask; } +static void gen8_read_and_ack_pch_irqs(struct drm_i915_private *i915, u32 *pch_iir, u32 *pica_iir) +{ + u32 pica_ier = 0; + + *pica_iir = 0; + *pch_iir = intel_de_read(i915, SDEIIR); + if (!*pch_iir) + return; + + /** + * PICA IER must be disabled/re-enabled around clearing PICA IIR and + * SDEIIR, to avoid losing PICA IRQs and to ensure that such IRQs set + * their flags both in the PICA and SDE IIR. + */ + if (*pch_iir & SDE_PICAINTERRUPT) { + drm_WARN_ON(&i915->drm, INTEL_PCH_TYPE(i915) < PCH_MTP); + + pica_ier = intel_de_rmw(i915, PICAINTERRUPT_IER, ~0, 0); + *pica_iir = intel_de_read(i915, PICAINTERRUPT_IIR); + intel_de_write(i915, PICAINTERRUPT_IIR, *pica_iir); + } + + intel_de_write(i915, SDEIIR, *pch_iir); + + if (pica_ier) + intel_de_write(i915, PICAINTERRUPT_IER, pica_ier); +} + static irqreturn_t gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) { @@ -2153,16 +2240,20 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) if (HAS_PCH_SPLIT(dev_priv) && !HAS_PCH_NOP(dev_priv) && master_ctl & GEN8_DE_PCH_IRQ) { + u32 pica_iir; + /* * FIXME(BDW): Assume for now that the new interrupt handling * scheme also closed the SDE interrupt handling race we've seen * on older pch-split platforms. But this needs testing. */ - iir = intel_uncore_read(&dev_priv->uncore, SDEIIR); + gen8_read_and_ack_pch_irqs(dev_priv, &iir, &pica_iir); if (iir) { - intel_uncore_write(&dev_priv->uncore, SDEIIR, iir); ret = IRQ_HANDLED; + if (pica_iir) + xelpdp_pica_irq_handler(dev_priv, pica_iir); + if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) icp_irq_handler(dev_priv, iir); else if (INTEL_PCH_TYPE(dev_priv) >= PCH_SPT) @@ -2740,7 +2831,11 @@ static void gen11_display_irq_reset(struct drm_i915_private *dev_priv) GEN3_IRQ_RESET(uncore, GEN8_DE_PORT_); GEN3_IRQ_RESET(uncore, GEN8_DE_MISC_); - GEN3_IRQ_RESET(uncore, GEN11_DE_HPD_); + + if (DISPLAY_VER(dev_priv) >= 14) + GEN3_IRQ_RESET(uncore, PICAINTERRUPT_); + else + GEN3_IRQ_RESET(uncore, GEN11_DE_HPD_); if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) GEN3_IRQ_RESET(uncore, SDE); @@ -3031,6 +3126,127 @@ static void gen11_hpd_irq_setup(struct drm_i915_private *dev_priv) icp_hpd_irq_setup(dev_priv); } +static u32 mtp_ddi_hotplug_enables(struct intel_encoder *encoder) +{ + switch (encoder->hpd_pin) { + case HPD_PORT_A: + case HPD_PORT_B: + return SHOTPLUG_CTL_DDI_HPD_ENABLE(encoder->hpd_pin); + default: + return 0; + } +} + +static u32 mtp_tc_hotplug_enables(struct intel_encoder *encoder) +{ + switch (encoder->hpd_pin) { + case HPD_PORT_TC1: + case HPD_PORT_TC2: + case HPD_PORT_TC3: + case HPD_PORT_TC4: + return ICP_TC_HPD_ENABLE(encoder->hpd_pin); + default: + return 0; + } +} + +static void mtp_ddi_hpd_detection_setup(struct drm_i915_private *dev_priv) +{ + u32 hotplug; + + hotplug = intel_uncore_read(&dev_priv->uncore, SHOTPLUG_CTL_DDI); + hotplug &= ~(SHOTPLUG_CTL_DDI_HPD_ENABLE(HPD_PORT_A) | + SHOTPLUG_CTL_DDI_HPD_ENABLE(HPD_PORT_B)); + hotplug |= intel_hpd_hotplug_enables(dev_priv, mtp_ddi_hotplug_enables); + intel_uncore_write(&dev_priv->uncore, SHOTPLUG_CTL_DDI, 
hotplug); +} + +static void mtp_tc_hpd_detection_setup(struct drm_i915_private *dev_priv) +{ + u32 hotplug; + + hotplug = intel_uncore_read(&dev_priv->uncore, SHOTPLUG_CTL_TC); + hotplug &= ~(ICP_TC_HPD_ENABLE(HPD_PORT_TC1) | + ICP_TC_HPD_ENABLE(HPD_PORT_TC2) | + ICP_TC_HPD_ENABLE(HPD_PORT_TC3) | + ICP_TC_HPD_ENABLE(HPD_PORT_TC4)); + hotplug |= intel_hpd_hotplug_enables(dev_priv, mtp_tc_hotplug_enables); + intel_uncore_write(&dev_priv->uncore, SHOTPLUG_CTL_TC, hotplug); +} + +static void mtp_hpd_invert(struct drm_i915_private *i915) +{ + u32 val = (INVERT_DDIA_HPD | + INVERT_DDIB_HPD | + INVERT_DDIC_HPD | + INVERT_TC1_HPD | + INVERT_TC2_HPD | + INVERT_TC3_HPD | + INVERT_TC4_HPD | + INVERT_DDID_HPD_MTP | + INVERT_DDIE_HPD); + intel_uncore_rmw(&i915->uncore, SOUTH_CHICKEN1, 0, val); +} + +static void mtp_hpd_irq_setup(struct drm_i915_private *dev_priv) +{ + u32 hotplug_irqs, enabled_irqs; + + enabled_irqs = intel_hpd_enabled_irqs(dev_priv, dev_priv->display.hotplug.pch_hpd); + hotplug_irqs = intel_hpd_hotplug_irqs(dev_priv, dev_priv->display.hotplug.pch_hpd); + + intel_uncore_write(&dev_priv->uncore, SHPD_FILTER_CNT, SHPD_FILTER_CNT_500_ADJ); + + mtp_hpd_invert(dev_priv); + ibx_display_interrupt_update(dev_priv, hotplug_irqs, enabled_irqs); + + mtp_ddi_hpd_detection_setup(dev_priv); + mtp_tc_hpd_detection_setup(dev_priv); +} + +static void xelpdp_pica_hpd_detection_setup(struct drm_i915_private *dev_priv) +{ + struct intel_encoder *encoder; + enum hpd_pin pin; + u32 available_pins = 0; + + BUILD_BUG_ON(BITS_PER_TYPE(available_pins) < HPD_NUM_PINS); + + for_each_intel_encoder(&dev_priv->drm, encoder) + available_pins |= BIT(encoder->hpd_pin); + + for (pin = HPD_PORT_TC1; pin <= HPD_PORT_TC4; pin++) { + u32 mask = (available_pins & BIT(pin)) ? ~(u32)0 : 0; + u32 val = XELPDP_TBT_HOTPLUG_ENABLE | + XELPDP_DP_ALT_HOTPLUG_ENABLE; + + intel_uncore_rmw(&dev_priv->uncore, + XELPDP_PORT_HOTPLUG_CTL(pin), + ~mask & val, mask & val); + } +} + +static void xelpdp_hpd_irq_setup(struct drm_i915_private *dev_priv) +{ + u32 hotplug_irqs, enabled_irqs; + u32 val; + + enabled_irqs = intel_hpd_enabled_irqs(dev_priv, dev_priv->display.hotplug.hpd); + hotplug_irqs = intel_hpd_hotplug_irqs(dev_priv, dev_priv->display.hotplug.hpd); + + val = intel_uncore_read(&dev_priv->uncore, PICAINTERRUPT_IMR); + val &= ~hotplug_irqs; + val |= ~enabled_irqs & hotplug_irqs; + intel_uncore_write(&dev_priv->uncore, PICAINTERRUPT_IMR, val); + intel_uncore_posting_read(&dev_priv->uncore, PICAINTERRUPT_IMR); + + xelpdp_pica_hpd_detection_setup(dev_priv); + + if (INTEL_PCH_TYPE(dev_priv) >= PCH_MTP) { + mtp_hpd_irq_setup(dev_priv); + } +} + static u32 spt_hotplug_enables(struct intel_encoder *encoder) { switch (encoder->hpd_pin) { @@ -3363,7 +3579,7 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) GEN3_IRQ_INIT(uncore, GEN8_DE_PORT_, ~de_port_masked, de_port_enables); GEN3_IRQ_INIT(uncore, GEN8_DE_MISC_, ~de_misc_masked, de_misc_masked); - if (DISPLAY_VER(dev_priv) >= 11) { + if (IS_DISPLAY_VER(dev_priv, 11, 13)) { u32 de_hpd_masked = 0; u32 de_hpd_enables = GEN11_DE_TC_HOTPLUG_MASK | GEN11_DE_TBT_HOTPLUG_MASK; @@ -3373,6 +3589,20 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) } } +static void mtp_irq_postinstall(struct drm_i915_private *dev_priv) +{ + struct intel_uncore *uncore = &dev_priv->uncore; + u32 sde_mask = SDE_GMBUS_ICP | SDE_PICAINTERRUPT; + u32 de_hpd_mask = XELPDP_AUX_TC_MASK; + u32 de_hpd_enables = de_hpd_mask | XELPDP_DP_ALT_HOTPLUG_MASK | + XELPDP_TBT_HOTPLUG_MASK; + 
+ GEN3_IRQ_INIT(uncore, PICAINTERRUPT_, ~de_hpd_mask, + de_hpd_enables); + + GEN3_IRQ_INIT(uncore, SDE, ~sde_mask, 0xffffffff); +} + static void icp_irq_postinstall(struct drm_i915_private *dev_priv) { struct intel_uncore *uncore = &dev_priv->uncore; @@ -3434,7 +3664,11 @@ static void dg1_irq_postinstall(struct drm_i915_private *dev_priv) GEN3_IRQ_INIT(uncore, GEN11_GU_MISC_, ~gu_misc_masked, gu_misc_masked); if (HAS_DISPLAY(dev_priv)) { - icp_irq_postinstall(dev_priv); + if (DISPLAY_VER(dev_priv) >= 14) + mtp_irq_postinstall(dev_priv); + else + icp_irq_postinstall(dev_priv); + gen8_de_irq_postinstall(dev_priv); intel_uncore_write(&dev_priv->uncore, GEN11_DISPLAY_INT_CTL, GEN11_DISPLAY_IRQ_ENABLE); @@ -3920,6 +4154,7 @@ static const struct intel_hotplug_funcs platform##_hpd_funcs = { \ } HPD_FUNCS(i915); +HPD_FUNCS(xelpdp); HPD_FUNCS(dg1); HPD_FUNCS(gen11); HPD_FUNCS(bxt); @@ -3980,6 +4215,8 @@ void intel_irq_init(struct drm_i915_private *dev_priv) dev_priv->display.funcs.hotplug = &icp_hpd_funcs; else if (HAS_PCH_DG1(dev_priv)) dev_priv->display.funcs.hotplug = &dg1_hpd_funcs; + else if (DISPLAY_VER(dev_priv) >= 14) + dev_priv->display.funcs.hotplug = &xelpdp_hpd_funcs; else if (DISPLAY_VER(dev_priv) >= 11) dev_priv->display.funcs.hotplug = &gen11_hpd_funcs; else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 4bd047be51df..8d49676148f2 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -4487,6 +4487,28 @@ #define GEN11_HOTPLUG_CTL_SHORT_DETECT(hpd_pin) (1 << (_HPD_PIN_TC(hpd_pin) * 4)) #define GEN11_HOTPLUG_CTL_NO_DETECT(hpd_pin) (0 << (_HPD_PIN_TC(hpd_pin) * 4)) +#define PICAINTERRUPT_ISR _MMIO(0x16FE50) +#define PICAINTERRUPT_IMR _MMIO(0x16FE54) +#define PICAINTERRUPT_IIR _MMIO(0x16FE58) +#define PICAINTERRUPT_IER _MMIO(0x16FE5C) + +#define XELPDP_DP_ALT_HOTPLUG(hpd_pin) REG_BIT(16 + _HPD_PIN_TC(hpd_pin)) +#define XELPDP_DP_ALT_HOTPLUG_MASK REG_GENMASK(19, 16) + +#define XELPDP_AUX_TC(hpd_pin) REG_BIT(8 + _HPD_PIN_TC(hpd_pin)) +#define XELPDP_AUX_TC_MASK REG_GENMASK(11, 8) + +#define XELPDP_TBT_HOTPLUG(hpd_pin) REG_BIT(_HPD_PIN_TC(hpd_pin)) +#define XELPDP_TBT_HOTPLUG_MASK REG_GENMASK(3, 0) + +#define XELPDP_PORT_HOTPLUG_CTL(hpd_pin) _MMIO(0x16F270 + (_HPD_PIN_TC(hpd_pin) * 0x200)) +#define XELPDP_TBT_HOTPLUG_ENABLE REG_BIT(6) +#define XELPDP_TBT_HPD_LONG_DETECT REG_BIT(5) +#define XELPDP_TBT_HPD_SHORT_DETECT REG_BIT(4) +#define XELPDP_DP_ALT_HOTPLUG_ENABLE REG_BIT(2) +#define XELPDP_DP_ALT_HPD_LONG_DETECT REG_BIT(1) +#define XELPDP_DP_ALT_HPD_SHORT_DETECT REG_BIT(0) + #define ILK_DISPLAY_CHICKEN2 _MMIO(0x42004) /* Required on all Ironlake and Sandybridge according to the B-Spec. 
*/ #define ILK_ELPIN_409_SELECT (1 << 25) @@ -4773,7 +4795,8 @@ SDE_FDI_RXB_CPT | \ SDE_FDI_RXA_CPT) -/* south display engine interrupt: ICP/TGP */ +/* south display engine interrupt: ICP/TGP/MTP */ +#define SDE_PICAINTERRUPT REG_BIT(31) #define SDE_GMBUS_ICP (1 << 23) #define SDE_TC_HOTPLUG_ICP(hpd_pin) REG_BIT(24 + _HPD_PIN_TC(hpd_pin)) #define SDE_TC_HOTPLUG_DG2(hpd_pin) REG_BIT(25 + _HPD_PIN_TC(hpd_pin)) /* sigh */ @@ -5127,6 +5150,12 @@ #define SOUTH_CHICKEN1 _MMIO(0xc2000) #define FDIA_PHASE_SYNC_SHIFT_OVR 19 #define FDIA_PHASE_SYNC_SHIFT_EN 18 +#define INVERT_DDIE_HPD REG_BIT(28) +#define INVERT_DDID_HPD_MTP REG_BIT(27) +#define INVERT_TC4_HPD REG_BIT(26) +#define INVERT_TC3_HPD REG_BIT(25) +#define INVERT_TC2_HPD REG_BIT(24) +#define INVERT_TC1_HPD REG_BIT(23) #define INVERT_DDID_HPD (1 << 18) #define INVERT_DDIC_HPD (1 << 17) #define INVERT_DDIB_HPD (1 << 16) From patchwork Thu Apr 6 13:02:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kahola, Mika" X-Patchwork-Id: 13203360 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7716FC761A6 for ; Thu, 6 Apr 2023 13:33:02 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9212410EBCA; Thu, 6 Apr 2023 13:32:53 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2A83810EBB9 for ; Thu, 6 Apr 2023 13:32:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680787968; x=1712323968; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KPcR8ng7lO0b0p/M1Pv52KbtTpu4FHKq2d3YdDbCY2E=; b=N9Q+huVGrk44yO22wHZqd5QldijL/IdLczJeKNEEd52GC68MWrDaIlwS zO+lmBUL7vhvwP5C4WTksOblMtG/t522afgmXmkK2DmtKqXe3GG5C1cJc Ead0dthMjtf6x1yq+7jxSeHQU8CJYwSo6/VUKf1QPpBjxnwTUzNyqxHjD y+HlwNyNHcqmev+I3qScqScHHBG9l9u5u6V9W+b1eDE7dLs6qFUfxv1CA zJKtNiivWL3pDEFlsF97MX3JkXqdnCllasmbVY+N2u4XOK3tPAxkNYpM1 /38uHndnHwBmiYqTIUA90mTtgqWktKqeKuuybQ2+X8agdHUKpQrTZKkQd w==; X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="345352345" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="345352345" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2023 06:07:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="751641724" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="751641724" Received: from sorvi2.fi.intel.com ([10.237.72.194]) by fmsmga008.fm.intel.com with ESMTP; 06 Apr 2023 06:07:42 -0700 From: Mika Kahola To: intel-gfx@lists.freedesktop.org Date: Thu, 6 Apr 2023 16:02:20 +0300 Message-Id: <20230406130221.2998457-8-mika.kahola@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com> References: <20230406130221.2998457-1-mika.kahola@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v3 7/8] drm/i915/display/mtl: Fill port width in DDI_BUF_/TRANS_DDI_FUNC_/PORT_BUF_CTL for HDMI X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics 
driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Taylor@freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Ankit Nautiyal MTL requires the PORT_CTL_WIDTH, TRANS_DDI_FUNC_CTL and DDI_BUF_CTL to be filled with 4 lanes for TMDS mode. This patch enables D2D link and fills PORT_WIDTH in appropriate registers. v2: - Added fixes from Clint's Add HDMI implementation changes. - Modified commit message. v3: - Use TRANS_DDI_PORT_WIDTH() instead of DDI_PORT_WIDTH() for the value of TRANS_DDI_FUNC_CTL_*. (Gustavo) Signed-off-by: Ankit Nautiyal Signed-off-by: Taylor, Clinton A Signed-off-by: Mika Kahola --- drivers/gpu/drm/i915/display/intel_ddi.c | 48 +++++++++++++++++++++--- drivers/gpu/drm/i915/i915_reg.h | 2 + 2 files changed, 44 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index e97d7627d9d1..20b0844b8240 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -516,6 +516,8 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder, temp |= TRANS_DDI_HDMI_SCRAMBLING; if (crtc_state->hdmi_high_tmds_clock_ratio) temp |= TRANS_DDI_HIGH_TMDS_CHAR_RATE; + if (DISPLAY_VER(dev_priv) >= 14) + temp |= TRANS_DDI_PORT_WIDTH(crtc_state->lane_count); } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) { temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B; temp |= (crtc_state->fdi_lanes - 1) << 1; @@ -2891,6 +2893,10 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state, if (has_buf_trans_select(dev_priv)) hsw_prepare_hdmi_ddi_buffers(encoder, crtc_state); + /* e. Enable D2D Link for C10/C20 Phy */ + if (DISPLAY_VER(dev_priv) >= 14) + mtl_ddi_enable_d2d(encoder); + encoder->set_signal_levels(encoder, crtc_state); /* Display WA #1143: skl,kbl,cfl */ @@ -2936,13 +2942,39 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state, * * On ADL_P the PHY link rate and lane count must be programmed but * these are both 0 for HDMI. + * + * But MTL onwards HDMI2.1 is supported and in TMDS mode this + * is always filled with 4 lanes, already set in the crtc_state. + * The same is required to be filled in PORT_BUF_CTL for C10/20 Phy. */ - buf_ctl = dig_port->saved_port_bits | DDI_BUF_CTL_ENABLE; - if (IS_ALDERLAKE_P(dev_priv) && intel_phy_is_tc(dev_priv, phy)) { - drm_WARN_ON(&dev_priv->drm, !intel_tc_port_in_legacy_mode(dig_port)); - buf_ctl |= DDI_BUF_CTL_TC_PHY_OWNERSHIP; + if (DISPLAY_VER(dev_priv) >= 14) { + u32 ddi_buf = 0; + u8 lane_count = mtl_get_port_width(crtc_state->lane_count); + u32 port_buf = 0; + + port_buf |= XELPDP_PORT_WIDTH(lane_count); + + if (intel_bios_is_lane_reversal_needed(dev_priv, port)) + port_buf |= XELPDP_PORT_REVERSAL; + + intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(port), 0, port_buf); + + ddi_buf |= DDI_BUF_CTL_ENABLE | + DDI_PORT_WIDTH(lane_count); + + intel_de_write(dev_priv, DDI_BUF_CTL(port), + dig_port->saved_port_bits | ddi_buf); + + /* i. 
Poll for PORT_BUF_CTL Idle Status == 0, timeout after 100 us */ + intel_wait_ddi_buf_active(dev_priv, port); + } else { + buf_ctl = dig_port->saved_port_bits | DDI_BUF_CTL_ENABLE; + if (IS_ALDERLAKE_P(dev_priv) && intel_phy_is_tc(dev_priv, phy)) { + drm_WARN_ON(&dev_priv->drm, !intel_tc_port_in_legacy_mode(dig_port)); + buf_ctl |= DDI_BUF_CTL_TC_PHY_OWNERSHIP; + } + intel_de_write(dev_priv, DDI_BUF_CTL(port), buf_ctl); } - intel_de_write(dev_priv, DDI_BUF_CTL(port), buf_ctl); intel_wait_ddi_buf_active(dev_priv, port); @@ -3357,7 +3389,11 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder, fallthrough; case TRANS_DDI_MODE_SELECT_DVI: pipe_config->output_types |= BIT(INTEL_OUTPUT_HDMI); - pipe_config->lane_count = 4; + if (DISPLAY_VER(dev_priv) >= 14) + pipe_config->lane_count = + ((temp & DDI_PORT_WIDTH_MASK) >> DDI_PORT_WIDTH_SHIFT) + 1; + else + pipe_config->lane_count = 4; break; case TRANS_DDI_MODE_SELECT_DP_SST: if (encoder->type == INTEL_OUTPUT_EDP) diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 8d49676148f2..c4d363248bd2 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -5597,6 +5597,8 @@ enum skl_power_gate { #define TRANS_DDI_HDCP_SELECT REG_BIT(5) #define TRANS_DDI_BFI_ENABLE (1 << 4) #define TRANS_DDI_HIGH_TMDS_CHAR_RATE (1 << 4) +#define TRANS_DDI_PORT_WIDTH_MASK REG_GENMASK(3, 1) +#define TRANS_DDI_PORT_WIDTH(width) REG_FIELD_PREP(TRANS_DDI_PORT_WIDTH_MASK, (width) - 1) #define TRANS_DDI_HDMI_SCRAMBLING (1 << 0) #define TRANS_DDI_HDMI_SCRAMBLING_MASK (TRANS_DDI_HDMI_SCRAMBLER_CTS_ENABLE \ | TRANS_DDI_HDMI_SCRAMBLER_RESET_FREQ \ From patchwork Thu Apr 6 13:02:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Kahola, Mika" X-Patchwork-Id: 13203362 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8182AC77B74 for ; Thu, 6 Apr 2023 13:33:03 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id EE77210EBBD; Thu, 6 Apr 2023 13:33:00 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1CA3C10EBC4 for ; Thu, 6 Apr 2023 13:32:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680787970; x=1712323970; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4CYr/MkDJsVtqvqJ3k+5d0NmuOzNZA7u9Z7nU1HAMKg=; b=G4MhOXH13rT79ITK/r7k8nhPOJOn7gjUEts5hngicf7oTjp8Nb/m2+sM ZDPZqnqNqdnB1cGMam1Q2liy4sFceBpMZYtezAXjnjyzJlmi4UHLI7iCi 1sciJen9BClajiS0IgwHRborJRHa9iv1ZBmj3QoMGZ4obbsyyh2uNiNKt MeORF8/Rm92LW7LUnE0gA+hYtubU8+aV+mEGvhQuF+uNG9PFDAU8PMry0 wBuTogYBqYvqt7sqYejZBmQz53PBDoPZ5jOITbBhzQ/yhoDW8B8lhFPmZ TPi/bAso9h98uv4kmtaNm8Ndlc5jlCbrvUKYER8J/y7uP8ggsodHKChZm g==; X-IronPort-AV: E=McAfee;i="6600,9927,10672"; a="345352352" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="345352352" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2023 06:07:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: 
E=McAfee;i="6600,9927,10672"; a="751641739" X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; d="scan'208";a="751641739" Received: from sorvi2.fi.intel.com ([10.237.72.194]) by fmsmga008.fm.intel.com with ESMTP; 06 Apr 2023 06:07:44 -0700 From: Mika Kahola To: intel-gfx@lists.freedesktop.org Date: Thu, 6 Apr 2023 16:02:21 +0300 Message-Id: <20230406130221.2998457-9-mika.kahola@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230406130221.2998457-1-mika.kahola@intel.com> References: <20230406130221.2998457-1-mika.kahola@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v3 8/8] drm/i915/mtl/display: Implement DisplayPort sequences X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Matt Roper Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: José Roberto de Souza The differences between MTL and TGL DP sequences are big enough to MTL have its own functions. Also it is much easier to follow MTL sequences against spec with its own functions. One change worthy to mention is the move of 'intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain)'. This call is not necessary for MTL but we have _put() counter part in intel_ddi_post_disable_dp() that needs to balanced. We could add a display version check on it but instead here it is moving it to intel_ddi_pre_enable_dp() so it is executed for all platforms in a single place and this will not cause any harm in MTL and newer platforms. v2: - Fix logic to wait for buf idle. - Use the right register to wait for ddi active.(RK) v3: - Increase wait timeout for ddi buf active (Mika) v4: - Increase idle timeout for ddi buf idle (Mika) BSpec: 65448 65505 Acked-by: Matt Roper Signed-off-by: Satyeshwar Singh Signed-off-by: Clint Taylor Signed-off-by: Radhakrishna Sripada Signed-off-by: Ankit Nautiyal Signed-off-by: José Roberto de Souza Signed-off-by: Mika Kahola --- .../gpu/drm/i915/display/intel_cx0_phy_regs.h | 8 + drivers/gpu/drm/i915/display/intel_ddi.c | 375 +++++++++++++++++- drivers/gpu/drm/i915/i915_reg.h | 5 + 3 files changed, 373 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h index a72f79ea5e6c..e23f921a5168 100644 --- a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h @@ -59,8 +59,16 @@ _XELPDP_PORT_BUF_CTL1_LN0_B, \ _XELPDP_PORT_BUF_CTL1_LN0_USBC1, \ _XELPDP_PORT_BUF_CTL1_LN0_USBC2)) +#define XELPDP_PORT_BUF_D2D_LINK_ENABLE REG_BIT(29) +#define XELPDP_PORT_BUF_D2D_LINK_STATE REG_BIT(28) #define XELPDP_PORT_BUF_SOC_PHY_READY REG_BIT(24) +#define XELPDP_PORT_BUF_PORT_DATA_WIDTH_MASK REG_GENMASK(19, 18) +#define XELPDP_PORT_BUF_PORT_DATA_10BIT REG_FIELD_PREP(XELPDP_PORT_BUF_PORT_DATA_WIDTH_MASK, 0) +#define XELPDP_PORT_BUF_PORT_DATA_20BIT REG_FIELD_PREP(XELPDP_PORT_BUF_PORT_DATA_WIDTH_MASK, 1) +#define XELPDP_PORT_BUF_PORT_DATA_40BIT REG_FIELD_PREP(XELPDP_PORT_BUF_PORT_DATA_WIDTH_MASK, 2) #define XELPDP_PORT_REVERSAL REG_BIT(16) +#define XELPDP_PORT_BUF_IO_SELECTION REG_BIT(11) +#define XELPDP_PORT_BUF_PHY_IDLE REG_BIT(7) #define XELPDP_TC_PHY_OWNERSHIP REG_BIT(6) #define XELPDP_TCSS_POWER_REQUEST REG_BIT(5) #define XELPDP_TCSS_POWER_STATE REG_BIT(4) diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index 
20b0844b8240..c3178cac3dee 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -40,6 +40,7 @@ #include "intel_connector.h" #include "intel_crtc.h" #include "intel_cx0_phy.h" +#include "intel_cx0_phy_regs.h" #include "intel_ddi.h" #include "intel_ddi_buf_trans.h" #include "intel_de.h" @@ -170,6 +171,18 @@ static void hsw_prepare_hdmi_ddi_buffers(struct intel_encoder *encoder, trans->entries[level].hsw.trans2); } +static void mtl_wait_ddi_buf_idle(struct drm_i915_private *i915, enum port port) +{ + int ret; + + /* FIXME: find out why Bspec's 100us timeout is too short */ + ret = wait_for_us((intel_de_read(i915, XELPDP_PORT_BUF_CTL1(port)) & + XELPDP_PORT_BUF_PHY_IDLE), 10000); + if (ret) + drm_err(&i915->drm, "Timeout waiting for DDI BUF %c to get idle\n", + port_name(port)); +} + void intel_wait_ddi_buf_idle(struct drm_i915_private *dev_priv, enum port port) { @@ -197,7 +210,9 @@ static void intel_wait_ddi_buf_active(struct drm_i915_private *dev_priv, return; } - if (IS_DG2(dev_priv)) { + if (DISPLAY_VER(dev_priv) >= 14) { + timeout_us = 10000; + } else if (IS_DG2(dev_priv)) { timeout_us = 1200; } else if (DISPLAY_VER(dev_priv) >= 12) { if (intel_phy_is_tc(dev_priv, phy)) @@ -208,8 +223,12 @@ static void intel_wait_ddi_buf_active(struct drm_i915_private *dev_priv, timeout_us = 500; } - ret = _wait_for(!(intel_de_read(dev_priv, DDI_BUF_CTL(port)) & - DDI_BUF_IS_IDLE), timeout_us, 10, 10); + if (DISPLAY_VER(dev_priv) >= 14) + ret = _wait_for(!(intel_de_read(dev_priv, XELPDP_PORT_BUF_CTL1(port)) & XELPDP_PORT_BUF_PHY_IDLE), + timeout_us, 10, 10); + else + ret = _wait_for(!(intel_de_read(dev_priv, DDI_BUF_CTL(port)) & DDI_BUF_IS_IDLE), + timeout_us, 10, 10); if (ret) drm_err(&dev_priv->drm, "Timeout waiting for DDI BUF %c to get active\n", @@ -314,6 +333,13 @@ static void intel_ddi_init_dp_buf_reg(struct intel_encoder *encoder, DDI_PORT_WIDTH(crtc_state->lane_count) | DDI_BUF_TRANS_SELECT(0); + if (DISPLAY_VER(i915) >= 14) { + if (intel_dp_is_uhbr(crtc_state)) + intel_dp->DP |= DDI_BUF_PORT_DATA_40BIT; + else + intel_dp->DP |= DDI_BUF_PORT_DATA_10BIT; + } + if (IS_ALDERLAKE_P(i915) && intel_phy_is_tc(i915, phy)) { intel_dp->DP |= ddi_buf_phy_link_rate(crtc_state->port_clock); if (!intel_tc_port_in_tbt_alt_mode(dig_port)) @@ -2312,6 +2338,179 @@ static void intel_ddi_mso_configure(const struct intel_crtc_state *crtc_state) OVERLAP_PIXELS_MASK, dss1); } +static u8 mtl_get_port_width(u8 lane_count) +{ + switch (lane_count) { + case 1: + return 0; + case 2: + return 1; + case 3: + return 4; + case 4: + return 3; + default: + MISSING_CASE(lane_count); + return 4; + } +} + +static void +mtl_ddi_enable_d2d(struct intel_encoder *encoder) +{ + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + enum port port = encoder->port; + + intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(port), 0, + XELPDP_PORT_BUF_D2D_LINK_ENABLE); + + if (wait_for_us((intel_de_read(dev_priv, XELPDP_PORT_BUF_CTL1(port)) & + XELPDP_PORT_BUF_D2D_LINK_STATE), 100)) { + drm_err(&dev_priv->drm, "Timeout waiting for D2D Link enable for PORT_BUF_CTL %c\n", + port_name(port)); + } +} + +static void mtl_port_buf_ctl_program(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_digital_port *dig_port = enc_to_dig_port(encoder); + enum port port = encoder->port; + u32 val; + + val = intel_de_read(i915, XELPDP_PORT_BUF_CTL1(port)); + val &= ~XELPDP_PORT_WIDTH_MASK; + val |= 
XELPDP_PORT_WIDTH(mtl_get_port_width(crtc_state->lane_count)); + + val &= ~XELPDP_PORT_BUF_PORT_DATA_WIDTH_MASK; + if (intel_dp_is_uhbr(crtc_state)) + val |= XELPDP_PORT_BUF_PORT_DATA_40BIT; + else + val |= XELPDP_PORT_BUF_PORT_DATA_10BIT; + + if (dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL) + val |= XELPDP_PORT_REVERSAL; + + intel_de_write(i915, XELPDP_PORT_BUF_CTL1(port), val); +} + +static void mtl_port_buf_ctl_io_selection(struct intel_encoder *encoder) +{ + struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_digital_port *dig_port = enc_to_dig_port(encoder); + u32 val; + + val = intel_tc_port_in_tbt_alt_mode(dig_port) ? + XELPDP_PORT_BUF_IO_SELECTION : 0; + intel_de_rmw(i915, XELPDP_PORT_BUF_CTL1(encoder->port), + XELPDP_PORT_BUF_IO_SELECTION, val); +} + +static void mtl_ddi_pre_enable_dp(struct intel_atomic_state *state, + struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state, + const struct drm_connector_state *conn_state) +{ + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST); + + intel_dp_set_link_params(intel_dp, + crtc_state->port_clock, + crtc_state->lane_count); + + /* + * We only configure what the register value will be here. Actual + * enabling happens during link training farther down. + */ + intel_ddi_init_dp_buf_reg(encoder, crtc_state); + + /* + * 1. Enable Power Wells + * + * This was handled at the beginning of intel_atomic_commit_tail(), + * before we called down into this function. + */ + + /* 2. PMdemand was already set */ + + /* 3. Select Thunderbolt */ + mtl_port_buf_ctl_io_selection(encoder); + + /* 4. Enable Panel Power if PPS is required */ + intel_pps_on(intel_dp); + + /* 5. Enable the port PLL */ + intel_ddi_enable_clock(encoder, crtc_state); + + /* + * 6.a Configure Transcoder Clock Select to direct the Port clock to the + * Transcoder. + */ + intel_ddi_enable_transcoder_clock(encoder, crtc_state); + + /* + * 6.b If DP v2.0/128b mode - Configure TRANS_DP2_CTL register settings. + */ + intel_ddi_config_transcoder_dp2(encoder, crtc_state); + + /* + * 6.c Configure TRANS_DDI_FUNC_CTL DDI Select, DDI Mode Select & MST + * Transport Select + */ + intel_ddi_config_transcoder_func(encoder, crtc_state); + + /* + * 6.e Program CoG/MSO configuration bits in DSS_CTL1 if selected. + */ + intel_ddi_mso_configure(crtc_state); + + if (!is_mst) + intel_dp_set_power(intel_dp, DP_SET_POWER_D0); + + intel_dp_configure_protocol_converter(intel_dp, crtc_state); + intel_dp_sink_set_decompression_state(intel_dp, crtc_state, true); + /* + * DDI FEC: "anticipates enabling FEC encoding sets the FEC_READY bit + * in the FEC_CONFIGURATION register to 1 before initiating link + * training + */ + intel_dp_sink_set_fec_ready(intel_dp, crtc_state); + + intel_dp_check_frl_training(intel_dp); + intel_dp_pcon_dsc_configure(intel_dp, crtc_state); + + /* + * 6. The rest of the below are substeps under the bspec's "Enable and + * Train Display Port" step. Note that steps that are specific to + * MST will be handled by intel_mst_pre_enable_dp() before/after it + * calls into this function. Also intel_mst_pre_enable_dp() only calls + * us when active_mst_links==0, so any steps designated for "single + * stream or multi-stream master transcoder" can just be performed + * unconditionally here. 
+ * + * mtl_ddi_prepare_link_retrain() that is called by + * intel_dp_start_link_train() will execute steps: 6.d, 6.f, 6.g, 6.h, + * 6.i and 6.j + * + * 6.k Follow DisplayPort specification training sequence (see notes for + * failure handling) + * 6.m If DisplayPort multi-stream - Set DP_TP_CTL link training to Idle + * Pattern, wait for 5 idle patterns (DP_TP_STATUS Min_Idles_Sent) + * (timeout after 800 us) + */ + intel_dp_start_link_train(intel_dp, crtc_state); + + /* 6.n Set DP_TP_CTL link training to Normal */ + if (!is_trans_port_sync_mode(crtc_state)) + intel_dp_stop_link_train(intel_dp, crtc_state); + + /* 6.o Configure and enable FEC if needed */ + intel_ddi_enable_fec(encoder, crtc_state); + + intel_dsc_dp_pps_write(encoder, crtc_state); +} + static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state, struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state, @@ -2526,7 +2725,9 @@ static void intel_ddi_pre_enable_dp(struct intel_atomic_state *state, intel_dp_128b132b_sdp_crc16(enc_to_intel_dp(encoder), crtc_state); - if (DISPLAY_VER(dev_priv) >= 12) + if (DISPLAY_VER(dev_priv) >= 14) + mtl_ddi_pre_enable_dp(state, encoder, crtc_state, conn_state); + else if (DISPLAY_VER(dev_priv) >= 12) tgl_ddi_pre_enable_dp(state, encoder, crtc_state, conn_state); else hsw_ddi_pre_enable_dp(state, encoder, crtc_state, conn_state); @@ -2607,8 +2808,55 @@ static void intel_ddi_pre_enable(struct intel_atomic_state *state, } } -static void intel_disable_ddi_buf(struct intel_encoder *encoder, - const struct intel_crtc_state *crtc_state) +static void +mtl_ddi_disable_d2d_link(struct intel_encoder *encoder) +{ + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + enum port port = encoder->port; + + intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(port), + XELPDP_PORT_BUF_D2D_LINK_ENABLE, 0); + + if (wait_for_us(!(intel_de_read(dev_priv, XELPDP_PORT_BUF_CTL1(port)) & + XELPDP_PORT_BUF_D2D_LINK_STATE), 100)) + drm_err(&dev_priv->drm, "Timeout waiting for D2D Link disable for PORT_BUF_CTL %c\n", + port_name(port)); +} + +static void mtl_disable_ddi_buf(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + enum port port = encoder->port; + u32 val; + + /* 3.b Clear DDI_CTL_DE Enable to 0. 
*/ + val = intel_de_read(dev_priv, DDI_BUF_CTL(port)); + if (val & DDI_BUF_CTL_ENABLE) { + val &= ~DDI_BUF_CTL_ENABLE; + intel_de_write(dev_priv, DDI_BUF_CTL(port), val); + + /* 3.c Poll for PORT_BUF_CTL Idle Status == 1, timeout after 100us */ + mtl_wait_ddi_buf_idle(dev_priv, port); + } + + /* 3.d Disable D2D Link */ + mtl_ddi_disable_d2d_link(encoder); + + /* 3.e Disable DP_TP_CTL */ + if (intel_crtc_has_dp_encoder(crtc_state)) { + val = intel_de_read(dev_priv, dp_tp_ctl_reg(encoder, crtc_state)); + val &= ~(DP_TP_CTL_ENABLE | DP_TP_CTL_LINK_TRAIN_MASK); + val |= DP_TP_CTL_LINK_TRAIN_PAT1; + intel_de_write(dev_priv, dp_tp_ctl_reg(encoder, crtc_state), val); + } + + /* 3.f Disable DP_TP_CTL FEC Enable if it is needed */ + intel_ddi_disable_fec_state(encoder, crtc_state); +} + +static void disable_ddi_buf(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) { struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); enum port port = encoder->port; @@ -2633,6 +2881,17 @@ static void intel_disable_ddi_buf(struct intel_encoder *encoder, intel_wait_ddi_buf_idle(dev_priv, port); } +static void intel_disable_ddi_buf(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + + if (DISPLAY_VER(dev_priv) >= 14) + mtl_disable_ddi_buf(encoder, crtc_state); + else + disable_ddi_buf(encoder, crtc_state); +} + static void intel_ddi_post_disable_dp(struct intel_atomic_state *state, struct intel_encoder *encoder, const struct intel_crtc_state *old_crtc_state, @@ -2680,12 +2939,21 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state, intel_pps_vdd_on(intel_dp); intel_pps_off(intel_dp); - if (!intel_tc_port_in_tbt_alt_mode(dig_port)) - intel_display_power_put(dev_priv, - dig_port->ddi_io_power_domain, - fetch_and_zero(&dig_port->ddi_io_wakeref)); + if (!intel_tc_port_in_tbt_alt_mode(dig_port)) { + intel_wakeref_t wakeref = fetch_and_zero(&dig_port->ddi_io_wakeref); + + if (wakeref) + intel_display_power_put(dev_priv, + dig_port->ddi_io_power_domain, + wakeref); + } intel_ddi_disable_clock(encoder); + + /* De-select Thunderbolt */ + if (DISPLAY_VER(dev_priv) >= 14) + intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(encoder->port), + XELPDP_PORT_BUF_IO_SELECTION, 0); } static void intel_ddi_post_disable_hdmi(struct intel_atomic_state *state, @@ -2696,6 +2964,7 @@ static void intel_ddi_post_disable_hdmi(struct intel_atomic_state *state, struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_digital_port *dig_port = enc_to_dig_port(encoder); struct intel_hdmi *intel_hdmi = &dig_port->hdmi; + intel_wakeref_t wakeref; dig_port->set_infoframes(encoder, false, old_crtc_state, old_conn_state); @@ -2708,9 +2977,11 @@ static void intel_ddi_post_disable_hdmi(struct intel_atomic_state *state, if (DISPLAY_VER(dev_priv) >= 12) intel_ddi_disable_transcoder_clock(old_crtc_state); - intel_display_power_put(dev_priv, - dig_port->ddi_io_power_domain, - fetch_and_zero(&dig_port->ddi_io_wakeref)); + wakeref = fetch_and_zero(&dig_port->ddi_io_wakeref); + if (wakeref) + intel_display_power_put(dev_priv, + dig_port->ddi_io_power_domain, + wakeref); intel_ddi_disable_clock(encoder); @@ -2948,13 +3219,16 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state, * The same is required to be filled in PORT_BUF_CTL for C10/20 Phy. 
*/ if (DISPLAY_VER(dev_priv) >= 14) { + const struct intel_bios_encoder_data *devdata = + intel_bios_encoder_data_lookup(dev_priv, port); + u32 ddi_buf = 0; u8 lane_count = mtl_get_port_width(crtc_state->lane_count); u32 port_buf = 0; port_buf |= XELPDP_PORT_WIDTH(lane_count); - if (intel_bios_is_lane_reversal_needed(dev_priv, port)) + if (intel_bios_encoder_lane_reversal(devdata)) port_buf |= XELPDP_PORT_REVERSAL; intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(port), 0, port_buf); @@ -3141,6 +3415,73 @@ static void adlp_tbt_to_dp_alt_switch_wa(struct intel_encoder *encoder) intel_dkl_phy_rmw(i915, DKL_PCS_DW5(tc_port, ln), DKL_PCS_DW5_CORE_SOFTRESET, 0); } +static void mtl_ddi_prepare_link_retrain(struct intel_dp *intel_dp, + const struct intel_crtc_state *crtc_state) +{ + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + struct intel_encoder *encoder = &dig_port->base; + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + enum port port = encoder->port; + u32 dp_tp_ctl, ddi_buf_ctl; + + /* + * TODO: To train with only a different voltage swing entry is not + * necessary disable and enable port + */ + dp_tp_ctl = intel_de_read(dev_priv, dp_tp_ctl_reg(encoder, crtc_state)); + if (dp_tp_ctl & DP_TP_CTL_ENABLE) { + /* Disable sequence: 3.b Clear DDI_CTL_DE enable to 0. */ + ddi_buf_ctl = intel_de_read(dev_priv, DDI_BUF_CTL(port)); + if (ddi_buf_ctl & DDI_BUF_CTL_ENABLE) { + intel_de_write(dev_priv, DDI_BUF_CTL(port), + ddi_buf_ctl & ~DDI_BUF_CTL_ENABLE); + /* + * Disable sequence: 3.c Poll for PORT_BUF_CTL + * Idle Status == 1, timeout after 100us + */ + mtl_wait_ddi_buf_idle(dev_priv, port); + } + + /* Disable sequence: 3.d Disable D2D Link */ + mtl_ddi_disable_d2d_link(encoder); + + /* Disable sequence: 3.e Disable DP_TP_CTL */ + dp_tp_ctl &= ~(DP_TP_CTL_ENABLE | DP_TP_CTL_LINK_TRAIN_MASK); + dp_tp_ctl |= DP_TP_CTL_LINK_TRAIN_PAT1; + intel_de_write(dev_priv, dp_tp_ctl_reg(encoder, crtc_state), dp_tp_ctl); + intel_de_posting_read(dev_priv, dp_tp_ctl_reg(encoder, crtc_state)); + } + + /* 6.d Configure and enable DP_TP_CTL with link training pattern 1 selected */ + dp_tp_ctl = DP_TP_CTL_ENABLE | DP_TP_CTL_LINK_TRAIN_PAT1; + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) { + dp_tp_ctl |= DP_TP_CTL_MODE_MST; + } else { + dp_tp_ctl |= DP_TP_CTL_MODE_SST; + if (drm_dp_enhanced_frame_cap(intel_dp->dpcd)) + dp_tp_ctl |= DP_TP_CTL_ENHANCED_FRAME_ENABLE; + } + intel_de_write(dev_priv, dp_tp_ctl_reg(encoder, crtc_state), dp_tp_ctl); + intel_de_posting_read(dev_priv, dp_tp_ctl_reg(encoder, crtc_state)); + + /* 6.f Enable D2D Link */ + mtl_ddi_enable_d2d(encoder); + + /* 6.g Configure voltage swing and related IO settings */ + encoder->set_signal_levels(encoder, crtc_state); + + /* 6.h Configure PORT_BUF_CTL1 */ + mtl_port_buf_ctl_program(encoder, crtc_state); + + /* 6.i Configure and enable DDI_CTL_DE to start sending valid data to port slice */ + intel_dp->DP |= DDI_BUF_CTL_ENABLE; + intel_de_write(dev_priv, DDI_BUF_CTL(port), intel_dp->DP); + intel_de_posting_read(dev_priv, DDI_BUF_CTL(port)); + + /* 6.j Poll for PORT_BUF_CTL Idle Status == 0, timeout after 100 us */ + intel_wait_ddi_buf_active(dev_priv, port); +} + static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state) { @@ -3912,6 +4253,7 @@ static const struct drm_encoder_funcs intel_ddi_funcs = { static struct intel_connector * intel_ddi_init_dp_connector(struct intel_digital_port *dig_port) { + struct drm_i915_private *i915 = 
to_i915(dig_port->base.base.dev); struct intel_connector *connector; enum port port = dig_port->base.port; @@ -3920,7 +4262,10 @@ intel_ddi_init_dp_connector(struct intel_digital_port *dig_port) return NULL; dig_port->dp.output_reg = DDI_BUF_CTL(port); - dig_port->dp.prepare_link_retrain = intel_ddi_prepare_link_retrain; + if (DISPLAY_VER(i915) >= 14) + dig_port->dp.prepare_link_retrain = mtl_ddi_prepare_link_retrain; + else + dig_port->dp.prepare_link_retrain = intel_ddi_prepare_link_retrain; dig_port->dp.set_link_train = intel_ddi_set_link_train; dig_port->dp.set_idle_link_train = intel_ddi_set_idle_link_train; diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index c4d363248bd2..62f88c6c3a4c 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -5658,11 +5658,16 @@ enum skl_power_gate { /* DDI Buffer Control */ #define _DDI_BUF_CTL_A 0x64000 #define _DDI_BUF_CTL_B 0x64100 +/* Known as DDI_CTL_DE in MTL+ */ #define DDI_BUF_CTL(port) _MMIO_PORT(port, _DDI_BUF_CTL_A, _DDI_BUF_CTL_B) #define DDI_BUF_CTL_ENABLE (1 << 31) #define DDI_BUF_TRANS_SELECT(n) ((n) << 24) #define DDI_BUF_EMP_MASK (0xf << 24) #define DDI_BUF_PHY_LINK_RATE(r) ((r) << 20) +#define DDI_BUF_PORT_DATA_MASK REG_GENMASK(19, 18) +#define DDI_BUF_PORT_DATA_10BIT REG_FIELD_PREP(DDI_BUF_PORT_DATA_MASK, 0) +#define DDI_BUF_PORT_DATA_20BIT REG_FIELD_PREP(DDI_BUF_PORT_DATA_MASK, 1) +#define DDI_BUF_PORT_DATA_40BIT REG_FIELD_PREP(DDI_BUF_PORT_DATA_MASK, 2) #define DDI_BUF_PORT_REVERSAL (1 << 16) #define DDI_BUF_IS_IDLE (1 << 7) #define DDI_BUF_CTL_TC_PHY_OWNERSHIP REG_BIT(6)
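Two different lane-width encodings appear in the patches above: the TRANS_DDI_FUNC_CTL port width field programmed via TRANS_DDI_PORT_WIDTH() stores lane_count - 1 (and the intel_ddi_read_func_ctl() readback adds 1 back), while the MTL PORT_BUF_CTL1/DDI_BUF_CTL width fields are programmed with the value returned by mtl_get_port_width(), which maps 3 lanes to 4 and 4 lanes to 3 rather than subtracting 1. The following is a standalone sketch of those two mappings only, not i915 code; the trans_ddi_* helper names are invented for the illustration, and only mtl_get_port_width() mirrors the function in the patch.

/*
 * Standalone illustration of the lane-width field encodings used above.
 * Build with: cc -std=c99 -o portwidth portwidth.c
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* TRANS_DDI_FUNC_CTL port width field (bits 3:1) stores lane_count - 1 */
static uint32_t trans_ddi_port_width(unsigned int lane_count)
{
	return (uint32_t)(lane_count - 1) << 1;
}

/* Inverse of the above, as in the intel_ddi_read_func_ctl() hunk */
static unsigned int trans_ddi_lane_count(uint32_t func_ctl)
{
	return ((func_ctl >> 1) & 0x7) + 1;
}

/* Same non-linear table as mtl_get_port_width() in the patch */
static unsigned int mtl_get_port_width(unsigned int lane_count)
{
	switch (lane_count) {
	case 1: return 0;
	case 2: return 1;
	case 3: return 4;
	case 4: return 3;
	default: return 4;	/* i915 warns with MISSING_CASE() here */
	}
}

int main(void)
{
	for (unsigned int lanes = 1; lanes <= 4; lanes++) {
		uint32_t func_ctl = trans_ddi_port_width(lanes);

		/* the TRANS_DDI readback must recover the lane count */
		assert(trans_ddi_lane_count(func_ctl) == lanes);

		printf("lanes=%u  TRANS_DDI field=0x%x  XELPDP/DDI_BUF width field=%u\n",
		       lanes, (unsigned int)func_ctl, mtl_get_port_width(lanes));
	}
	return 0;
}

Running the sketch prints the field values for 1-4 lanes and asserts that the TRANS_DDI encode/decode round-trips, which is the relationship the HDMI 4-lane TMDS programming and the lane_count readback above rely on.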