From patchwork Fri Aug 28 01:10:48 2020
X-Patchwork-Submitter: JC Kuo
X-Patchwork-Id: 11742009
From: JC Kuo
Subject: [PATCH 04/12] phy: tegra: xusb: t210: add lane_iddq operations
Date: Fri, 28 Aug 2020 09:10:48 +0800
Message-ID: <20200828011056.1067559-5-jckuo@nvidia.com>
In-Reply-To: <20200828011056.1067559-1-jckuo@nvidia.com>
References: <20200828011056.1067559-1-jckuo@nvidia.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: linux-usb@vger.kernel.org

As per the Tegra210 TRM, before changing lane assignments the driver
should keep the lanes in the IDDQ and sleep state, and after changing
lane assignments it should bring the lanes out of IDDQ. This commit
implements the required lane_iddq operations.
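[Editor's note: the sketch below is illustrative only and is not part of the
patch. It restates the TRM-mandated ordering using the iddq_enable()/
iddq_disable() hooks this patch introduces. The helper name and the
offset/mask/shift parameters are placeholders; padctl_readl()/padctl_writel()
and the lane/pad fields are taken from the diff below, where the real logic
lives in tegra_xusb_lane_program().]

/*
 * Sketch of the required sequence (placeholders: helper name, offset,
 * mask, shift). See tegra_xusb_lane_program() in the xusb.c hunk for
 * the actual implementation.
 */
static void lane_reassign_sketch(struct tegra_xusb_lane *lane,
				 unsigned long offset, u32 mask,
				 unsigned int shift)
{
	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
	u32 value;

	/* 1) Park the lane: assert the IDDQ/power overrides, force sleep. */
	if (lane->pad->ops->iddq_enable && lane->pad->ops->iddq_disable)
		lane->pad->ops->iddq_enable(lane);

	/* 2) Change the lane assignment (function mux). */
	value = padctl_readl(padctl, offset);
	value &= ~(mask << shift);
	value |= lane->function << shift;
	padctl_writel(padctl, value, offset);

	/* 3) Un-park the lane: release the overrides to leave IDDQ. */
	if (lane->pad->ops->iddq_enable && lane->pad->ops->iddq_disable)
		lane->pad->ops->iddq_disable(lane);
}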
Signed-off-by: JC Kuo
---
 drivers/phy/tegra/xusb-tegra210.c | 94 +++++++++++++++++++++++++++++++
 drivers/phy/tegra/xusb.c          |  6 ++
 drivers/phy/tegra/xusb.h          |  4 +-
 3 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/drivers/phy/tegra/xusb-tegra210.c b/drivers/phy/tegra/xusb-tegra210.c
index 3a2d9797fb9f..fe1ab440424d 100644
--- a/drivers/phy/tegra/xusb-tegra210.c
+++ b/drivers/phy/tegra/xusb-tegra210.c
@@ -198,6 +198,18 @@
 #define XUSB_PADCTL_UPHY_MISC_PAD_CTL1_AUX_RX_TERM_EN BIT(18)
 #define XUSB_PADCTL_UPHY_MISC_PAD_CTL1_AUX_RX_MODE_OVRD BIT(13)
 
+#define XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(x) (0x464 + (x) * 0x40)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ BIT(0)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ_OVRD BIT(1)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_MASK (0x3 << 4)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_VAL (0x3 << 4)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_PWR_OVRD BIT(24)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ BIT(8)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ_OVRD BIT(9)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_MASK (0x3 << 12)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_VAL (0x3 << 12)
+#define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_PWR_OVRD BIT(25)
+
 #define XUSB_PADCTL_UPHY_PLL_S0_CTL1 0x860
 #define XUSB_PADCTL_UPHY_PLL_S0_CTL2 0x864
 
@@ -209,6 +221,7 @@
 #define XUSB_PADCTL_UPHY_PLL_S0_CTL8 0x87c
 
 #define XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL1 0x960
+#define XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL2 0x964
 
 #define XUSB_PADCTL_UPHY_USB3_PADX_ECTL1(x) (0xa60 + (x) * 0x40)
 #define XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_SHIFT 16
@@ -1636,6 +1649,63 @@ static const struct tegra_xusb_pad_soc tegra210_hsic_pad = {
 	.ops = &tegra210_hsic_ops,
 };
 
+static void
+tegra210_uphy_lane_iddq_enable(struct tegra_xusb_padctl *padctl, unsigned lane)
+{
+	u32 val, offset;
+
+	if (lane <= 6)
+		offset = XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(lane);
+	else if (lane == 7)
+		offset = XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL2;
+	else {
+		WARN(true, "invalid lane %u\n", lane);
+		return;
+	}
+
+	val = padctl_readl(padctl, offset);
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ_OVRD;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ_OVRD;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_PWR_OVRD;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_PWR_OVRD;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_MASK;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_VAL;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_MASK;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_VAL;
+	padctl_writel(padctl, val, offset);
+}
+
+static void
+tegra210_uphy_lane_iddq_disable(struct tegra_xusb_padctl *padctl, unsigned lane)
+{
+	u32 val, offset;
+
+	if (lane <= 6)
+		offset = XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(lane);
+	else if (lane == 7)
+		offset = XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL2;
+	else {
+		WARN(true, "invalid lane %d\n", lane);
+		return;
+	}
+
+	val = padctl_readl(padctl, offset);
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ_OVRD;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ_OVRD;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_PWR_OVRD;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_PWR_OVRD;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_MASK;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_VAL;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ;
+	val &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_MASK;
+	val |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_VAL;
+	padctl_writel(padctl, val, offset);
+}
+
+
 static const char *tegra210_pcie_functions[] = {
 	"pcie-x1",
 	"usb3-ss",
@@ -1808,9 +1878,21 @@ static void tegra210_pcie_lane_remove(struct tegra_xusb_lane *lane)
 	kfree(pcie);
 }
 
+static void tegra210_pcie_lane_iddq_enable(struct tegra_xusb_lane *lane)
+{
+	tegra210_uphy_lane_iddq_enable(lane->pad->padctl, lane->index);
+}
+
+static void tegra210_pcie_lane_iddq_disable(struct tegra_xusb_lane *lane)
+{
+	tegra210_uphy_lane_iddq_disable(lane->pad->padctl, lane->index);
+}
+
 static const struct tegra_xusb_lane_ops tegra210_pcie_lane_ops = {
 	.probe = tegra210_pcie_lane_probe,
 	.remove = tegra210_pcie_lane_remove,
+	.iddq_enable = tegra210_pcie_lane_iddq_enable,
+	.iddq_disable = tegra210_pcie_lane_iddq_disable,
 };
 
 static int tegra210_pcie_phy_init(struct phy *phy)
@@ -1971,9 +2053,21 @@ static void tegra210_sata_lane_remove(struct tegra_xusb_lane *lane)
 	kfree(sata);
 }
 
+static void tegra210_sata_lane_iddq_enable(struct tegra_xusb_lane *lane)
+{
+	tegra210_uphy_lane_iddq_enable(lane->pad->padctl, lane->index + 7);
+}
+
+static void tegra210_sata_lane_iddq_disable(struct tegra_xusb_lane *lane)
+{
+	tegra210_uphy_lane_iddq_disable(lane->pad->padctl, lane->index + 7);
+}
+
 static const struct tegra_xusb_lane_ops tegra210_sata_lane_ops = {
 	.probe = tegra210_sata_lane_probe,
 	.remove = tegra210_sata_lane_remove,
+	.iddq_enable = tegra210_sata_lane_iddq_enable,
+	.iddq_disable = tegra210_sata_lane_iddq_disable,
 };
 
 static int tegra210_sata_phy_init(struct phy *phy)
diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
index b48b590aa0c1..74abd67e3a31 100644
--- a/drivers/phy/tegra/xusb.c
+++ b/drivers/phy/tegra/xusb.c
@@ -321,11 +321,17 @@ static void tegra_xusb_lane_program(struct tegra_xusb_lane *lane)
 	if (soc->num_funcs < 2)
 		return;
 
+	if (lane->pad->ops->iddq_enable && lane->pad->ops->iddq_disable)
+		lane->pad->ops->iddq_enable(lane);
+
 	/* choose function */
 	value = padctl_readl(padctl, soc->offset);
 	value &= ~(soc->mask << soc->shift);
 	value |= lane->function << soc->shift;
 	padctl_writel(padctl, value, soc->offset);
+
+	if (lane->pad->ops->iddq_enable && lane->pad->ops->iddq_disable)
+		lane->pad->ops->iddq_disable(lane);
 }
 
 static void tegra_xusb_pad_program(struct tegra_xusb_pad *pad)
diff --git a/drivers/phy/tegra/xusb.h b/drivers/phy/tegra/xusb.h
index 0c828694cf2d..485b692a3b15 100644
--- a/drivers/phy/tegra/xusb.h
+++ b/drivers/phy/tegra/xusb.h
@@ -22,6 +22,7 @@ struct phy;
 struct phy_provider;
 struct platform_device;
 struct regulator;
+struct tegra_xusb_padctl;
 
 /*
  * lanes
@@ -126,6 +127,8 @@ struct tegra_xusb_lane_ops {
 			       struct device_node *np,
 			       unsigned int index);
 	void (*remove)(struct tegra_xusb_lane *lane);
+	void (*iddq_enable)(struct tegra_xusb_lane *lane);
+	void (*iddq_disable)(struct tegra_xusb_lane *lane);
 };
 
 bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane, const char *function);
@@ -134,7 +137,6 @@ bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane, const char *function);
 /*
  * pads
  */
 struct tegra_xusb_pad_soc;
-struct tegra_xusb_padctl;
 
 struct tegra_xusb_pad_ops {
 	struct tegra_xusb_pad *(*probe)(struct tegra_xusb_padctl *padctl,