From patchwork Mon Jul 1 12:39:59 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025795
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 01/12] PCI: Add #defines for some of PCIe spec r4.0 features
Date: Mon, 1 Jul 2019 18:09:59 +0530
Message-ID: <20190701124010.7484-2-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Add #defines only for the Data Link Feature and Physical Layer 16.0 GT/s
features.

Signed-off-by: Vidya Sagar
Reviewed-by: Thierry Reding
---
Changes since [v11]: * None
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]: * None
Changes since [v6]: * None
Changes since [v5]: * None
Changes since [v4]: * None
Changes since [v3]: * None
Changes since [v2]:
* Updated commit message and description to explicitly mention that defines
  are added only for some of the features and not all.
Changes since [v1]: * None

 include/uapi/linux/pci_regs.h | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
index f28e562d7ca8..1c79f6a097d2 100644
--- a/include/uapi/linux/pci_regs.h
+++ b/include/uapi/linux/pci_regs.h
@@ -713,7 +713,9 @@
 #define PCI_EXT_CAP_ID_DPC 0x1D /* Downstream Port Containment */
 #define PCI_EXT_CAP_ID_L1SS 0x1E /* L1 PM Substates */
 #define PCI_EXT_CAP_ID_PTM 0x1F /* Precision Time Measurement */
-#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PTM
+#define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */
+#define PCI_EXT_CAP_ID_PL 0x26 /* Physical Layer 16.0 GT/s */
+#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PL
 
 #define PCI_EXT_CAP_DSN_SIZEOF 12
 #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40
@@ -1053,4 +1055,22 @@
 #define PCI_L1SS_CTL1_LTR_L12_TH_SCALE 0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */
 #define PCI_L1SS_CTL2 0x0c /* Control 2 Register */
 
+/* Data Link Feature */
+#define PCI_DLF_CAP 0x04 /* Capabilities Register */
+#define PCI_DLF_LOCAL_DLF_SUP_MASK 0x007fffff /* Local Data Link Feature Supported */
+#define PCI_DLF_EXCHANGE_ENABLE 0x80000000 /* Data Link Feature Exchange Enable */
+#define PCI_DLF_STS 0x08 /* Status Register */
+#define PCI_DLF_REMOTE_DLF_SUP_MASK 0x007fffff /* Remote Data Link Feature Supported */
+#define PCI_DLF_REMOTE_DLF_SUP_VALID 0x80000000 /* Remote Data Link Feature Support Valid */
+
+/* Physical Layer 16.0 GT/s */
+#define PCI_PL_16GT_CAP 0x04 /* Capabilities Register */
+#define PCI_PL_16GT_CTRL 0x08 /* Control Register */
+#define PCI_PL_16GT_STS 0x0c /* Status Register */
+#define PCI_PL_16GT_LDPM_STS 0x10 /* Local Data Parity Mismatch Status Register */
+#define PCI_PL_16GT_FRDPM_STS 0x14 /* First Retimer Data Parity Mismatch Status Register */
+#define PCI_PL_16GT_SRDPM_STS 0x18 /* Second Retimer Data Parity Mismatch Status Register */
+#define PCI_PL_16GT_RSVD 0x1C /* Reserved */
+#define PCI_PL_16GT_LE_CTRL 0x20 /* Lane Equalization Control Register */
+
 #endif /* LINUX_PCI_REGS_H */
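
As a usage illustration (not part of this patch), a driver holding a struct
pci_dev could probe the new Data Link Feature capability roughly as follows;
the helper name is hypothetical, the kernel APIs used are standard:

#include <linux/pci.h>

/* Hypothetical helper: check whether DLF exchange is enabled on a device */
static bool pcie_dlf_exchange_enabled(struct pci_dev *pdev)
{
    u16 dlf;
    u32 cap;

    dlf = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DLF);
    if (!dlf)
        return false;   /* Data Link Feature capability not implemented */

    pci_read_config_dword(pdev, dlf + PCI_DLF_CAP, &cap);
    return !!(cap & PCI_DLF_EXCHANGE_ENABLE);
}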
From patchwork Mon Jul 1 12:40:00 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025797
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 02/12] PCI: Disable MSI for Tegra root ports
Date: Mon, 1 Jul 2019 18:10:00 +0530
Message-ID: <20190701124010.7484-3-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Tegra PCIe root ports don't generate MSI interrupts for PME and AER events.
Since the PCIe spec (r4.0, sec 7.7.1.2 and 7.7.2.2) doesn't allow mixing INTx
with MSI/MSI-X, MSI needs to be disabled entirely so that the root port
service drivers register their ISRs on INTx and all events are delivered
through INTx.

Signed-off-by: Vidya Sagar
Reviewed-by: Thierry Reding
---
Changes since [v11]: * None
Changes since [v10]:
* Included older Tegra chips to extend the quirk as this issue is present in
  older Tegra chips as well.
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]:
* Changed quirk macro to consider class code as well to avoid this quirk
  getting applied to Tegra194 when it is operating in endpoint mode. Also
  quoted relevant sections from the PCIe spec in comments.
Changes since [v6]: * This is a new patch

 drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index c66c0ca446c4..1d3cac43ecbc 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -2592,6 +2592,59 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15,
 			nvenet_msi_disable);
 
+/*
+ * PCIe spec r4.0 sec 7.7.1.2 and sec 7.7.2.2 say that if MSI/MSI-X is enabled,
+ * then the device can't use INTx interrupts. Tegra's PCIe root ports don't
+ * generate MSI interrupts for PME and AER events; instead only INTx interrupts
+ * are generated. Though Tegra's PCIe root ports can generate MSI interrupts
+ * for other events, since the PCIe specification doesn't support using a mix
+ * of INTx and MSI/MSI-X, it is required to disable MSI interrupts to avoid
+ * port service drivers registering their respective ISRs for MSIs.
+ */
+static void pci_quirk_nvidia_tegra_disable_rp_msi(struct pci_dev *dev)
+{
+    dev->no_msi = 1;
+}
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad0,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad1,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad2,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf0,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1c,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1d,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e12,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e13,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0fae,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0faf,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e5,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e6,
+            PCI_CLASS_BRIDGE_PCI, 8,
+            pci_quirk_nvidia_tegra_disable_rp_msi);
+
 /*
  * Some versions of the MCP55 bridge from Nvidia have a legacy IRQ routing
  * config register. This register controls the routing of legacy
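
To illustrate the effect of the quirk above (not part of the patch): with
dev->no_msi set, an interrupt request that allows both MSI and legacy
interrupts falls back to INTx; the function name here is hypothetical:

#include <linux/pci.h>

static int tegra_rp_request_irq(struct pci_dev *pdev)
{
    int ret;

    /* MSI is refused because of dev->no_msi, so this falls back to INTx */
    ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
    if (ret < 0)
        return ret;

    return pci_irq_vector(pdev, 0);  /* the INTx interrupt number */
}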
From patchwork Mon Jul 1 12:40:01 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025799
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 03/12] PCI: dwc: Perform dbi regs write lock towards the end
Date: Mon, 1 Jul 2019 18:10:01 +0530
Message-ID: <20190701124010.7484-4-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Some of the DesignWare core's DBI registers (a.k.a. configuration space
registers) are write-protected and remain read-only by default unless a
write-enable lock bit is set. Which registers are write-protected is
implementation specific; Tegra194's BAR-0 register at offset 0x10 in the
configuration space is one example. The current implementation of
dw_pcie_setup_rc() unlocks these write-protected registers around each
individual update and locks them again immediately afterwards. Instead,
unlock all such registers once at the beginning of the API and lock them
again towards the end, to avoid bloating the API with multiple unlock/lock
sequences.

Signed-off-by: Vidya Sagar
Reviewed-by: Thierry Reding
Acked-by: Jingoo Han
---
Changes since [v11]:
* Modified commit message to make it explicit that write-protected registers
  are implementation specific.
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]: * None
Changes since [v6]: * None
Changes since [v5]:
* Moved write enable to the beginning of the API and write disable to the end
Changes since [v4]: * None
Changes since [v3]: * None
Changes since [v2]: * None
Changes since [v1]: * None

 drivers/pci/controller/dwc/pcie-designware-host.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index f93252d0da5b..d3156446ff27 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -628,6 +628,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	u32 val, ctrl, num_ctrls;
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 
+	/*
+	 * Enable DBI read-only registers for writing/updating configuration.
+	 * Write permission gets disabled towards the end of this function.
+	 */
+	dw_pcie_dbi_ro_wr_en(pci);
+
 	dw_pcie_setup(pci);
 
 	if (!pp->ops->msi_host_init) {
@@ -650,12 +656,10 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
 
 	/* Setup interrupt pins */
-	dw_pcie_dbi_ro_wr_en(pci);
 	val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
 	val &= 0xffff00ff;
 	val |= 0x00000100;
 	dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
-	dw_pcie_dbi_ro_wr_dis(pci);
 
 	/* Setup bus numbers */
 	val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
@@ -687,15 +691,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 
 	dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0);
 
-	/* Enable write permission for the DBI read-only register */
-	dw_pcie_dbi_ro_wr_en(pci);
 	/* Program correct class for RC */
 	dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
-	/* Better disable write permission right after the update */
-	dw_pcie_dbi_ro_wr_dis(pci);
 
 	dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val);
 	val |= PORT_LOGIC_SPEED_CHANGE;
 	dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val);
+
+	dw_pcie_dbi_ro_wr_dis(pci);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
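
The same unlock/update/lock pattern applies to any write-protected DBI update
done outside of dw_pcie_setup_rc(); a minimal sketch (hypothetical helper,
not part of the patch):

static void dw_pcie_program_rc_class(struct dw_pcie *pci)
{
    u32 val;

    dw_pcie_dbi_ro_wr_en(pci);   /* unlock write-protected DBI registers */

    val = dw_pcie_readl_dbi(pci, PCI_CLASS_REVISION);
    val = (val & 0xff) | (PCI_CLASS_BRIDGE_PCI << 16);
    dw_pcie_writel_dbi(pci, PCI_CLASS_REVISION, val);

    dw_pcie_dbi_ro_wr_dis(pci);  /* lock them again when done */
}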
From patchwork Mon Jul 1 12:40:02 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025801
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 04/12] PCI: dwc: Move config space capability search API
Date: Mon, 1 Jul 2019 18:10:02 +0530
Message-ID: <20190701124010.7484-5-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Move the PCIe config space capability search API to the common DesignWare
file, as it can be used by both host and endpoint (EP) mode code.

Signed-off-by: Vidya Sagar
Acked-by: Gustavo Pimentel
Reviewed-by: Thierry Reding
---
Changes since [v11]: * None
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]:
* Changed comment to explicitly state their mere resemblance to standard APIs
  but not their operation and place of use.
Changes since [v6]:
* Exported dw_pcie_find_capability() API
Changes since [v5]: * None
Changes since [v4]:
* Removed redundant APIs in pcie-designware-ep.c file after moving them to
  pcie-designware.c file based on Bjorn's comments.
Changes since [v3]:
* Rebased to linux-next top of the tree
Changes since [v2]: * None
Changes since [v1]:
* Removed dw_pcie_find_next_ext_capability() API from here and made a
  separate patch for that

 .../pci/controller/dwc/pcie-designware-ep.c   | 37 +-----------------
 drivers/pci/controller/dwc/pcie-designware.c  | 39 +++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h  |  2 +
 3 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index 2bf5a35c0570..65f479250087 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -40,39 +40,6 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 	__dw_pcie_ep_reset_bar(pci, bar, 0);
 }
 
-static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
-				     u8 cap)
-{
-	u8 cap_id, next_cap_ptr;
-	u16 reg;
-
-	if (!cap_ptr)
-		return 0;
-
-	reg = dw_pcie_readw_dbi(pci, cap_ptr);
-	cap_id = (reg & 0x00ff);
-
-	if (cap_id > PCI_CAP_ID_MAX)
-		return 0;
-
-	if (cap_id == cap)
-		return cap_ptr;
-
-	next_cap_ptr = (reg & 0xff00) >> 8;
-	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
-}
-
-static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
-{
-	u8 next_cap_ptr;
-	u16 reg;
-
-	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
-	next_cap_ptr = (reg & 0x00ff);
-
-	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
-}
-
 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
 				   struct pci_epf_header *hdr)
 {
@@ -612,9 +579,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
 		return -ENOMEM;
 	}
-	ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
+	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
 
-	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
+	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
 
 	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
 	if (offset) {
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 7d25102c304c..7818b4febb08 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -14,6 +14,45 @@
 
 #include "pcie-designware.h"
 
+/*
+ * These interfaces resemble the pci_find_*capability() interfaces, but these
+ * are for configuring host controllers, which are bridges *to* PCI devices but
+ * are not PCI devices themselves.
+ */
+static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
+				  u8 cap)
+{
+	u8 cap_id, next_cap_ptr;
+	u16 reg;
+
+	if (!cap_ptr)
+		return 0;
+
+	reg = dw_pcie_readw_dbi(pci, cap_ptr);
+	cap_id = (reg & 0x00ff);
+
+	if (cap_id > PCI_CAP_ID_MAX)
+		return 0;
+
+	if (cap_id == cap)
+		return cap_ptr;
+
+	next_cap_ptr = (reg & 0xff00) >> 8;
+	return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
+}
+
+u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
+{
+	u8 next_cap_ptr;
+	u16 reg;
+
+	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
+	next_cap_ptr = (reg & 0x00ff);
+
+	return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
+}
+EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val)
 {
 	if (!IS_ALIGNED((uintptr_t)addr, size)) {
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index ffed084a0b4f..d8c66a6827dc 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -251,6 +251,8 @@ struct dw_pcie {
 #define to_dw_pcie_from_ep(endpoint)   \
 		container_of((endpoint), struct dw_pcie, ep)
 
+u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val);
 int dw_pcie_write(void __iomem *addr, int size, u32 val);
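
With the helper exported, a platform driver built on the DesignWare core
could, for example, locate its own PCI Express capability and read a register
relative to it (hypothetical usage, not part of the patch):

static u16 dw_pcie_read_own_lnkctl(struct dw_pcie *pci)
{
    u8 offset;

    offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
    if (!offset)
        return 0;   /* no PCI Express capability found */

    return dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKCTL);
}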
From patchwork Mon Jul 1 12:40:03 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025805
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 05/12] PCI: dwc: Add ext config space capability search API
Date: Mon, 1 Jul 2019 18:10:03 +0530
Message-ID: <20190701124010.7484-6-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Add an extended configuration space capability search API that operates on a
struct dw_pcie pointer.

Signed-off-by: Vidya Sagar
Acked-by: Gustavo Pimentel
Acked-by: Thierry Reding
---
Changes since [v11]: * None
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]:
* Added Acked-by from Thierry
Changes since [v7]:
* Changed data types of return and arguments to be in line with data being
  returned and passed.
Changes since [v6]: * None
Changes since [v5]: * None
Changes since [v4]: * None
Changes since [v3]: * None
Changes since [v2]: * None
Changes since [v1]: * This is a new patch in v2 series

 drivers/pci/controller/dwc/pcie-designware.c | 41 ++++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h |  1 +
 2 files changed, 42 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 7818b4febb08..181449e342f1 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -53,6 +53,47 @@ u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
 }
 EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
 
+static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
+					    u8 cap)
+{
+	u32 header;
+	int ttl;
+	int pos = PCI_CFG_SPACE_SIZE;
+
+	/* minimum 8 bytes per capability */
+	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
+
+	if (start)
+		pos = start;
+
+	header = dw_pcie_readl_dbi(pci, pos);
+	/*
+	 * If we have no capabilities, this is indicated by cap ID,
+	 * cap version and next pointer all being 0.
+	 */
+	if (header == 0)
+		return 0;
+
+	while (ttl-- > 0) {
+		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
+			return pos;
+
+		pos = PCI_EXT_CAP_NEXT(header);
+		if (pos < PCI_CFG_SPACE_SIZE)
+			break;
+
+		header = dw_pcie_readl_dbi(pci, pos);
+	}
+
+	return 0;
+}
+
+u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
+{
+	return dw_pcie_find_next_ext_capability(pci, 0, cap);
+}
+EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val)
 {
 	if (!IS_ALIGNED((uintptr_t)addr, size)) {
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index d8c66a6827dc..11c223471416 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -252,6 +252,7 @@ struct dw_pcie {
 		container_of((endpoint), struct dw_pcie, ep)
 
 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
+u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
 
 int dw_pcie_read(void __iomem *addr, int size, u32 *val);
 int dw_pcie_write(void __iomem *addr, int size, u32 val);
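
Combined with the defines from patch 01, the new helper lets a platform
driver locate an extended capability in its own config space, e.g.
(hypothetical usage, not part of the patch):

static void dw_pcie_enable_dlf_exchange(struct dw_pcie *pci)
{
    u16 dlf;
    u32 val;

    dlf = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
    if (!dlf)
        return;   /* controller has no Data Link Feature capability */

    val = dw_pcie_readl_dbi(pci, dlf + PCI_DLF_CAP);
    val |= PCI_DLF_EXCHANGE_ENABLE;
    dw_pcie_writel_dbi(pci, dlf + PCI_DLF_CAP, val);
}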
From patchwork Mon Jul 1 12:40:04 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025809
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 06/12] dt-bindings: PCI: designware: Add binding for CDM register check
Date: Mon, 1 Jul 2019 18:10:04 +0530
Message-ID: <20190701124010.7484-7-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Add a binding to enable checking of CDM (Configuration Dependent Module)
registers for data corruption. CDM registers include the standard PCIe
configuration space registers, Port Logic registers, and iATU and DMA
registers. Refer to Section S.4 of the Synopsys DesignWare Cores PCI Express
Controller Databook, version 4.90a.

Signed-off-by: Vidya Sagar
Reviewed-by: Thierry Reding
Reviewed-by: Rob Herring
---
Changes since [v11]: * None
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]: * None
Changes since [v6]:
* Changed "enable-cdm-check" to "snps,enable-cdm-check"
Changes since [v5]: * None
Changes since [v4]: * None
Changes since [v3]: * None
Changes since [v2]:
* Changed flag name from 'cdm-check' to 'enable-cdm-check'
* Added info about Port Logic and DMA registers being part of CDM
Changes since [v1]: * This is a new patch in v2 series

 Documentation/devicetree/bindings/pci/designware-pcie.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Documentation/devicetree/bindings/pci/designware-pcie.txt b/Documentation/devicetree/bindings/pci/designware-pcie.txt
index 5561a1c060d0..3fba04da6a59 100644
--- a/Documentation/devicetree/bindings/pci/designware-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/designware-pcie.txt
@@ -34,6 +34,11 @@ Optional properties:
 - clock-names: Must include the following entries:
 	- "pcie"
 	- "pcie_bus"
+- snps,enable-cdm-check: This is a boolean property and if present enables
+  automatic checking of CDM (Configuration Dependent Module) registers
+  for data corruption. CDM registers include standard PCIe configuration
+  space registers, Port Logic registers, DMA and iATU (internal Address
+  Translation Unit) registers.
 RC mode:
 - num-viewport: number of view ports configured in
   hardware. If a platform does not specify it, the driver assumes 2.
From patchwork Mon Jul 1 12:40:05 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025811
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 07/12] PCI: dwc: Add support to enable CDM register check
Date: Mon, 1 Jul 2019 18:10:05 +0530
Message-ID: <20190701124010.7484-8-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Add support to enable the CDM (Configuration Dependent Module) register check
for data corruption, based on the device tree flag 'snps,enable-cdm-check'.

Signed-off-by: Vidya Sagar
Acked-by: Gustavo Pimentel
Reviewed-by: Thierry Reding
---
Changes since [v11]: * None
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]: * None
Changes since [v6]:
* Changed "enable-cdm-check" to "snps,enable-cdm-check"
Changes since [v5]: * None
Changes since [v4]: * None
Changes since [v3]: * None
Changes since [v2]:
* Changed code and commit description to reflect change in flag from
  'cdm-check' to 'enable-cdm-check'
Changes since [v1]: * This is a new patch in v2 series

 drivers/pci/controller/dwc/pcie-designware.c | 7 +++++++
 drivers/pci/controller/dwc/pcie-designware.h | 9 +++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 181449e342f1..01f9227a5ade 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -546,4 +546,11 @@ void dw_pcie_setup(struct dw_pcie *pci)
 		break;
 	}
 	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+
+	if (of_property_read_bool(np, "snps,enable-cdm-check")) {
+		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+		val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
+		       PCIE_PL_CHK_REG_CHK_REG_START;
+		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
+	}
 }
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 11c223471416..5a18e94e52c8 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -86,6 +86,15 @@
 #define PCIE_MISC_CONTROL_1_OFF 0x8BC
 #define PCIE_DBI_RO_WR_EN BIT(0)
 
+#define PCIE_PL_CHK_REG_CONTROL_STATUS 0xB20
+#define PCIE_PL_CHK_REG_CHK_REG_START BIT(0)
+#define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS BIT(1)
+#define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR BIT(16)
+#define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR BIT(17)
+#define PCIE_PL_CHK_REG_CHK_REG_COMPLETE BIT(18)
+
+#define PCIE_PL_CHK_REG_ERR_ADDR 0xB28
+
 /*
  * iATU Unroll-specific register definitions
  * From 4.80 core version the address translation will be made by unroll
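
Once the check is started, a platform driver could report its outcome along
these lines (hypothetical sketch, not part of the patch; the real reporting
path is platform specific):

static void dw_pcie_report_cdm_check(struct dw_pcie *pci)
{
    u32 val;

    val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
    if (!(val & PCIE_PL_CHK_REG_CHK_REG_COMPLETE))
        return;   /* check still in progress */

    if (val & (PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR |
               PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR))
        dev_err(pci->dev, "CDM check error, first error offset: 0x%08x\n",
                dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR));
}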
From patchwork Mon Jul 1 12:40:06 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025815
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 08/12] dt-bindings: Add PCIe supports-clkreq property
Date: Mon, 1 Jul 2019 18:10:06 +0530
Message-ID: <20190701124010.7484-9-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Some host controllers need to know whether the CLKREQ signal is routed to
downstream devices before they can advertise low power features like ASPM L1
Substates. Without CLKREQ routing in place, enabling ASPM L1 Substates might
cause downstream devices to fall off the bus. Hence, add a new device tree
property 'supports-clkreq' to make such host controllers aware of the CLKREQ
signal routing to downstream devices.
Signed-off-by: Vidya Sagar
Reviewed-by: Rob Herring
Reviewed-by: Thierry Reding
---
Changes since [v11]:
* Rebased on top of linux-next top of the tree
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]: * None
Changes since [v7]: * None
Changes since [v6]: * None
Changes since [v5]:
* s/Documentation\/devicetree/dt-bindings/ in the subject
Changes since [v4]: * None
Changes since [v3]:
* Rebased on top of linux-next top of the tree
Changes since [v2]: * None
Changes since [v1]: * This is a new patch in v2 series

 Documentation/devicetree/bindings/pci/pci.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Documentation/devicetree/bindings/pci/pci.txt b/Documentation/devicetree/bindings/pci/pci.txt
index 2a5d91024059..29bcbd88f457 100644
--- a/Documentation/devicetree/bindings/pci/pci.txt
+++ b/Documentation/devicetree/bindings/pci/pci.txt
@@ -27,6 +27,11 @@ driver implementation may support the following properties:
 - reset-gpios:
    If present this property specifies PERST# GPIO. Host drivers can parse the
    GPIO and apply fundamental reset to endpoints.
+- supports-clkreq:
+   If present this property specifies that CLKREQ signal routing exists from
+   root port to downstream device and host bridge drivers can do programming
+   which depends on CLKREQ signal existence. For example, programming root port
+   not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal.
 
 PCI-PCI Bridge properties
 -------------------------
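
A host bridge driver could key its ASPM L1 Substates programming off the new
property roughly as follows (hypothetical sketch, not part of the patch; the
helper name and flow are made up):

#include <linux/of.h>

static void tegra_pcie_apply_clkreq(struct dw_pcie *pci, struct device_node *np)
{
    u16 l1ss;
    u32 val;

    if (of_property_read_bool(np, "supports-clkreq"))
        return;   /* CLKREQ is routed, L1 Substates may be advertised */

    /* No CLKREQ routing: stop advertising ASPM L1.1/L1.2 support */
    l1ss = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
    if (!l1ss)
        return;

    val = dw_pcie_readl_dbi(pci, l1ss + PCI_L1SS_CAP);
    val &= ~(PCI_L1SS_CAP_ASPM_L1_1 | PCI_L1SS_CAP_ASPM_L1_2);
    dw_pcie_writel_dbi(pci, l1ss + PCI_L1SS_CAP, val);
}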
From patchwork Mon Jul 1 12:40:07 2019
X-Patchwork-Submitter: Vidya Sagar
X-Patchwork-Id: 11025819
X-Patchwork-Delegate: bhelgaas@google.com
From: Vidya Sagar
Subject: [PATCH V12 09/12] dt-bindings: PCI: tegra: Add device tree support for Tegra194
Date: Mon, 1 Jul 2019 18:10:07 +0530
Message-ID: <20190701124010.7484-10-vidyas@nvidia.com>
In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com>
References: <20190701124010.7484-1-vidyas@nvidia.com>

Add support for Tegra194 PCIe controllers. These controllers are based on
the Synopsys DesignWare core IP.

Signed-off-by: Vidya Sagar
Reviewed-by: Rob Herring
Acked-by: Thierry Reding
---
Changes since [v11]: * None
Changes since [v10]: * None
Changes since [v9]: * None
Changes since [v8]:
* Added Acked-by from Thierry
Changes since [v7]:
* Addressed review comments from Thierry
* Modified DT example to reflect new changes
Changes since [v6]:
* Changed description of the property "nvidia,bpmp".
* Removed property "nvidia,disable-aspm-states".
Changes since [v5]: * Removed 'max-link-speed' as it is going to be a common sub-system property * Removed 'nvidia,init-link-speed' as there isn't much value addition * Removed 'nvidia,wake-gpios' for now * Addressed review comments from Thierry and Rob in general Changes since [v4]: * None Changes since [v3]: * None Changes since [v2]: * Using only 'Cx' (x-being controller number) format to represent a controller * Changed to 'value: description' format where applicable * Changed 'nvidia,init-speed' to 'nvidia,init-link-speed' * Provided more documentation for 'nvidia,init-link-speed' property * Changed 'nvidia,pex-wake' to 'nvidia,wake-gpios' Changes since [v1]: * Added documentation for 'power-domains' property * Removed 'window1' and 'window2' properties * Removed '_clk' and '_rst' from clock and reset names * Dropped 'pcie' from phy-names * Added entry for BPMP-FW handle * Removed offsets for some of the registers and added them in code and would be pickedup based on controller ID * Changed 'nvidia,max-speed' to 'max-link-speed' and is made as an optional * Changed 'nvidia,disable-clock-request' to 'supports-clkreq' with inverted operation * Added more documentation for 'nvidia,update-fc-fixup' property * Removed 'nvidia,enable-power-down' and 'nvidia,plat-gpios' properties * Added '-us' to all properties that represent time in microseconds * Moved P2U documentation to a separate file .../bindings/pci/nvidia,tegra194-pcie.txt | 155 ++++++++++++++++++ 1 file changed, 155 insertions(+) create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt new file mode 100644 index 000000000000..674e5adb2895 --- /dev/null +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt @@ -0,0 +1,155 @@ +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based) + +This PCIe host controller is based on the Synopsis Designware PCIe IP +and thus inherits all the common properties defined in designware-pcie.txt. + +Required properties: +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie". +- device_type: Must be "pci" +- power-domains: A phandle to the node that controls power to the respective + PCIe controller and a specifier name for the PCIe controller. Following are + the specifiers for the different PCIe controllers + TEGRA194_POWER_DOMAIN_PCIEX8B: C0 + TEGRA194_POWER_DOMAIN_PCIEX1A: C1 + TEGRA194_POWER_DOMAIN_PCIEX1A: C2 + TEGRA194_POWER_DOMAIN_PCIEX1A: C3 + TEGRA194_POWER_DOMAIN_PCIEX4A: C4 + TEGRA194_POWER_DOMAIN_PCIEX8A: C5 + these specifiers are defined in + "include/dt-bindings/power/tegra194-powergate.h" file. +- reg: A list of physical base address and length pairs for each set of + controller registers. Must contain an entry for each entry in the reg-names + property. +- reg-names: Must include the following entries: + "appl": Controller's application logic registers + "config": As per the definition in designware-pcie.txt + "atu_dma": iATU and DMA registers. This is where the iATU (internal Address + Translation Unit) registers of the PCIe core are made available + for SW access. + "dbi": The aperture where root port's own configuration registers are + available +- interrupts: A list of interrupt outputs of the controller. Must contain an + entry for each entry in the interrupt-names property. 
+- interrupt-names: Must include the following entries: + "intr": The Tegra interrupt that is asserted for controller interrupts + "msi": The Tegra interrupt that is asserted when an MSI is received +- bus-range: Range of bus numbers associated with this controller +- #address-cells: Address representation for root ports (must be 3) + - cell 0 specifies the bus and device numbers of the root port: + [23:16]: bus number + [15:11]: device number + - cell 1 denotes the upper 32 address bits and should be 0 + - cell 2 contains the lower 32 address bits and is used to translate to the + CPU address space +- #size-cells: Size representation for root ports (must be 2) +- ranges: Describes the translation of addresses for root ports and standard + PCI regions. The entries must be 7 cells each, where the first three cells + correspond to the address as described for the #address-cells property + above, the fourth and fifth cells are for the physical CPU address to + translate to and the sixth and seventh cells are as described for the + #size-cells property above. + - Entries setup the mapping for the standard I/O, memory and + prefetchable PCI regions. The first cell determines the type of region + that is setup: + - 0x81000000: I/O memory region + - 0x82000000: non-prefetchable memory region + - 0xc2000000: prefetchable memory region + Please refer to the standard PCI bus binding document for a more detailed + explanation. +- #interrupt-cells: Size representation for interrupts (must be 1) +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties + Please refer to the standard PCI bus binding document for a more detailed + explanation. +- clocks: Must contain an entry for each entry in clock-names. + See ../clocks/clock-bindings.txt for details. +- clock-names: Must include the following entries: + - core +- resets: Must contain an entry for each entry in reset-names. + See ../reset/reset.txt for details. +- reset-names: Must include the following entries: + - apb + - core +- phys: Must contain a phandle to P2U PHY for each entry in phy-names. +- phy-names: Must include an entry for each active lane. + "p2u-N": where N ranges from 0 to one less than the total number of lanes +- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed + by controller-id. Following are the controller ids for each controller. + 0: C0 + 1: C1 + 2: C2 + 3: C3 + 4: C4 + 5: C5 +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals + +Optional properties: +- supports-clkreq: Refer to Documentation/devicetree/bindings/pci/pci.txt +- nvidia,update-fc-fixup: This is a boolean property and needs to be present to + improve performance when a platform is designed in such a way that it + satisfies at least one of the following conditions thereby enabling root + port to exchange optimum number of FC (Flow Control) credits with + downstream devices + 1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS) + 2. 
If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and + a) speed is Gen-2 and MPS is 256B + b) speed is >= Gen-3 with any MPS +- nvidia,aspm-cmrt-us: Common Mode Restore Time for proper operation of ASPM + to be specified in microseconds +- nvidia,aspm-pwr-on-t-us: Power On time for proper operation of ASPM to be + specified in microseconds +- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be + specified in microseconds + +Examples: +========= + +Tegra194: +-------- + + pcie@14180000 { + compatible = "nvidia,tegra194-pcie", "snps,dw-pcie"; + power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>; + reg = <0x00 0x14180000 0x0 0x00020000 /* appl registers (128K) */ + 0x00 0x38000000 0x0 0x00040000 /* configuration space (256K) */ + 0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K) */ + reg-names = "appl", "config", "atu_dma"; + + #address-cells = <3>; + #size-cells = <2>; + device_type = "pci"; + num-lanes = <8>; + linux,pci-domain = <0>; + + clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>; + clock-names = "core"; + + resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>, + <&bpmp TEGRA194_RESET_PEX0_CORE_0>; + reset-names = "apb", "core"; + + interrupts = , /* controller interrupt */ + ; /* MSI interrupt */ + interrupt-names = "intr", "msi"; + + #interrupt-cells = <1>; + interrupt-map-mask = <0 0 0 0>; + interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>; + + nvidia,bpmp = <&bpmp 0>; + + supports-clkreq; + nvidia,aspm-cmrt-us = <60>; + nvidia,aspm-pwr-on-t-us = <20>; + nvidia,aspm-l0s-entrance-latency-us = <3>; + + bus-range = <0x0 0xff>; + ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */ + 0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000 /* non-prefetchable memory (30MB) */ + 0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>; /* prefetchable memory (16GB) */ + + vddio-pex-ctl-supply = <&vdd_1v8ao>; + + phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>, + <&p2u_hsio_5>; + phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3"; + }; From patchwork Mon Jul 1 12:40:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vidya Sagar X-Patchwork-Id: 11025821 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B0A7138D for ; Mon, 1 Jul 2019 12:41:51 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0D55027DCD for ; Mon, 1 Jul 2019 12:41:51 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 016B0285B9; Mon, 1 Jul 2019 12:41:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8670927DCD for ; Mon, 1 Jul 2019 12:41:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728557AbfGAMlq (ORCPT ); Mon, 1 Jul 2019 08:41:46 -0400 Received: from hqemgate16.nvidia.com ([216.228.121.65]:15789 "EHLO hqemgate16.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727128AbfGAMlp 
(ORCPT ); Mon, 1 Jul 2019 08:41:45 -0400 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqemgate16.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Mon, 01 Jul 2019 05:41:43 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Mon, 01 Jul 2019 05:41:44 -0700 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Mon, 01 Jul 2019 05:41:44 -0700 Received: from HQMAIL111.nvidia.com (172.20.187.18) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 1 Jul 2019 12:41:44 +0000 Received: from HQMAIL106.nvidia.com (172.18.146.12) by HQMAIL111.nvidia.com (172.20.187.18) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 1 Jul 2019 12:41:44 +0000 Received: from hqnvemgw02.nvidia.com (172.16.227.111) by HQMAIL106.nvidia.com (172.18.146.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 1 Jul 2019 12:41:44 +0000 Received: from vidyas-desktop.nvidia.com (Not Verified[10.24.37.38]) by hqnvemgw02.nvidia.com with Trustwave SEG (v7,5,8,10121) id ; Mon, 01 Jul 2019 05:41:44 -0700 From: Vidya Sagar To: , , , , , , , , , , CC: , , , , , , , , , , Subject: [PATCH V12 10/12] dt-bindings: PHY: P2U: Add Tegra194 P2U block Date: Mon, 1 Jul 2019 18:10:08 +0530 Message-ID: <20190701124010.7484-11-vidyas@nvidia.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com> References: <20190701124010.7484-1-vidyas@nvidia.com> X-NVConfidentiality: public MIME-Version: 1.0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1561984904; bh=CceyFuoR+TeuGOjQexj2MktBmB7BoCtdlviA/aDDgeY=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:X-NVConfidentiality:MIME-Version: Content-Type; b=JX9N5SfR3TgVO2p+lsa18cLUNy4TYR7xyrPqs1VbimtEfKJGg+olinphqZmB/dxSA qkZbQJimSE3V7u08fmYuEay77yXDnP5Bg7BV/ABL+o/k/Uq7B23kFTGj1RKZVp+4UJ Ei9oyqfJG4q/DAeEkB+mQ5s6oelCity9Nfbul7lAEIEwfa/7Xq6VGMAI8unOg/7CLL 2v2GyeZL/IfoH864Ap5swwWgdFCanVhkA47ZxOcGqksatL+RhsSU7jxTEX7xCO0jyL 5bYJUjM4NFI/xh9xRrBgTEUchfS+Jh28nBYLmhar2VWUu9IkNGT5hYe65Tmw7SHjAd s3S5ZZQHIl1cg== Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Add support for Tegra194 P2U (PIPE to UPHY) module block which is a glue module instantiated one for each PCIe lane between Synopsys DesignWare core based PCIe IP and Universal PHY block. 
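As a usage illustration, a minimal sketch of how a client driver (for instance the PCIe host driver added in patch 12/12) would bring up one P2U PHY per lane through the generic PHY framework follows. It is only a sketch under assumed names: the helper example_enable_p2u_phys() and its p2u_phys[]/lane_count parameters are placeholders, and the sequence this series actually uses is tegra_pcie_enable_phy() in pcie-tegra194.c.

/*
 * Minimal sketch (assumed names, not part of this patch): acquire and
 * power up one P2U PHY per PCIe lane via the generic PHY framework.
 * PHY names follow this binding's "p2u-N" convention; the error unwind
 * mirrors what the real driver does in tegra_pcie_enable_phy().
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/phy/phy.h>

static int example_enable_p2u_phys(struct device *dev, struct phy **p2u_phys,
				   unsigned int lane_count)
{
	unsigned int i;
	int ret;

	for (i = 0; i < lane_count; i++) {
		char name[8];

		snprintf(name, sizeof(name), "p2u-%u", i);
		p2u_phys[i] = devm_phy_get(dev, name);
		if (IS_ERR(p2u_phys[i])) {
			ret = PTR_ERR(p2u_phys[i]);
			goto unwind;
		}

		ret = phy_init(p2u_phys[i]);
		if (ret < 0)
			goto unwind;

		ret = phy_power_on(p2u_phys[i]);
		if (ret < 0) {
			phy_exit(p2u_phys[i]);
			goto unwind;
		}
	}

	return 0;

unwind:
	/* Power down any lanes that were already brought up */
	while (i--) {
		phy_power_off(p2u_phys[i]);
		phy_exit(p2u_phys[i]);
	}
	return ret;
}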
Signed-off-by: Vidya Sagar Reviewed-by: Rob Herring Acked-by: Thierry Reding Acked-by: Kishon Vijay Abraham I --- Changes since [v11]: * None Changes since [v10]: * None Changes since [v9]: * None Changes since [v8]: * None Changes since [v7]: * None Changes since [v6]: * None Changes since [v5]: * Added Sob * Changed node name from "p2u@xxxxxxxx" to "phy@xxxxxxxx" Changes since [v4]: * None Changes since [v3]: * None Changes since [v2]: * Changed node label to reflect new format that includes either 'hsio' or 'nvhs' in its name to reflect which UPHY brick they belong to Changes since [v1]: * This is a new patch in v2 series .../bindings/phy/phy-tegra194-p2u.txt | 28 +++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt new file mode 100644 index 000000000000..d23ff90baad5 --- /dev/null +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt @@ -0,0 +1,28 @@ +NVIDIA Tegra194 P2U binding + +Tegra194 has two PHY bricks, namely HSIO (High Speed IO) and NVHS (NVIDIA High +Speed), which interface with 12 and 8 P2U instances respectively. +A P2U instance is glue logic between the Synopsys DesignWare Core PCIe IP's PIPE +interface and the PHY of the HSIO/NVHS bricks. Each P2U instance represents one PCIe +lane. + +Required properties: +- compatible: For Tegra19x, must contain "nvidia,tegra194-p2u". +- reg: Should be the physical address and length of each P2U + instance's register space. +- reg-names: Must include the entry "ctl". + +Required properties for PHY port node: +- #phy-cells: Defined by generic PHY bindings. Must be 0. + +Refer to phy/phy-bindings.txt for the generic PHY binding properties. 
+ +Example: + +p2u_hsio_0: phy@3e10000 { + compatible = "nvidia,tegra194-p2u"; + reg = <0x03e10000 0x10000>; + reg-names = "ctl"; + + #phy-cells = <0>; +}; From patchwork Mon Jul 1 12:40:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vidya Sagar X-Patchwork-Id: 11025823 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1819B138D for ; Mon, 1 Jul 2019 12:42:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 094A0285B9 for ; Mon, 1 Jul 2019 12:42:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id F160127FAC; Mon, 1 Jul 2019 12:41:59 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 54351285DA for ; Mon, 1 Jul 2019 12:41:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729125AbfGAMly (ORCPT ); Mon, 1 Jul 2019 08:41:54 -0400 Received: from hqemgate16.nvidia.com ([216.228.121.65]:15799 "EHLO hqemgate16.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727128AbfGAMly (ORCPT ); Mon, 1 Jul 2019 08:41:54 -0400 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqemgate16.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Mon, 01 Jul 2019 05:41:51 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Mon, 01 Jul 2019 05:41:52 -0700 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Mon, 01 Jul 2019 05:41:52 -0700 Received: from HQMAIL105.nvidia.com (172.20.187.12) by HQMAIL101.nvidia.com (172.20.187.10) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 1 Jul 2019 12:41:52 +0000 Received: from hqnvemgw02.nvidia.com (172.16.227.111) by HQMAIL105.nvidia.com (172.20.187.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 1 Jul 2019 12:41:51 +0000 Received: from vidyas-desktop.nvidia.com (Not Verified[10.24.37.38]) by hqnvemgw02.nvidia.com with Trustwave SEG (v7,5,8,10121) id ; Mon, 01 Jul 2019 05:41:51 -0700 From: Vidya Sagar To: , , , , , , , , , , CC: , , , , , , , , , , Subject: [PATCH V12 11/12] phy: tegra: Add PCIe PIPE2UPHY support Date: Mon, 1 Jul 2019 18:10:09 +0530 Message-ID: <20190701124010.7484-12-vidyas@nvidia.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com> References: <20190701124010.7484-1-vidyas@nvidia.com> X-NVConfidentiality: public MIME-Version: 1.0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1561984911; bh=m493sLE104INRvAXIv9rhGDxIzW9R+l62jfKnH1cjOE=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:X-NVConfidentiality:MIME-Version: Content-Type; b=r5MRArVY4rIPDYuctAOKR9XK2FT4JPx4PVQdWVYUNqc93zxZobnQYfHuKMBzLxzvT NL5Z9mVuQVZYQsgno2dT+TvehMGJeIphvHf4+ISrzOnJJ3hPmnp8Ghb2sYa+B3AIHg uIvUCN8+02VCYkYv9ArPc44kfjba6tsSgpmfLXdijbfqQzjDZ4pyKpeZNlTooWb8oB 
8Uh7ClN+fGvK44NFpLm4AZ6dpapFKPU2OnljeP6KbKkoZvZmKwaJI6jlXHnvuOjfyI fqRxDrakufMLyHXgBMpDsEQqitijGqW2ZqDwDNSWVvmZ3ZwQp0RPqQ2Sifrk9yr/Xc GrJ+pPc+RP1gA== Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Synopsys DesignWare core based PCIe controllers in Tegra 194 SoC interface with Universal PHY (UPHY) module through a PIPE2UPHY (P2U) module. For each PCIe lane of a controller, there is a P2U unit instantiated at hardware level. This driver provides support for the programming required for each P2U that is going to be used for a PCIe controller. Signed-off-by: Vidya Sagar Acked-by: Kishon Vijay Abraham I --- Changes since [v11]: * None Changes since [v10]: * Replaced PTR_ERR_OR_ZERO() with PTR_ERR() as the check for zero is already present in the code. Changes since [v9]: * Used _relaxed() versions of readl() & writel() Changes since [v8]: * Made it dependent on ARCH_TEGRA_194_SOC directly instead of ARCH_TEGRA Changes since [v7]: * Changed P2U driver file name from pcie-p2u-tegra194.c to phy-tegra194-p2u.c Changes since [v6]: * None Changes since [v5]: * Addressed review comments from Thierry Changes since [v4]: * None Changes since [v3]: * Rebased on top of linux-next top of the tree Changes since [v2]: * Replaced spaces with tabs in Kconfig file * Sorted header file inclusion alphabetically Changes since [v1]: * Added COMPILE_TEST in Kconfig * Removed empty phy_ops implementations * Modified code according to DT documentation file modifications drivers/phy/tegra/Kconfig | 7 ++ drivers/phy/tegra/Makefile | 1 + drivers/phy/tegra/phy-tegra194-p2u.c | 120 +++++++++++++++++++++++++++ 3 files changed, 128 insertions(+) create mode 100644 drivers/phy/tegra/phy-tegra194-p2u.c diff --git a/drivers/phy/tegra/Kconfig b/drivers/phy/tegra/Kconfig index e516967d695b..f9817c3ae85f 100644 --- a/drivers/phy/tegra/Kconfig +++ b/drivers/phy/tegra/Kconfig @@ -7,3 +7,10 @@ config PHY_TEGRA_XUSB To compile this driver as a module, choose M here: the module will be called phy-tegra-xusb. + +config PHY_TEGRA194_P2U + tristate "NVIDIA Tegra194 PIPE2UPHY PHY driver" + depends on ARCH_TEGRA_194_SOC || COMPILE_TEST + select GENERIC_PHY + help + Enable this to support the P2U (PIPE to UPHY) that is part of Tegra 19x SOCs. diff --git a/drivers/phy/tegra/Makefile b/drivers/phy/tegra/Makefile index 64ccaeacb631..320dd389f34d 100644 --- a/drivers/phy/tegra/Makefile +++ b/drivers/phy/tegra/Makefile @@ -6,3 +6,4 @@ phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_124_SOC) += xusb-tegra124.o phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_132_SOC) += xusb-tegra124.o phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_210_SOC) += xusb-tegra210.o phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_186_SOC) += xusb-tegra186.o +obj-$(CONFIG_PHY_TEGRA194_P2U) += phy-tegra194-p2u.o diff --git a/drivers/phy/tegra/phy-tegra194-p2u.c b/drivers/phy/tegra/phy-tegra194-p2u.c new file mode 100644 index 000000000000..7042bed9feaa --- /dev/null +++ b/drivers/phy/tegra/phy-tegra194-p2u.c @@ -0,0 +1,120 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * P2U (PIPE to UPHY) driver for Tegra T194 SoC + * + * Copyright (C) 2019 NVIDIA Corporation. 
+ * + * Author: Vidya Sagar + */ + +#include +#include +#include +#include +#include +#include + +#define P2U_PERIODIC_EQ_CTRL_GEN3 0xc0 +#define P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN BIT(0) +#define P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN BIT(1) +#define P2U_PERIODIC_EQ_CTRL_GEN4 0xc4 +#define P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN BIT(1) + +#define P2U_RX_DEBOUNCE_TIME 0xa4 +#define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK 0xffff +#define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL 160 + +struct tegra_p2u { + void __iomem *base; +}; + +static inline void p2u_writel(struct tegra_p2u *phy, const u32 value, + const u32 reg) +{ + writel_relaxed(value, phy->base + reg); +} + +static inline u32 p2u_readl(struct tegra_p2u *phy, const u32 reg) +{ + return readl_relaxed(phy->base + reg); +} + +static int tegra_p2u_power_on(struct phy *x) +{ + struct tegra_p2u *phy = phy_get_drvdata(x); + u32 val; + + val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN3); + val &= ~P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN; + val |= P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN; + p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN3); + + val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN4); + val |= P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN; + p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN4); + + val = p2u_readl(phy, P2U_RX_DEBOUNCE_TIME); + val &= ~P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK; + val |= P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL; + p2u_writel(phy, val, P2U_RX_DEBOUNCE_TIME); + + return 0; +} + +static const struct phy_ops ops = { + .power_on = tegra_p2u_power_on, + .owner = THIS_MODULE, +}; + +static int tegra_p2u_probe(struct platform_device *pdev) +{ + struct phy_provider *phy_provider; + struct device *dev = &pdev->dev; + struct phy *generic_phy; + struct tegra_p2u *phy; + struct resource *res; + + phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL); + if (!phy) + return -ENOMEM; + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctl"); + phy->base = devm_ioremap_resource(dev, res); + if (IS_ERR(phy->base)) + return PTR_ERR(phy->base); + + platform_set_drvdata(pdev, phy); + + generic_phy = devm_phy_create(dev, NULL, &ops); + if (IS_ERR(generic_phy)) + return PTR_ERR(generic_phy); + + phy_set_drvdata(generic_phy, phy); + + phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); + if (IS_ERR(phy_provider)) + return PTR_ERR(phy_provider); + + return 0; +} + +static const struct of_device_id tegra_p2u_id_table[] = { + { + .compatible = "nvidia,tegra194-p2u", + }, + {} +}; +MODULE_DEVICE_TABLE(of, tegra_p2u_id_table); + +static struct platform_driver tegra_p2u_driver = { + .probe = tegra_p2u_probe, + .driver = { + .name = "tegra194-p2u", + .of_match_table = tegra_p2u_id_table, + }, +}; +module_platform_driver(tegra_p2u_driver); + +MODULE_AUTHOR("Vidya Sagar "); +MODULE_DESCRIPTION("NVIDIA Tegra194 PIPE2UPHY PHY driver"); +MODULE_LICENSE("GPL v2"); From patchwork Mon Jul 1 12:40:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vidya Sagar X-Patchwork-Id: 11025827 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0203C1510 for ; Mon, 1 Jul 2019 12:42:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E199F266F3 for ; Mon, 1 Jul 2019 12:42:12 +0000 
(UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D570C286E3; Mon, 1 Jul 2019 12:42:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 26545266F3 for ; Mon, 1 Jul 2019 12:42:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728627AbfGAMmF (ORCPT ); Mon, 1 Jul 2019 08:42:05 -0400 Received: from hqemgate14.nvidia.com ([216.228.121.143]:9260 "EHLO hqemgate14.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727128AbfGAMmF (ORCPT ); Mon, 1 Jul 2019 08:42:05 -0400 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqemgate14.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Mon, 01 Jul 2019 05:41:56 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Mon, 01 Jul 2019 05:41:59 -0700 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Mon, 01 Jul 2019 05:41:59 -0700 Received: from HQMAIL111.nvidia.com (172.20.187.18) by HQMAIL104.nvidia.com (172.18.146.11) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 1 Jul 2019 12:41:59 +0000 Received: from HQMAIL107.nvidia.com (172.20.187.13) by HQMAIL111.nvidia.com (172.20.187.18) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 1 Jul 2019 12:41:59 +0000 Received: from hqnvemgw02.nvidia.com (172.16.227.111) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 1 Jul 2019 12:41:59 +0000 Received: from vidyas-desktop.nvidia.com (Not Verified[10.24.37.38]) by hqnvemgw02.nvidia.com with Trustwave SEG (v7,5,8,10121) id ; Mon, 01 Jul 2019 05:41:58 -0700 From: Vidya Sagar To: , , , , , , , , , , CC: , , , , , , , , , , Subject: [PATCH V12 12/12] PCI: tegra: Add Tegra194 PCIe support Date: Mon, 1 Jul 2019 18:10:10 +0530 Message-ID: <20190701124010.7484-13-vidyas@nvidia.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190701124010.7484-1-vidyas@nvidia.com> References: <20190701124010.7484-1-vidyas@nvidia.com> X-NVConfidentiality: public MIME-Version: 1.0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1561984917; bh=zyTKC19qus+/AVpiSD9Act23h7seUZw1v9Mm4rMDpJk=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:X-NVConfidentiality:MIME-Version: Content-Type; b=hOn4xazLFlCMurB0tXU2VQd7D4D2sVMOCPQntTW9eMlFrSfVM5QwEy8/CY00I+G9k S0/kMlHS6rxzKygUy1aMMfMY6yqzeNeeW6fcCNhLSTNwCkBQ9vaqkQD5gy6ZVHbZGj 2J7siH4kDDO7nf2mgBNI7ocB/6x7WaB2Cd+rfjiF4NJmpLpXY50Nv9KL/jn6hFnDcI GG+HW2fSeSHO1Ju7MzrD+mD69UvzxBZma8OvJx8ChH/6M58Ya3xyv5MF/C14Isz7Tc 7H5QjfV61Rr7E5hlbi49sWeziYNr5WesNOcToK7enuDtR9CULLo9FhtkghSzkNoCKO ZGLWGgxOz6wVA== Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Add support for Synopsys DesignWare core IP based PCIe host controller present in Tegra194 SoC. 
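As an orientation aid before the driver body, the following minimal sketch shows the general shape in which a DesignWare-core-based host driver hooks into the common dwc framework of this kernel generation: embed a struct dw_pcie, supply dw_pcie_host_ops callbacks, and hand control to dw_pcie_host_init(). The example_* names are placeholders for illustration only; the real callbacks in this patch are tegra_pcie_dw_host_ops and tegra_pcie_dw_host_init(), and the real probe path additionally sets up clocks, resets, regulators, PHYs and BPMP controller state.

/*
 * Minimal sketch (assumed example_* names) of the dwc host registration
 * pattern used by drivers of this generation; not the literal probe path
 * of pcie-tegra194.c.
 */
#include <linux/platform_device.h>
#include "pcie-designware.h"

static int example_host_init(struct pcie_port *pp)
{
	/* SoC-specific bring-up: clocks, resets, PHYs, link training, ... */
	return 0;
}

static const struct dw_pcie_host_ops example_host_ops = {
	.host_init = example_host_init,
};

static int example_register_host(struct platform_device *pdev,
				 struct dw_pcie *pci)
{
	pci->dev = &pdev->dev;
	/* pci->dbi_base must already map the controller's "dbi" aperture */
	pci->pp.ops = &example_host_ops;

	/*
	 * The dwc core maps configuration space, sets up MSI handling,
	 * invokes ->host_init() and enumerates the PCI hierarchy.
	 */
	return dw_pcie_host_init(&pci->pp);
}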
Signed-off-by: Vidya Sagar Acked-by: Thierry Reding --- Changes since [v11]: * None Changes since [v10]: * None Changes since [v9]: * Used _relaxed() versions of readl() & writel() Changes since [v8]: * Made it dependent on ARCH_TEGRA_194_SOC directly Changes since [v7]: * Addressed review comments from Thierry Changes since [v6]: * Removed code around "nvidia,disable-aspm-states" DT property * Refactored code to remove code duplication Changes since [v5]: * Addressed review comments from Thierry Changes since [v4]: * None Changes since [v3]: * None Changes since [v2]: * Changed 'nvidia,init-speed' to 'nvidia,init-link-speed' * Changed 'nvidia,pex-wake' to 'nvidia,wake-gpios' * Removed .runtime_suspend() & .runtime_resume() implementations Changes since [v1]: * Made CONFIG_PCIE_TEGRA194 as 'm' by default from its previous 'y' state * Modified code as per changes made to DT documentation * Refactored code to address Bjorn & Thierry's review comments * Added goto to avoid recursion in tegra_pcie_dw_host_init() API * Merged .scan_bus() of dw_pcie_host_ops implementation to tegra_pcie_dw_host_init() API drivers/pci/controller/dwc/Kconfig | 10 + drivers/pci/controller/dwc/Makefile | 1 + drivers/pci/controller/dwc/pcie-tegra194.c | 1635 ++++++++++++++++++++ 3 files changed, 1646 insertions(+) create mode 100644 drivers/pci/controller/dwc/pcie-tegra194.c diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig index a6ce1ee51b4c..4e7365c31eee 100644 --- a/drivers/pci/controller/dwc/Kconfig +++ b/drivers/pci/controller/dwc/Kconfig @@ -220,6 +220,16 @@ config PCI_MESON and therefore the driver re-uses the DesignWare core functions to implement the driver. +config PCIE_TEGRA194 + tristate "NVIDIA Tegra194 (and later) PCIe controller" + depends on ARCH_TEGRA_194_SOC || COMPILE_TEST + depends on PCI_MSI_IRQ_DOMAIN + select PCIE_DW_HOST + select PHY_TEGRA194_P2U + help + Say Y here if you want support for DesignWare core based PCIe host + controller found in NVIDIA Tegra194 SoC. + config PCIE_UNIPHIER bool "Socionext UniPhier PCIe controllers" depends on ARCH_UNIPHIER || COMPILE_TEST diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile index b085dfd4fab7..b30336181d46 100644 --- a/drivers/pci/controller/dwc/Makefile +++ b/drivers/pci/controller/dwc/Makefile @@ -15,6 +15,7 @@ obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o obj-$(CONFIG_PCI_MESON) += pci-meson.o +obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o # The following drivers are for devices that use the generic ACPI diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c new file mode 100644 index 000000000000..84bd395b7243 --- /dev/null +++ b/drivers/pci/controller/dwc/pcie-tegra194.c @@ -0,0 +1,1635 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * PCIe host controller driver for Tegra194 SoC + * + * Copyright (C) 2019 NVIDIA Corporation. 
+ * + * Author: Vidya Sagar + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "pcie-designware.h" +#include +#include +#include "../../pci.h" + +#define APPL_PINMUX 0x0 +#define APPL_PINMUX_PEX_RST BIT(0) +#define APPL_PINMUX_CLKREQ_OVERRIDE_EN BIT(2) +#define APPL_PINMUX_CLKREQ_OVERRIDE BIT(3) +#define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN BIT(4) +#define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE BIT(5) +#define APPL_PINMUX_CLKREQ_OUT_OVRD_EN BIT(9) +#define APPL_PINMUX_CLKREQ_OUT_OVRD BIT(10) + +#define APPL_CTRL 0x4 +#define APPL_CTRL_SYS_PRE_DET_STATE BIT(6) +#define APPL_CTRL_LTSSM_EN BIT(7) +#define APPL_CTRL_HW_HOT_RST_EN BIT(20) +#define APPL_CTRL_HW_HOT_RST_MODE_MASK GENMASK(1, 0) +#define APPL_CTRL_HW_HOT_RST_MODE_SHIFT 22 +#define APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST 0x1 + +#define APPL_INTR_EN_L0_0 0x8 +#define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN BIT(0) +#define APPL_INTR_EN_L0_0_MSI_RCV_INT_EN BIT(4) +#define APPL_INTR_EN_L0_0_INT_INT_EN BIT(8) +#define APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN BIT(19) +#define APPL_INTR_EN_L0_0_SYS_INTR_EN BIT(30) +#define APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN BIT(31) + +#define APPL_INTR_STATUS_L0 0xC +#define APPL_INTR_STATUS_L0_LINK_STATE_INT BIT(0) +#define APPL_INTR_STATUS_L0_INT_INT BIT(8) +#define APPL_INTR_STATUS_L0_CDM_REG_CHK_INT BIT(18) + +#define APPL_INTR_EN_L1_0_0 0x1C +#define APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN BIT(1) + +#define APPL_INTR_STATUS_L1_0_0 0x20 +#define APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED BIT(1) + +#define APPL_INTR_STATUS_L1_1 0x2C +#define APPL_INTR_STATUS_L1_2 0x30 +#define APPL_INTR_STATUS_L1_3 0x34 +#define APPL_INTR_STATUS_L1_6 0x3C +#define APPL_INTR_STATUS_L1_7 0x40 + +#define APPL_INTR_EN_L1_8_0 0x44 +#define APPL_INTR_EN_L1_8_BW_MGT_INT_EN BIT(2) +#define APPL_INTR_EN_L1_8_AUTO_BW_INT_EN BIT(3) +#define APPL_INTR_EN_L1_8_INTX_EN BIT(11) +#define APPL_INTR_EN_L1_8_AER_INT_EN BIT(15) + +#define APPL_INTR_STATUS_L1_8_0 0x4C +#define APPL_INTR_STATUS_L1_8_0_EDMA_INT_MASK GENMASK(11, 6) +#define APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS BIT(2) +#define APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS BIT(3) + +#define APPL_INTR_STATUS_L1_9 0x54 +#define APPL_INTR_STATUS_L1_10 0x58 +#define APPL_INTR_STATUS_L1_11 0x64 +#define APPL_INTR_STATUS_L1_13 0x74 +#define APPL_INTR_STATUS_L1_14 0x78 +#define APPL_INTR_STATUS_L1_15 0x7C +#define APPL_INTR_STATUS_L1_17 0x88 + +#define APPL_INTR_EN_L1_18 0x90 +#define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMPLT BIT(2) +#define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR BIT(1) +#define APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0) + +#define APPL_INTR_STATUS_L1_18 0x94 +#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT BIT(2) +#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR BIT(1) +#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0) + +#define APPL_MSI_CTRL_2 0xB0 + +#define APPL_LTR_MSG_1 0xC4 +#define LTR_MSG_REQ BIT(15) +#define LTR_MST_NO_SNOOP_SHIFT 16 + +#define APPL_LTR_MSG_2 0xC8 +#define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE BIT(3) + +#define APPL_LINK_STATUS 0xCC +#define APPL_LINK_STATUS_RDLH_LINK_UP BIT(0) + +#define APPL_DEBUG 0xD0 +#define APPL_DEBUG_PM_LINKST_IN_L2_LAT BIT(21) +#define APPL_DEBUG_PM_LINKST_IN_L0 0x11 +#define APPL_DEBUG_LTSSM_STATE_MASK GENMASK(8, 3) +#define APPL_DEBUG_LTSSM_STATE_SHIFT 3 +#define LTSSM_STATE_PRE_DETECT 5 + +#define APPL_RADM_STATUS 
0xE4 +#define APPL_PM_XMT_TURNOFF_STATE BIT(0) + +#define APPL_DM_TYPE 0x100 +#define APPL_DM_TYPE_MASK GENMASK(3, 0) +#define APPL_DM_TYPE_RP 0x4 +#define APPL_DM_TYPE_EP 0x0 + +#define APPL_CFG_BASE_ADDR 0x104 +#define APPL_CFG_BASE_ADDR_MASK GENMASK(31, 12) + +#define APPL_CFG_IATU_DMA_BASE_ADDR 0x108 +#define APPL_CFG_IATU_DMA_BASE_ADDR_MASK GENMASK(31, 18) + +#define APPL_CFG_MISC 0x110 +#define APPL_CFG_MISC_SLV_EP_MODE BIT(14) +#define APPL_CFG_MISC_ARCACHE_MASK GENMASK(13, 10) +#define APPL_CFG_MISC_ARCACHE_SHIFT 10 +#define APPL_CFG_MISC_ARCACHE_VAL 3 + +#define APPL_CFG_SLCG_OVERRIDE 0x114 +#define APPL_CFG_SLCG_OVERRIDE_SLCG_EN_MASTER BIT(0) + +#define APPL_CAR_RESET_OVRD 0x12C +#define APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N BIT(0) + +#define IO_BASE_IO_DECODE BIT(0) +#define IO_BASE_IO_DECODE_BIT8 BIT(8) + +#define CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE BIT(0) +#define CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE BIT(16) + +#define CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF 0x718 +#define CFG_TIMER_CTRL_ACK_NAK_SHIFT (19) + +#define EVENT_COUNTER_ALL_CLEAR 0x3 +#define EVENT_COUNTER_ENABLE_ALL 0x7 +#define EVENT_COUNTER_ENABLE_SHIFT 2 +#define EVENT_COUNTER_EVENT_SEL_MASK GENMASK(7, 0) +#define EVENT_COUNTER_EVENT_SEL_SHIFT 16 +#define EVENT_COUNTER_EVENT_Tx_L0S 0x2 +#define EVENT_COUNTER_EVENT_Rx_L0S 0x3 +#define EVENT_COUNTER_EVENT_L1 0x5 +#define EVENT_COUNTER_EVENT_L1_1 0x7 +#define EVENT_COUNTER_EVENT_L1_2 0x8 +#define EVENT_COUNTER_GROUP_SEL_SHIFT 24 +#define EVENT_COUNTER_GROUP_5 0x5 + +#define PORT_LOGIC_ACK_F_ASPM_CTRL 0x70C +#define ENTER_ASPM BIT(30) +#define L0S_ENTRANCE_LAT_SHIFT 24 +#define L0S_ENTRANCE_LAT_MASK GENMASK(26, 24) +#define L1_ENTRANCE_LAT_SHIFT 27 +#define L1_ENTRANCE_LAT_MASK GENMASK(29, 27) +#define N_FTS_SHIFT 8 +#define N_FTS_MASK GENMASK(7, 0) +#define N_FTS_VAL 52 + +#define PORT_LOGIC_GEN2_CTRL 0x80C +#define PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE BIT(17) +#define FTS_MASK GENMASK(7, 0) +#define FTS_VAL 52 + +#define PORT_LOGIC_MSI_CTRL_INT_0_EN 0x828 + +#define GEN3_EQ_CONTROL_OFF 0x8a8 +#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT 8 +#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK GENMASK(23, 8) +#define GEN3_EQ_CONTROL_OFF_FB_MODE_MASK GENMASK(3, 0) + +#define GEN3_RELATED_OFF 0x890 +#define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL BIT(0) +#define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16) +#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24 +#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24) + +#define PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT 0x8D0 +#define AMBA_ERROR_RESPONSE_CRS_SHIFT 3 +#define AMBA_ERROR_RESPONSE_CRS_MASK GENMASK(1, 0) +#define AMBA_ERROR_RESPONSE_CRS_OKAY 0 +#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFFFFFF 1 +#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 2 + +#define PORT_LOGIC_MSIX_DOORBELL 0x948 + +#define CAP_SPCIE_CAP_OFF 0x154 +#define CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK GENMASK(3, 0) +#define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK GENMASK(11, 8) +#define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT 8 + +#define PL16G_CAP_OFF_DSP_16G_TX_PRESET_MASK GENMASK(3, 0) +#define PL16G_CAP_OFF_USP_16G_TX_PRESET_MASK GENMASK(7, 4) +#define PL16G_CAP_OFF_USP_16G_TX_PRESET_SHIFT 4 + +#define PME_ACK_TIMEOUT 10000 + +#define LTSSM_TIMEOUT 50000 /* 50ms */ + +#define GEN3_GEN4_EQ_PRESET_INIT 5 + +#define GEN1_CORE_CLK_FREQ 62500000 +#define GEN2_CORE_CLK_FREQ 125000000 +#define GEN3_CORE_CLK_FREQ 250000000 +#define GEN4_CORE_CLK_FREQ 500000000 + +static const unsigned int pcie_gen_freq[] = { + GEN1_CORE_CLK_FREQ, + GEN2_CORE_CLK_FREQ, + 
GEN3_CORE_CLK_FREQ, + GEN4_CORE_CLK_FREQ +}; + +static const u32 event_cntr_ctrl_offset[] = { + 0x1d8, + 0x1a8, + 0x1a8, + 0x1a8, + 0x1c4, + 0x1d8 +}; + +static const u32 event_cntr_data_offset[] = { + 0x1dc, + 0x1ac, + 0x1ac, + 0x1ac, + 0x1c8, + 0x1dc +}; + +struct tegra_pcie_dw { + struct device *dev; + struct resource *appl_res; + struct resource *dbi_res; + struct resource *atu_dma_res; + void __iomem *appl_base; + struct clk *core_clk; + struct reset_control *core_apb_rst; + struct reset_control *core_rst; + struct dw_pcie pci; + enum dw_pcie_device_mode mode; + struct tegra_bpmp *bpmp; + + bool supports_clkreq; + bool enable_cdm_check; + u8 init_link_width; + bool link_state; + u32 msi_ctrl_int; + u32 num_lanes; + u32 max_speed; + u32 cid; + bool update_fc_fixup; + u32 cfg_link_cap_l1sub; + u32 pcie_cap_base; + u32 aspm_cmrt; + u32 aspm_pwr_on_t; + u32 aspm_l0s_enter_lat; + + struct regulator *pex_ctl_supply; + + unsigned int phy_count; + struct phy **phys; + + struct dentry *debugfs; +}; + +static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci) +{ + return container_of(pci, struct tegra_pcie_dw, pci); +} + +static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value, + const u32 reg) +{ + writel_relaxed(value, pcie->appl_base + reg); +} + +static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg) +{ + return readl_relaxed(pcie->appl_base + reg); +} + +struct tegra_pcie_soc { + enum dw_pcie_device_mode mode; +}; + +static void apply_bad_link_workaround(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + u32 current_link_width; + u16 val; + + /* + * NOTE:- Since this scenario is uncommon and link as such is not + * stable anyway, not waiting to confirm if link is really + * transitioning to Gen-2 speed + */ + val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); + if (val & PCI_EXP_LNKSTA_LBMS) { + current_link_width = (val & PCI_EXP_LNKSTA_NLW) >> + PCI_EXP_LNKSTA_NLW_SHIFT; + if (pcie->init_link_width > current_link_width) { + dev_warn(pci->dev, "PCIe link is bad, width reduced\n"); + val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKCTL2); + val &= ~PCI_EXP_LNKCTL2_TLS; + val |= PCI_EXP_LNKCTL2_TLS_2_5GT; + dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKCTL2, val); + + val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKCTL); + val |= PCI_EXP_LNKCTL_RL; + dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKCTL, val); + } + } +} + +static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie) +{ + struct dw_pcie *pci = &pcie->pci; + struct pcie_port *pp = &pci->pp; + u32 val, tmp; + u16 val_w; + + val = appl_readl(pcie, APPL_INTR_STATUS_L0); + if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) { + val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0); + if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) { + appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0); + + /* SBR & Surprise Link Down WAR */ + val = appl_readl(pcie, APPL_CAR_RESET_OVRD); + val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N; + appl_writel(pcie, val, APPL_CAR_RESET_OVRD); + udelay(1); + val = appl_readl(pcie, APPL_CAR_RESET_OVRD); + val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N; + appl_writel(pcie, val, APPL_CAR_RESET_OVRD); + + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL); + val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE; + dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val); + } + } + + if (val & 
APPL_INTR_STATUS_L0_INT_INT) { + val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0); + if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) { + appl_writel(pcie, + APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS, + APPL_INTR_STATUS_L1_8_0); + apply_bad_link_workaround(pp); + } + if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) { + appl_writel(pcie, + APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS, + APPL_INTR_STATUS_L1_8_0); + + val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKSTA); + dev_dbg(pci->dev, "Link Speed : Gen-%u\n", val_w & + PCI_EXP_LNKSTA_CLS); + } + } + + val = appl_readl(pcie, APPL_INTR_STATUS_L0); + if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) { + val = appl_readl(pcie, APPL_INTR_STATUS_L1_18); + tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS); + if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) { + dev_info(pci->dev, "CDM check complete\n"); + tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE; + } + if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) { + dev_err(pci->dev, "CDM comparison mismatch\n"); + tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR; + } + if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) { + dev_err(pci->dev, "CDM Logic error\n"); + tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR; + } + dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp); + tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR); + dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp); + } + + return IRQ_HANDLED; +} + +static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg) +{ + struct tegra_pcie_dw *pcie = arg; + + if (pcie->mode == DW_PCIE_RC_TYPE) + return tegra_pcie_rp_irq_handler(pcie); + + return IRQ_NONE; +} + +static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size, + u32 *val) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + + /* + * This is an endpoint mode specific register happen to appear even + * when controller is operating in root port mode and system hangs + * when it is accessed with link being in ASPM-L1 state. + * So skip accessing it altogether + */ + if (where == PORT_LOGIC_MSIX_DOORBELL) { + *val = 0x00000000; + return PCIBIOS_SUCCESSFUL; + } + + return dw_pcie_read(pci->dbi_base + where, size, val); +} + +static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size, + u32 val) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + + /* + * This is an endpoint mode specific register happen to appear even + * when controller is operating in root port mode and system hangs + * when it is accessed with link being in ASPM-L1 state. 
+ * So skip accessing it altogether + */ + if (where == PORT_LOGIC_MSIX_DOORBELL) + return PCIBIOS_SUCCESSFUL; + + return dw_pcie_write(pci->dbi_base + where, size, val); +} + +#if defined(CONFIG_PCIEASPM) +static void disable_aspm_l11(struct tegra_pcie_dw *pcie) +{ + u32 val; + + val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub); + val &= ~PCI_L1SS_CAP_ASPM_L1_1; + dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val); +} + +static void disable_aspm_l12(struct tegra_pcie_dw *pcie) +{ + u32 val; + + val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub); + val &= ~PCI_L1SS_CAP_ASPM_L1_2; + dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val); +} + +static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event) +{ + u32 val; + + val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]); + val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT); + val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT; + val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT; + val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT; + dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val); + val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]); + return val; +} + +static int aspm_state_cnt(struct seq_file *s, void *data) +{ + struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *) + dev_get_drvdata(s->private); + u32 val; + + seq_printf(s, "Tx L0s entry count : %u\n", + event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S)); + + seq_printf(s, "Rx L0s entry count : %u\n", + event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S)); + + seq_printf(s, "Link L1 entry count : %u\n", + event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1)); + + seq_printf(s, "Link L1.1 entry count : %u\n", + event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1)); + + seq_printf(s, "Link L1.2 entry count : %u\n", + event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2)); + + /* Clear all counters */ + dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], + EVENT_COUNTER_ALL_CLEAR); + + /* Re-enable counting */ + val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT; + val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT; + dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val); + + return 0; +} +#endif + +static int init_debugfs(struct tegra_pcie_dw *pcie) +{ +#if defined(CONFIG_PCIEASPM) + struct dentry *d; + + d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt", + pcie->debugfs, aspm_state_cnt); + if (IS_ERR_OR_NULL(d)) + dev_err(pcie->dev, + "Failed to create debugfs file \"aspm_state_cnt\"\n"); +#endif + return 0; +} + +static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + u32 val; + u16 val_w; + + val = appl_readl(pcie, APPL_INTR_EN_L0_0); + val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN; + appl_writel(pcie, val, APPL_INTR_EN_L0_0); + + val = appl_readl(pcie, APPL_INTR_EN_L1_0_0); + val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN; + appl_writel(pcie, val, APPL_INTR_EN_L1_0_0); + + if (pcie->enable_cdm_check) { + val = appl_readl(pcie, APPL_INTR_EN_L0_0); + val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN; + appl_writel(pcie, val, APPL_INTR_EN_L0_0); + + val = appl_readl(pcie, APPL_INTR_EN_L1_18); + val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR; + val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR; + appl_writel(pcie, val, APPL_INTR_EN_L1_18); + } + + val_w = 
dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base + + PCI_EXP_LNKSTA); + pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >> + PCI_EXP_LNKSTA_NLW_SHIFT; + + val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base + + PCI_EXP_LNKCTL); + val_w |= PCI_EXP_LNKCTL_LBMIE; + dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL, + val_w); +} + +static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + u32 val; + + /* Enable legacy interrupt generation */ + val = appl_readl(pcie, APPL_INTR_EN_L0_0); + val |= APPL_INTR_EN_L0_0_SYS_INTR_EN; + val |= APPL_INTR_EN_L0_0_INT_INT_EN; + appl_writel(pcie, val, APPL_INTR_EN_L0_0); + + val = appl_readl(pcie, APPL_INTR_EN_L1_8_0); + val |= APPL_INTR_EN_L1_8_INTX_EN; + val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN; + val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN; + if (IS_ENABLED(CONFIG_PCIEAER)) + val |= APPL_INTR_EN_L1_8_AER_INT_EN; + appl_writel(pcie, val, APPL_INTR_EN_L1_8_0); +} + +static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + u32 val; + + dw_pcie_msi_init(pp); + + /* Enable MSI interrupt generation */ + val = appl_readl(pcie, APPL_INTR_EN_L0_0); + val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN; + val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN; + appl_writel(pcie, val, APPL_INTR_EN_L0_0); +} + +static void tegra_pcie_enable_interrupts(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + + /* Clear interrupt statuses before enabling interrupts */ + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15); + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17); + + tegra_pcie_enable_system_interrupts(pp); + tegra_pcie_enable_legacy_interrupts(pp); + if (IS_ENABLED(CONFIG_PCI_MSI)) + tegra_pcie_enable_msi_interrupts(pp); +} + +static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie) +{ + struct dw_pcie *pci = &pcie->pci; + u32 val, offset, i; + + /* Program init preset */ + for (i = 0; i < pcie->num_lanes; i++) { + dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF + + (i * 2), 2, &val); + val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK; + val |= GEN3_GEN4_EQ_PRESET_INIT; + val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK; + val |= (GEN3_GEN4_EQ_PRESET_INIT << + CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT); + dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF + + (i * 2), 2, val); + + offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_PL) + + PCI_PL_16GT_LE_CTRL; + dw_pcie_read(pci->dbi_base + offset + i, 1, &val); + val &= ~PL16G_CAP_OFF_DSP_16G_TX_PRESET_MASK; + val |= GEN3_GEN4_EQ_PRESET_INIT; + val &= 
~PL16G_CAP_OFF_USP_16G_TX_PRESET_MASK; + val |= (GEN3_GEN4_EQ_PRESET_INIT << + PL16G_CAP_OFF_USP_16G_TX_PRESET_SHIFT); + dw_pcie_write(pci->dbi_base + offset + i, 1, val); + } + + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); + val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); + + val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); + val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK; + val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT); + val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK; + dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val); + + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); + val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; + val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT); + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); + + val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); + val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK; + val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT); + val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK; + dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val); + + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); + val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); +} + +static int tegra_pcie_dw_host_init(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + u32 val, tmp, offset, speed; + unsigned int count; + u16 val_w; + +core_init: + count = 200; +#if defined(CONFIG_PCIEASPM) + offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS); + pcie->cfg_link_cap_l1sub = offset + PCI_L1SS_CAP; +#endif + val = dw_pcie_readl_dbi(pci, PCI_IO_BASE); + val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8); + dw_pcie_writel_dbi(pci, PCI_IO_BASE, val); + + val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE); + val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE; + val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE; + dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val); + + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); + + /* Configure FTS */ + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL); + val &= ~(N_FTS_MASK << N_FTS_SHIFT); + val |= N_FTS_VAL << N_FTS_SHIFT; + dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val); + + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL); + val &= ~FTS_MASK; + val |= FTS_VAL; + dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val); + + /* Enable as 0xFFFF0001 response for CRS */ + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT); + val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT); + val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 << + AMBA_ERROR_RESPONSE_CRS_SHIFT); + dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val); + + /* Configure Max Speed from DT */ + if (pcie->max_speed && pcie->max_speed != -EINVAL) { + val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKCAP); + val &= ~PCI_EXP_LNKCAP_SLS; + val |= pcie->max_speed; + dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, + val); + } + + /* Configure Max lane width from DT */ + val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP); + val &= ~PCI_EXP_LNKCAP_MLW; + val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT); + dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val); + + config_gen3_gen4_eq_presets(pcie); + +#if defined(CONFIG_PCIEASPM) + /* Enable ASPM counters */ + val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT; + val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT; + 
dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val); + + /* Program T_cmrt and T_pwr_on values */ + val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub); + val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE); + val |= (pcie->aspm_cmrt << 8); + val |= (pcie->aspm_pwr_on_t << 19); + dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val); + + /* Program L0s and L1 entrance latencies */ + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL); + val &= ~L0S_ENTRANCE_LAT_MASK; + val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT); + val |= ENTER_ASPM; + dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val); +#endif + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); + val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); + + if (pcie->update_fc_fixup) { + val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF); + val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT; + dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val); + } + + dw_pcie_setup_rc(pp); + + clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ); + + /* Assert RST */ + val = appl_readl(pcie, APPL_PINMUX); + val &= ~APPL_PINMUX_PEX_RST; + appl_writel(pcie, val, APPL_PINMUX); + + usleep_range(100, 200); + + /* Enable LTSSM */ + val = appl_readl(pcie, APPL_CTRL); + val |= APPL_CTRL_LTSSM_EN; + appl_writel(pcie, val, APPL_CTRL); + + /* De-assert RST */ + val = appl_readl(pcie, APPL_PINMUX); + val |= APPL_PINMUX_PEX_RST; + appl_writel(pcie, val, APPL_PINMUX); + + msleep(100); + + val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); + while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) { + if (count) { + dev_dbg(pci->dev, "Waiting for link up\n"); + usleep_range(1000, 2000); + val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + + PCI_EXP_LNKSTA); + count--; + continue; + } + + val = appl_readl(pcie, APPL_DEBUG); + val &= APPL_DEBUG_LTSSM_STATE_MASK; + val >>= APPL_DEBUG_LTSSM_STATE_SHIFT; + tmp = appl_readl(pcie, APPL_LINK_STATUS); + tmp &= APPL_LINK_STATUS_RDLH_LINK_UP; + if (!(val == 0x11 && !tmp)) { + dev_info(pci->dev, "Link is down\n"); + return 0; + } + + dev_info(pci->dev, "Link is down in DLL"); + dev_info(pci->dev, "Trying again with DLFE disabled\n"); + /* Disable LTSSM */ + val = appl_readl(pcie, APPL_CTRL); + val &= ~APPL_CTRL_LTSSM_EN; + appl_writel(pcie, val, APPL_CTRL); + + reset_control_assert(pcie->core_rst); + reset_control_deassert(pcie->core_rst); + + offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF); + val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP); + val &= ~PCI_DLF_EXCHANGE_ENABLE; + dw_pcie_writel_dbi(pci, offset, val); + + /* Retry now with DLF Exchange disabled */ + goto core_init; + } + dev_dbg(pci->dev, "Link is up\n"); + + speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) & + PCI_EXP_LNKSTA_CLS; + clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]); + + tegra_pcie_enable_interrupts(pp); + + return 0; +} + +static int tegra_pcie_dw_link_up(struct dw_pcie *pci) +{ + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); + u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); + + return !!(val & PCI_EXP_LNKSTA_DLLLA); +} + +static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp) +{ + pp->num_vectors = MAX_MSI_IRQS; +} + +static const struct dw_pcie_ops tegra_dw_pcie_ops = { + .link_up = tegra_pcie_dw_link_up, +}; + +static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = { + .rd_own_conf = tegra_pcie_dw_rd_own_conf, + .wr_own_conf = tegra_pcie_dw_wr_own_conf, + 
.host_init = tegra_pcie_dw_host_init, + .set_num_vectors = tegra_pcie_set_msi_vec_num, +}; + +static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie) +{ + unsigned int phy_count = pcie->phy_count; + + while (phy_count--) { + phy_power_off(pcie->phys[phy_count]); + phy_exit(pcie->phys[phy_count]); + } +} + +static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie) +{ + unsigned int i; + int ret; + + for (i = 0; i < pcie->phy_count; i++) { + ret = phy_init(pcie->phys[i]); + if (ret < 0) + goto phy_power_off; + + ret = phy_power_on(pcie->phys[i]); + if (ret < 0) + goto phy_exit; + } + + return 0; + +phy_power_off: + while (i--) { + phy_power_off(pcie->phys[i]); +phy_exit: + phy_exit(pcie->phys[i]); + } + + return ret; +} + +static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie) +{ + struct device_node *np = pcie->dev->of_node; + int ret; + + ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt); + if (ret < 0) { + dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret); + return ret; + } + + ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us", + &pcie->aspm_pwr_on_t); + if (ret < 0) + dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n", + ret); + + ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us", + &pcie->aspm_l0s_enter_lat); + if (ret < 0) + dev_info(pcie->dev, + "Failed to read ASPM L0s Entrance latency: %d\n", ret); + + ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes); + if (ret < 0) { + dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret); + return ret; + } + + pcie->max_speed = of_pci_get_max_link_speed(np); + + ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid); + if (ret) { + dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret); + return ret; + } + + pcie->phy_count = of_property_count_strings(np, "phy-names"); + if (pcie->phy_count < 0) { + dev_err(pcie->dev, "Failed to find PHY entries: %d\n", + pcie->phy_count); + return pcie->phy_count; + } + + if (of_property_read_bool(np, "nvidia,update-fc-fixup")) + pcie->update_fc_fixup = true; + + pcie->supports_clkreq = + of_property_read_bool(pcie->dev->of_node, "supports-clkreq"); + + pcie->enable_cdm_check = + of_property_read_bool(np, "snps,enable-cdm-check"); + + return 0; +} + +static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie, + bool enable) +{ + struct mrq_uphy_response resp; + struct tegra_bpmp_message msg; + struct mrq_uphy_request req; + int err; + + if (pcie->cid == 5) + return 0; + + memset(&req, 0, sizeof(req)); + memset(&resp, 0, sizeof(resp)); + + req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE; + req.controller_state.pcie_controller = pcie->cid; + req.controller_state.enable = enable; + + memset(&msg, 0, sizeof(msg)); + msg.mrq = MRQ_UPHY; + msg.tx.data = &req; + msg.tx.size = sizeof(req); + msg.rx.data = &resp; + msg.rx.size = sizeof(resp); + + if (irqs_disabled()) + err = tegra_bpmp_transfer_atomic(pcie->bpmp, &msg); + else + err = tegra_bpmp_transfer(pcie->bpmp, &msg); + + return err; +} + +static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie) +{ + struct pcie_port *pp = &pcie->pci.pp; + struct pci_dev *pdev; + struct pci_bus *child; + + list_for_each_entry(child, &pp->root_bus->children, node) { + /* Bring downstream devices to D0 if they are not already in */ + if (child->parent == pp->root_bus) + break; + } + list_for_each_entry(pdev, &child->devices, bus_list) { + if (PCI_SLOT(pdev->devfn) == 0) { + if (pci_set_power_state(pdev, PCI_D0)) + dev_err(pcie->dev, + "Failed to 
transition %s to D0 state\n", + dev_name(&pdev->dev)); + } + } +} + +static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie, + bool en_hw_hot_rst) +{ + int ret; + u32 val; + + ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true); + if (ret) { + dev_err(pcie->dev, + "Failed to enable controller %u: %d\n", pcie->cid, ret); + return ret; + } + + ret = regulator_enable(pcie->pex_ctl_supply); + if (ret < 0) { + dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret); + goto fail_reg_en; + } + + ret = clk_prepare_enable(pcie->core_clk); + if (ret) { + dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret); + goto fail_core_clk; + } + + ret = reset_control_deassert(pcie->core_apb_rst); + if (ret) { + dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n", + ret); + goto fail_core_apb_rst; + } + + if (en_hw_hot_rst) { + /* Enable HW_HOT_RST mode */ + val = appl_readl(pcie, APPL_CTRL); + val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << + APPL_CTRL_HW_HOT_RST_MODE_SHIFT); + val |= APPL_CTRL_HW_HOT_RST_EN; + appl_writel(pcie, val, APPL_CTRL); + } + + ret = tegra_pcie_enable_phy(pcie); + if (ret) { + dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret); + goto fail_phy; + } + + /* Update CFG base address */ + appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK, + APPL_CFG_BASE_ADDR); + + /* Configure this core for RP mode operation */ + appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE); + + appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE); + + val = appl_readl(pcie, APPL_CTRL); + appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL); + + val = appl_readl(pcie, APPL_CFG_MISC); + val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT); + appl_writel(pcie, val, APPL_CFG_MISC); + + if (!pcie->supports_clkreq) { + val = appl_readl(pcie, APPL_PINMUX); + val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN; + val |= APPL_PINMUX_CLKREQ_OUT_OVRD; + appl_writel(pcie, val, APPL_PINMUX); + } + + /* Update iATU_DMA base address */ + appl_writel(pcie, + pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK, + APPL_CFG_IATU_DMA_BASE_ADDR); + + reset_control_deassert(pcie->core_rst); + + pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, + PCI_CAP_ID_EXP); + +#if defined(CONFIG_PCIEASPM) + /* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */ + if (!pcie->supports_clkreq) { + disable_aspm_l11(pcie); + disable_aspm_l12(pcie); + } +#endif + return ret; + +fail_phy: + reset_control_assert(pcie->core_apb_rst); +fail_core_apb_rst: + clk_disable_unprepare(pcie->core_clk); +fail_core_clk: + regulator_disable(pcie->pex_ctl_supply); +fail_reg_en: + tegra_pcie_bpmp_set_ctrl_state(pcie, false); + + return ret; +} + +static int __deinit_controller(struct tegra_pcie_dw *pcie) +{ + int ret; + + ret = reset_control_assert(pcie->core_rst); + if (ret) { + dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n", + ret); + return ret; + } + tegra_pcie_disable_phy(pcie); + ret = reset_control_assert(pcie->core_apb_rst); + if (ret) { + dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret); + return ret; + } + clk_disable_unprepare(pcie->core_clk); + ret = regulator_disable(pcie->pex_ctl_supply); + if (ret) { + dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret); + return ret; + } + ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false); + if (ret) { + dev_err(pcie->dev, "Failed to disable controller %d: %d\n", + pcie->cid, ret); + return ret; + } + return ret; +} + +static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie) +{ + struct dw_pcie *pci = 
&pcie->pci; + struct pcie_port *pp = &pci->pp; + int ret; + + ret = tegra_pcie_config_controller(pcie, false); + if (ret < 0) + return ret; + + pp->root_bus_nr = -1; + pp->ops = &tegra_pcie_dw_host_ops; + + ret = dw_pcie_host_init(pp); + if (ret < 0) { + dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret); + goto fail_host_init; + } + + return 0; + +fail_host_init: + return __deinit_controller(pcie); +} + +static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie) +{ + u32 val; + + if (!tegra_pcie_dw_link_up(&pcie->pci)) + return 0; + + val = appl_readl(pcie, APPL_RADM_STATUS); + val |= APPL_PM_XMT_TURNOFF_STATE; + appl_writel(pcie, val, APPL_RADM_STATUS); + + return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val, + val & APPL_DEBUG_PM_LINKST_IN_L2_LAT, + 1, PME_ACK_TIMEOUT); +} + +static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie) +{ + u32 data; + int err; + + if (!tegra_pcie_dw_link_up(&pcie->pci)) { + dev_dbg(pcie->dev, "PCIe link is not up...!\n"); + return; + } + + if (tegra_pcie_try_link_l2(pcie)) { + dev_info(pcie->dev, "Link didn't transition to L2 state\n"); + /* + * TX lane clock freq will reset to Gen1 only if link is in L2 + * or detect state. + * So apply pex_rst to end point to force RP to go into detect + * state + */ + data = appl_readl(pcie, APPL_PINMUX); + data &= ~APPL_PINMUX_PEX_RST; + appl_writel(pcie, data, APPL_PINMUX); + + err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, + data, + ((data & + APPL_DEBUG_LTSSM_STATE_MASK) >> + APPL_DEBUG_LTSSM_STATE_SHIFT) == + LTSSM_STATE_PRE_DETECT, + 1, LTSSM_TIMEOUT); + if (err) { + dev_info(pcie->dev, "Link didn't go to detect state\n"); + } else { + /* Disable LTSSM after link is in detect state */ + data = appl_readl(pcie, APPL_CTRL); + data &= ~APPL_CTRL_LTSSM_EN; + appl_writel(pcie, data, APPL_CTRL); + } + } + /* + * DBI registers may not be accessible after this as PLL-E would be + * down depending on how CLKREQ is pulled by end point + */ + data = appl_readl(pcie, APPL_PINMUX); + data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE); + /* Cut REFCLK to slot */ + data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN; + data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE; + appl_writel(pcie, data, APPL_PINMUX); +} + +static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie) +{ + /* + * link doesn't go into L2 state with some of the endpoints with Tegra + * if they are not in D0 state. 
So, need to make sure that immediate + * downstream devices are in D0 state before sending PME_TurnOff to put + * link into L2 state + */ + tegra_pcie_downstream_dev_to_D0(pcie); + dw_pcie_host_deinit(&pcie->pci.pp); + tegra_pcie_dw_pme_turnoff(pcie); + return __deinit_controller(pcie); +} + +static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie) +{ + struct pcie_port *pp = &pcie->pci.pp; + struct device *dev = pcie->dev; + char *name; + int ret; + + if (IS_ENABLED(CONFIG_PCI_MSI)) { + pp->msi_irq = of_irq_get_byname(dev->of_node, "msi"); + if (!pp->msi_irq) { + dev_err(dev, "Failed to get MSI interrupt\n"); + return -ENODEV; + } + } + + pm_runtime_enable(dev); + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n", + ret); + pm_runtime_disable(dev); + return ret; + } + + tegra_pcie_init_controller(pcie); + + pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci); + + if (!pcie->link_state) { + ret = -ENOMEDIUM; + goto fail_host_init; + } + + name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node); + if (!name) { + ret = -ENOMEM; + goto fail_host_init; + } + + pcie->debugfs = debugfs_create_dir(name, NULL); + if (!pcie->debugfs) + dev_err(dev, "Failed to create debugfs\n"); + else + init_debugfs(pcie); + + return ret; + +fail_host_init: + tegra_pcie_deinit_controller(pcie); + pm_runtime_put_sync(dev); + pm_runtime_disable(dev); + return ret; +} + +static const struct tegra_pcie_soc tegra_pcie_rc_of_data = { + .mode = DW_PCIE_RC_TYPE, +}; + +static const struct of_device_id tegra_pcie_dw_of_match[] = { + { + .compatible = "nvidia,tegra194-pcie", + .data = &tegra_pcie_rc_of_data, + }, + {}, +}; +MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match); + +static int tegra_pcie_dw_probe(struct platform_device *pdev) +{ + const struct tegra_pcie_soc *data; + struct device *dev = &pdev->dev; + struct resource *atu_dma_res; + struct tegra_pcie_dw *pcie; + struct resource *dbi_res; + struct pcie_port *pp; + struct dw_pcie *pci; + struct phy **phys; + char *name; + int ret; + u32 i; + + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); + if (!pcie) + return -ENOMEM; + + pci = &pcie->pci; + pci->dev = &pdev->dev; + pci->ops = &tegra_dw_pcie_ops; + pp = &pci->pp; + pcie->dev = &pdev->dev; + + data = (struct tegra_pcie_soc *)of_device_get_match_data(dev); + if (!data) + return -EINVAL; + pcie->mode = (enum dw_pcie_device_mode)data->mode; + + ret = tegra_pcie_dw_parse_dt(pcie); + if (ret < 0) { + dev_err(dev, "Failed to parse device tree: %d\n", ret); + return ret; + } + + pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl"); + if (IS_ERR(pcie->pex_ctl_supply)) { + dev_err(dev, "Failed to get regulator: %ld\n", + PTR_ERR(pcie->pex_ctl_supply)); + return PTR_ERR(pcie->pex_ctl_supply); + } + + pcie->core_clk = devm_clk_get(dev, "core"); + if (IS_ERR(pcie->core_clk)) { + dev_err(dev, "Failed to get core clock: %ld\n", + PTR_ERR(pcie->core_clk)); + return PTR_ERR(pcie->core_clk); + } + + pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, + "appl"); + if (!pcie->appl_res) { + dev_err(dev, "Failed to find \"appl\" region\n"); + return PTR_ERR(pcie->appl_res); + } + pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res); + if (IS_ERR(pcie->appl_base)) + return PTR_ERR(pcie->appl_base); + + pcie->core_apb_rst = devm_reset_control_get(dev, "apb"); + if (IS_ERR(pcie->core_apb_rst)) { + dev_err(dev, "Failed to get APB reset: %ld\n", + PTR_ERR(pcie->core_apb_rst)); + return PTR_ERR(pcie->core_apb_rst); + } + + phys = 
devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL); + if (!phys) + return PTR_ERR(phys); + + for (i = 0; i < pcie->phy_count; i++) { + name = kasprintf(GFP_KERNEL, "p2u-%u", i); + if (!name) { + dev_err(dev, "Failed to create P2U string\n"); + return -ENOMEM; + } + phys[i] = devm_phy_get(dev, name); + kfree(name); + if (IS_ERR(phys[i])) { + ret = PTR_ERR(phys[i]); + dev_err(dev, "Failed to get PHY: %d\n", ret); + return ret; + } + } + + pcie->phys = phys; + + dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); + if (!dbi_res) { + dev_err(dev, "Failed to find \"dbi\" region\n"); + return PTR_ERR(dbi_res); + } + pcie->dbi_res = dbi_res; + + pci->dbi_base = devm_ioremap_resource(dev, dbi_res); + if (IS_ERR(pci->dbi_base)) + return PTR_ERR(pci->dbi_base); + + /* Tegra HW locates DBI2 at a fixed offset from DBI */ + pci->dbi_base2 = pci->dbi_base + 0x1000; + + atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, + "atu_dma"); + if (!atu_dma_res) { + dev_err(dev, "Failed to find \"atu_dma\" region\n"); + return PTR_ERR(atu_dma_res); + } + pcie->atu_dma_res = atu_dma_res; + pci->atu_base = devm_ioremap_resource(dev, atu_dma_res); + if (IS_ERR(pci->atu_base)) + return PTR_ERR(pci->atu_base); + + pcie->core_rst = devm_reset_control_get(dev, "core"); + if (IS_ERR(pcie->core_rst)) { + dev_err(dev, "Failed to get core reset: %ld\n", + PTR_ERR(pcie->core_rst)); + return PTR_ERR(pcie->core_rst); + } + + pp->irq = platform_get_irq_byname(pdev, "intr"); + if (!pp->irq) { + dev_err(dev, "Failed to get \"intr\" interrupt\n"); + return -ENODEV; + } + + ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler, + IRQF_SHARED, "tegra-pcie-intr", pcie); + if (ret) { + dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret); + return ret; + } + + pcie->bpmp = tegra_bpmp_get(dev); + if (IS_ERR(pcie->bpmp)) + return PTR_ERR(pcie->bpmp); + + platform_set_drvdata(pdev, pcie); + + if (pcie->mode == DW_PCIE_RC_TYPE) { + ret = tegra_pcie_config_rp(pcie); + if (ret && ret != -ENOMEDIUM) + goto fail; + else + return 0; + } + +fail: + tegra_bpmp_put(pcie->bpmp); + return ret; +} + +static int tegra_pcie_dw_remove(struct platform_device *pdev) +{ + struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); + + if (pcie->mode != DW_PCIE_RC_TYPE) + return 0; + + if (!pcie->link_state) + return 0; + + debugfs_remove_recursive(pcie->debugfs); + tegra_pcie_deinit_controller(pcie); + pm_runtime_put_sync(pcie->dev); + pm_runtime_disable(pcie->dev); + tegra_bpmp_put(pcie->bpmp); + + return 0; +} + +static int tegra_pcie_dw_suspend_late(struct device *dev) +{ + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); + u32 val; + + if (!pcie->link_state) + return 0; + + /* Enable HW_HOT_RST mode */ + val = appl_readl(pcie, APPL_CTRL); + val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << + APPL_CTRL_HW_HOT_RST_MODE_SHIFT); + val |= APPL_CTRL_HW_HOT_RST_EN; + appl_writel(pcie, val, APPL_CTRL); + + return 0; +} + +static int tegra_pcie_dw_suspend_noirq(struct device *dev) +{ + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); + + if (!pcie->link_state) + return 0; + + /* Save MSI interrupt vector */ + pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci, + PORT_LOGIC_MSI_CTRL_INT_0_EN); + tegra_pcie_downstream_dev_to_D0(pcie); + tegra_pcie_dw_pme_turnoff(pcie); + return __deinit_controller(pcie); +} + +static int tegra_pcie_dw_resume_noirq(struct device *dev) +{ + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); + int ret; + + if (!pcie->link_state) + return 0; + + ret = 
tegra_pcie_config_controller(pcie, true); + if (ret < 0) + return ret; + + ret = tegra_pcie_dw_host_init(&pcie->pci.pp); + if (ret < 0) { + dev_err(dev, "Failed to init host: %d\n", ret); + goto fail_host_init; + } + + /* Restore MSI interrupt vector */ + dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN, + pcie->msi_ctrl_int); + + return 0; +fail_host_init: + return __deinit_controller(pcie); +} + +static int tegra_pcie_dw_resume_early(struct device *dev) +{ + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); + u32 val; + + if (!pcie->link_state) + return 0; + + /* Disable HW_HOT_RST mode */ + val = appl_readl(pcie, APPL_CTRL); + val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << + APPL_CTRL_HW_HOT_RST_MODE_SHIFT); + val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST << + APPL_CTRL_HW_HOT_RST_MODE_SHIFT; + val &= ~APPL_CTRL_HW_HOT_RST_EN; + appl_writel(pcie, val, APPL_CTRL); + + return 0; +} + +static void tegra_pcie_dw_shutdown(struct platform_device *pdev) +{ + struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); + + if (pcie->mode != DW_PCIE_RC_TYPE) + return; + + if (!pcie->link_state) + return; + + debugfs_remove_recursive(pcie->debugfs); + tegra_pcie_downstream_dev_to_D0(pcie); + + disable_irq(pcie->pci.pp.irq); + if (IS_ENABLED(CONFIG_PCI_MSI)) + disable_irq(pcie->pci.pp.msi_irq); + + tegra_pcie_dw_pme_turnoff(pcie); + __deinit_controller(pcie); +} + +static const struct dev_pm_ops tegra_pcie_dw_pm_ops = { + .suspend_late = tegra_pcie_dw_suspend_late, + .suspend_noirq = tegra_pcie_dw_suspend_noirq, + .resume_noirq = tegra_pcie_dw_resume_noirq, + .resume_early = tegra_pcie_dw_resume_early, +}; + +static struct platform_driver tegra_pcie_dw_driver = { + .probe = tegra_pcie_dw_probe, + .remove = tegra_pcie_dw_remove, + .shutdown = tegra_pcie_dw_shutdown, + .driver = { + .name = "tegra194-pcie", + .pm = &tegra_pcie_dw_pm_ops, + .of_match_table = tegra_pcie_dw_of_match, + }, +}; +module_platform_driver(tegra_pcie_dw_driver); + +MODULE_AUTHOR("Vidya Sagar "); +MODULE_DESCRIPTION("NVIDIA PCIe host controller driver"); +MODULE_LICENSE("GPL v2");
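
The register programming above follows one read-modify-write idiom on the DBI space: read with dw_pcie_readl_dbi(), clear a field mask, OR in the new value, write back with dw_pcie_writel_dbi(). A minimal sketch of that idiom as a local helper is below; the helper and example function names are hypothetical, and only the DesignWare accessors plus the GEN3_* definitions already used in this patch are assumed.

	/*
	 * Sketch only: clear-and-set helper for the DBI read-modify-write
	 * pattern used in config_gen3_gen4_eq_presets() and host_init above.
	 */
	static void tegra_pcie_dbi_clrsetl(struct dw_pcie *pci, u32 reg,
					   u32 clr, u32 set)
	{
		u32 val;

		val = dw_pcie_readl_dbi(pci, reg);
		val &= ~clr;
		val |= set;
		dw_pcie_writel_dbi(pci, reg, val);
	}

	/*
	 * Example: the second equalization pass, i.e. select rate shadow
	 * register 1, then program preset request vector 0x360 with the
	 * feedback mode field cleared.
	 */
	static void example_gen4_eq_pass(struct dw_pcie *pci)
	{
		tegra_pcie_dbi_clrsetl(pci, GEN3_RELATED_OFF,
				       GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK,
				       0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
		tegra_pcie_dbi_clrsetl(pci, GEN3_EQ_CONTROL_OFF,
				       GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK |
				       GEN3_EQ_CONTROL_OFF_FB_MODE_MASK,
				       0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
	}

Folding the idiom into a helper keeps each field update on one line; the patch spells it out explicitly, which matches the style of the other dwc drivers.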
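
tegra_pcie_dw_host_init() gives link training one fallback: if DLL training times out while the LTSSM sits in state 0x11 with RDLH reporting link-down, it clears PCI_DLF_EXCHANGE_ENABLE in the Data Link Feature capability, resets the core and jumps back to core_init. A condensed sketch of just that decision logic is below, assuming the register and constant definitions from this driver; the function name, the retried flag and the simplified error return are illustrative, and the real path re-runs the full core-init sequence (including the core reset and LTSSM re-enable) before retrying.

	static int example_wait_link_up(struct tegra_pcie_dw *pcie)
	{
		struct dw_pcie *pci = &pcie->pci;
		bool retried = false;
		unsigned int count;
		u32 val, offset;
		u16 val_w;

	retry:
		for (count = 200; count; count--) {
			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
						  PCI_EXP_LNKSTA);
			if (val_w & PCI_EXP_LNKSTA_DLLLA)
				return 0;
			usleep_range(1000, 2000);
		}

		/* LTSSM stuck in 0x11 with RDLH down hints at a DLF handshake issue */
		val = appl_readl(pcie, APPL_DEBUG) & APPL_DEBUG_LTSSM_STATE_MASK;
		val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
		if (retried || val != 0x11 ||
		    (appl_readl(pcie, APPL_LINK_STATUS) &
		     APPL_LINK_STATUS_RDLH_LINK_UP))
			return -ETIMEDOUT;

		/* Disable DLF exchange and train once more */
		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
		val &= ~PCI_DLF_EXCHANGE_ENABLE;
		dw_pcie_writel_dbi(pci, offset + PCI_DLF_CAP, val);

		retried = true;
		goto retry;
	}

In the patch this lives inside tegra_pcie_dw_host_init() as the "goto core_init" path, so every retry also re-programs the FTS, CRS response and equalization settings shown earlier.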
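
tegra_pcie_enable_phy() unwinds with a label placed inside the cleanup loop. For comparison, an equivalent unwind with the power-on failure handled at the call site is sketched below; the function name is hypothetical and the semantics are intended to match the patch: a phy_init() failure powers off and exits all earlier PHYs, a phy_power_on() failure additionally exits the PHY that was just initialized.

	static int example_enable_phy(struct tegra_pcie_dw *pcie)
	{
		unsigned int i;
		int ret;

		for (i = 0; i < pcie->phy_count; i++) {
			ret = phy_init(pcie->phys[i]);
			if (ret < 0)
				goto unwind;

			ret = phy_power_on(pcie->phys[i]);
			if (ret < 0) {
				phy_exit(pcie->phys[i]);
				goto unwind;
			}
		}

		return 0;

	unwind:
		while (i--) {
			phy_power_off(pcie->phys[i]);
			phy_exit(pcie->phys[i]);
		}

		return ret;
	}

The form in the patch is more compact; both versions are meant to leave exactly the PHYs that were brought up fully torn down again.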
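
tegra_pcie_bpmp_set_ctrl_state() builds an MRQ_UPHY request and picks between the sleeping and the atomic BPMP transfer depending on whether local interrupts are disabled at the call site. That selection is the only context-sensitive part; a minimal sketch of it as a standalone wrapper is below (the wrapper name is hypothetical, tegra_bpmp_transfer() and tegra_bpmp_transfer_atomic() are the existing BPMP APIs used in this patch).

	static int example_bpmp_transfer(struct tegra_bpmp *bpmp,
					 struct tegra_bpmp_message *msg)
	{
		/* The sleeping variant cannot be used in atomic context */
		if (irqs_disabled())
			return tegra_bpmp_transfer_atomic(bpmp, msg);

		return tegra_bpmp_transfer(bpmp, msg);
	}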
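
Once the link is up, host_init reads the negotiated speed from the Link Status CLS field (1 = 2.5 GT/s through 4 = 16 GT/s) and uses it, minus one, to index pcie_gen_freq[] for clk_set_rate() on the core clock. A sketch of that step with an illustrative bounds check (the check itself is not in the patch; the function name is hypothetical) is:

	static void example_scale_core_clk(struct tegra_pcie_dw *pcie)
	{
		struct dw_pcie *pci = &pcie->pci;
		u32 speed;

		speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
			PCI_EXP_LNKSTA_CLS;

		/* CLS is 1-based; guard against an unexpected value (illustrative) */
		if (speed >= 1 && speed <= ARRAY_SIZE(pcie_gen_freq))
			clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
	}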