From patchwork Tue May 3 22:51:04 2022
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 12836541
From: Serge Semin
To: Gustavo Pimentel, Vinod Koul, Jingoo Han, Bjorn Helgaas,
    Lorenzo Pieralisi, Frank Li, Manivannan Sadhasivam, Rob Herring,
    Krzysztof Wilczyński
CC: Serge Semin, Alexey Malahov, Pavel Parkhomenko
Subject: [PATCH v2 26/26] PCI: dwc: Add DW eDMA engine support
Date: Wed, 4 May 2022 01:51:04 +0300
Message-ID: <20220503225104.12108-27-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru>
References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: dmaengine@vger.kernel.org

Since the DW eDMA driver now supports eDMA controllers embedded into the
locally accessible DW PCIe Root Ports and Endpoints, we can use the updated
interface to register the DW eDMA as a DMA engine device whenever it is
available. In order to do that the DW PCIe core driver needs to perform some
preparations first. First of all it needs to find out the eDMA controller
CSRs base address, which is accessible either via the Port Logic registers
or via the unrolled iATU/eDMA space. After that it can auto-detect the eDMA
controller availability and the number of its read/write channels. If no
eDMA controller is found, the procedure silently stops with no error
returned. Secondly, the platform is supposed to provide either a combined
IRQ signal or one IRQ per channel. If no valid IRQ set is found, the
procedure also stops with no error returned, so as to stay backward
compatible with platforms where the DW PCIe controller has an embedded eDMA
but no IRQs defined for it. Finally, before actually probing the eDMA
device, the LLP items buffers need to be allocated. After all of that the
DW eDMA can be registered. If the registration succeeds, an info message
with the number of detected read/write eDMA channels is printed to the
system log, in the same way as it is done for the iATU settings.
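For illustration, a glue driver that routes all eDMA channels onto a single,
already-known IRQ line could pre-seed the edma fields before calling
dw_pcie_host_init(): the detection code keeps a pre-set ops pointer and
treats nr_irqs == 1 as a combined IRQ. Below is a minimal sketch under that
assumption; all example_* names are hypothetical and the usual DBI/resource
setup is omitted:

/*
 * Hypothetical glue-driver fragment: pre-seed the eDMA IRQ layout so the
 * core skips its default "dma"/"dmaN" platform IRQ lookup. Only the
 * pci->edma fields, dw_edma_core_ops and dw_pcie_host_init() come from
 * this series; the example_* symbols are made up for illustration.
 */
#include <linux/dma/edma.h>
#include <linux/platform_device.h>

#include "pcie-designware.h"

static int example_edma_irq_vector(struct device *dev, unsigned int nr)
{
	/* All read/write channels share the first platform IRQ here */
	return platform_get_irq(to_platform_device(dev), 0);
}

static struct dw_edma_core_ops example_edma_ops = {
	.irq_vector = example_edma_irq_vector,
};

static int example_pcie_probe(struct platform_device *pdev)
{
	struct dw_pcie *pci;

	pci = devm_kzalloc(&pdev->dev, sizeof(*pci), GFP_KERNEL);
	if (!pci)
		return -ENOMEM;

	pci->dev = &pdev->dev;
	/* DBI/ATU resource mapping would normally be done here */

	/*
	 * dw_pcie_edma_detect() keeps a pre-set ops pointer and treats
	 * nr_irqs == 1 as a single combined IRQ, so the "dma"/"dmaN"
	 * IRQ-name lookup in dw_pcie_edma_irq_verify() is bypassed.
	 */
	pci->edma.nr_irqs = 1;
	pci->edma.ops = &example_edma_ops;

	return dw_pcie_host_init(&pci->pp);
}

A platform with per-channel IRQs would instead either set nr_irqs to the
channel count or simply rely on the default "dma%u" name lookup.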
Signed-off-by: Serge Semin

---

Changelog v2:
- Don't fail eDMA detection procedure if the DW eDMA driver couldn't probe
  the device. That happens if the driver is disabled. (@Manivannan)
- Add "dma" registers resource mapping procedure. (@Manivannan)
- Move the eDMA CSRs space detection into the dw_pcie_map_detect() method.
- Remove eDMA on the dw_pcie_ep_init() internal errors. (@Manivannan)
- Remove eDMA in the dw_pcie_ep_exit() method.
- Move the dw_pcie_edma_detect() method execution to the tail of the
  dw_pcie_ep_init() function.
---
 .../pci/controller/dwc/pcie-designware-ep.c   |  12 +-
 .../pci/controller/dwc/pcie-designware-host.c |  13 +-
 drivers/pci/controller/dwc/pcie-designware.c  | 187 ++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h  |  20 ++
 4 files changed, 229 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index 6cfcfa34e587..a3f3fa15ebe5 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -610,8 +610,11 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
 
 void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct pci_epc *epc = ep->epc;
 
+	dw_pcie_edma_remove(pci);
+
 	pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
 			      epc->mem->window.page_size);
 
@@ -791,6 +794,10 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		goto err_exit_epc_mem;
 	}
 
+	ret = dw_pcie_edma_detect(pci);
+	if (ret)
+		goto err_free_epc_mem;
+
 	if (ep->ops->get_features) {
 		epc_features = ep->ops->get_features(ep);
 		if (epc_features->core_init_notifier)
@@ -799,10 +806,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 
 	ret = dw_pcie_ep_init_complete(ep);
 	if (ret)
-		goto err_free_epc_mem;
+		goto err_remove_edma;
 
 	return 0;
 
+err_remove_edma:
+	dw_pcie_edma_remove(pci);
+
 err_free_epc_mem:
 	pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
 			      epc->mem->window.page_size);
 
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 3cd5b096a427..0ffef8526d54 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -415,14 +415,18 @@ int dw_pcie_host_init(struct pcie_port *pp)
 
 	dw_pcie_iatu_detect(pci);
 
-	ret = dw_pcie_setup_rc(pp);
+	ret = dw_pcie_edma_detect(pci);
 	if (ret)
 		goto err_free_msi;
 
+	ret = dw_pcie_setup_rc(pp);
+	if (ret)
+		goto err_remove_edma;
+
 	if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) {
 		ret = pci->ops->start_link(pci);
 		if (ret)
-			goto err_free_msi;
+			goto err_remove_edma;
 	}
 
 	/* Ignore errors, the link may come up later */
@@ -440,6 +444,9 @@ int dw_pcie_host_init(struct pcie_port *pp)
 	if (pci->ops && pci->ops->stop_link)
 		pci->ops->stop_link(pci);
 
+err_remove_edma:
+	dw_pcie_edma_remove(pci);
+
 err_free_msi:
 	if (pp->has_msi_ctrl)
 		dw_pcie_free_msi(pp);
@@ -462,6 +469,8 @@ void dw_pcie_host_deinit(struct pcie_port *pp)
 	if (pci->ops && pci->ops->stop_link)
 		pci->ops->stop_link(pci);
 
+	dw_pcie_edma_remove(pci);
+
 	if (pp->has_msi_ctrl)
 		dw_pcie_free_msi(pp);
 
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 68bd3fc66fd7..aae8b03757f5 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -595,6 +596,8 @@ int dw_pcie_map_detect(struct dw_pcie *pci)
 	pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE;
 	pci->atu_size = PCIE_ATU_VIEWPORT_SIZE;
 
+	pci->edma.reg_base = pci->dbi_base + PCIE_DMA_VIEWPORT_BASE;
+
 	dev_info(pci->dev, "iATU/DMA unroll: disabled\n");
 
 	return 0;
@@ -617,6 +620,17 @@ int dw_pcie_map_detect(struct dw_pcie *pci)
 	if (!pci->atu_size)
 		pci->atu_size = SZ_4K;
 
+	if (!pci->edma.reg_base) {
+		res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dma");
+		if (res) {
+			pci->edma.reg_base = devm_ioremap_resource(pci->dev, res);
+			if (IS_ERR(pci->edma.reg_base))
+				return PTR_ERR(pci->edma.reg_base);
+		} else if (pci->atu_size >= 2 * DEFAULT_DBI_DMA_OFFSET) {
+			pci->edma.reg_base = pci->atu_base + DEFAULT_DBI_DMA_OFFSET;
+		}
+	}
+
 	dev_info(pci->dev, "iATU/DMA unroll: enabled\n");
 
 	return 0;
@@ -678,6 +692,179 @@ void dw_pcie_iatu_detect(struct dw_pcie *pci)
 		pci->region_align / SZ_1K, (pci->region_limit + 1) / SZ_1G);
 }
 
+static u32 dw_pcie_readl_dma(struct dw_pcie *pci, u32 reg)
+{
+	u32 val = 0;
+	int ret;
+
+	if (pci->ops && pci->ops->read_dbi)
+		return pci->ops->read_dbi(pci, pci->edma.reg_base, reg, 4);
+
+	ret = dw_pcie_read(pci->edma.reg_base + reg, 4, &val);
+	if (ret)
+		dev_err(pci->dev, "Read DMA address failed\n");
+
+	return val;
+}
+
+static int dw_pcie_edma_irq_vector(struct device *dev, unsigned int nr)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	char name[6];
+	int ret;
+
+	if (nr >= EDMA_MAX_WR_CH + EDMA_MAX_RD_CH)
+		return -EINVAL;
+
+	ret = platform_get_irq_byname_optional(pdev, "dma");
+	if (ret > 0)
+		return ret;
+
+	snprintf(name, sizeof(name), "dma%u", nr);
+
+	return platform_get_irq_byname_optional(pdev, name);
+}
+
+static struct dw_edma_core_ops dw_pcie_edma_ops = {
+	.irq_vector = dw_pcie_edma_irq_vector,
+};
+
+static int dw_pcie_edma_detect_channels(struct dw_pcie *pci)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL);
+	if (!val || val == 0xffffffff)
+		return 0;
+
+	pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val);
+	pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val);
+
+	if (pci->edma.ll_wr_cnt > EDMA_MAX_WR_CH ||
+	    pci->edma.ll_rd_cnt > EDMA_MAX_RD_CH)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int dw_pcie_edma_irq_verify(struct dw_pcie *pci)
+{
+	struct platform_device *pdev = to_platform_device(pci->dev);
+	u16 ch_cnt = pci->edma.ll_wr_cnt + pci->edma.ll_rd_cnt;
+	char name[6];
+	int ret;
+
+	if (pci->edma.nr_irqs == 1)
+		return 0;
+	else if (pci->edma.nr_irqs > 1)
+		return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0;
+
+	ret = platform_get_irq_byname_optional(pdev, "dma");
+	if (ret > 0) {
+		pci->edma.nr_irqs = 1;
+		return 0;
+	}
+
+	for (; pci->edma.nr_irqs < ch_cnt; pci->edma.nr_irqs++) {
+		snprintf(name, sizeof(name), "dma%d", pci->edma.nr_irqs);
+
+		ret = platform_get_irq_byname_optional(pdev, name);
+		if (ret <= 0)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int dw_pcie_edma_ll_alloc(struct dw_pcie *pci)
+{
+	struct dw_edma_region *ll;
+	dma_addr_t paddr;
+	int i;
+
+	for (i = 0; i < pci->edma.ll_wr_cnt; i++) {
+		ll = &pci->edma.ll_region_wr[i];
+		ll->sz = DMA_LLP_MEM_SIZE;
+		ll->vaddr = dmam_alloc_coherent(pci->dev, ll->sz,
+						&paddr, GFP_KERNEL);
+		if (!ll->vaddr)
+			return -ENOMEM;
+
+		ll->paddr = paddr;
+	}
+
+	for (i = 0; i < pci->edma.ll_rd_cnt; i++) {
+		ll = &pci->edma.ll_region_rd[i];
+		ll->sz = DMA_LLP_MEM_SIZE;
+		ll->vaddr = dmam_alloc_coherent(pci->dev, ll->sz,
+						&paddr, GFP_KERNEL);
+		if (!ll->vaddr)
+			return -ENOMEM;
+
+		ll->paddr = paddr;
+	}
+
+	return 0;
+}
+
+int dw_pcie_edma_detect(struct dw_pcie *pci)
+{
+	int ret;
+
+	if (!pci->edma.reg_base)
+		return 0;
+
+	pci->edma.dev = pci->dev;
+	if (!pci->edma.ops)
+		pci->edma.ops = &dw_pcie_edma_ops;
+	pci->edma.flags |= DW_EDMA_CHIP_LOCAL;
+
+	if (pci->iatu_dma_unrolled)
+		pci->edma.mf = EDMA_MF_EDMA_UNROLL;
+	else
+		pci->edma.mf = EDMA_MF_EDMA_LEGACY;
+
+	ret = dw_pcie_edma_detect_channels(pci);
+	if (ret) {
+		dev_err(pci->dev, "Unexpected NoF eDMA channels found\n");
+		return ret;
+	}
+
+	/* Skip any further initialization if no eDMA found */
+	if (!pci->edma.ll_wr_cnt && !pci->edma.ll_rd_cnt)
+		return 0;
+
+	/* Don't fail on the IRQs verification for the backward compatibility */
+	ret = dw_pcie_edma_irq_verify(pci);
+	if (ret) {
+		dev_err(pci->dev, "Invalid eDMA IRQs found\n");
+		return 0;
+	}
+
+	ret = dw_pcie_edma_ll_alloc(pci);
+	if (ret) {
+		dev_err(pci->dev, "Couldn't allocate LLP memory\n");
+		return ret;
+	}
+
+	/* Don't fail if the DW eDMA driver can't find the device */
+	ret = dw_edma_probe(&pci->edma);
+	if (ret && ret != -ENODEV) {
+		dev_err(pci->dev, "Couldn't register eDMA device\n");
+		return ret;
+	}
+
+	dev_info(pci->dev, "eDMA channels: %hu wr, %hu rd\n",
+		 pci->edma.ll_wr_cnt, pci->edma.ll_rd_cnt);
+
+	return 0;
+}
+
+void dw_pcie_edma_remove(struct dw_pcie *pci)
+{
+	dw_edma_remove(&pci->edma);
+}
+
 void dw_pcie_setup(struct dw_pcie *pci)
 {
 	u32 val;
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index e10647b96c68..87add1920f0d 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -145,6 +146,18 @@
 #define PCIE_MSIX_DOORBELL		0x948
 #define PCIE_MSIX_DOORBELL_PF_SHIFT	24
 
+/*
+ * eDMA CSRs. DW PCIe IP-core v4.70a and older had the eDMA registers
+ * accessible over the Port Logic registers space. Afterwards the unrolled
+ * mapping was introduced so eDMA and iATU could be accessed via a dedicated
+ * registers space.
+ */
+#define PCIE_DMA_VIEWPORT_BASE		0x970
+#define PCIE_DMA_UNROLL_BASE		0x80000
+#define PCIE_DMA_CTRL			0x008
+#define PCIE_DMA_NUM_WR_CHAN		GENMASK(3, 0)
+#define PCIE_DMA_NUM_RD_CHAN		GENMASK(19, 16)
+
 #define PCIE_PL_CHK_REG_CONTROL_STATUS			0xB20
 #define PCIE_PL_CHK_REG_CHK_REG_START			BIT(0)
 #define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS		BIT(1)
@@ -161,6 +174,7 @@
  * this offset, if atu_base not set.
  */
 #define DEFAULT_DBI_ATU_OFFSET		(0x3 << 20)
+#define DEFAULT_DBI_DMA_OFFSET		(0x1 << 19)
 
 #define MAX_MSI_IRQS			256
 #define MAX_MSI_IRQS_PER_CTRL		32
@@ -172,6 +186,9 @@
 #define MAX_IATU_IN			256
 #define MAX_IATU_OUT			256
 
+/* Default eDMA LLP memory size */
+#define DMA_LLP_MEM_SIZE		PAGE_SIZE
+
 struct pcie_port;
 struct dw_pcie;
 struct dw_pcie_ep;
@@ -310,6 +327,7 @@ struct dw_pcie {
 	int			num_lanes;
 	int			link_gen;
 	u8			n_fts[2];
+	struct dw_edma_chip	edma;
 	bool			iatu_dma_unrolled:1;
 	bool			io_cfg_atu_shared:1;
 };
@@ -345,6 +363,8 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
 void dw_pcie_setup(struct dw_pcie *pci);
 int dw_pcie_map_detect(struct dw_pcie *pci);
 void dw_pcie_iatu_detect(struct dw_pcie *pci);
+int dw_pcie_edma_detect(struct dw_pcie *pci);
+void dw_pcie_edma_remove(struct dw_pcie *pci);
 
 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
 {