From patchwork Mon Aug 22 18:53:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951130 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39B72C32789 for ; Mon, 22 Aug 2022 18:54:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237897AbiHVSyf (ORCPT ); Mon, 22 Aug 2022 14:54:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40760 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237514AbiHVSyQ (ORCPT ); Mon, 22 Aug 2022 14:54:16 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 91ECB3DF1E; Mon, 22 Aug 2022 11:53:49 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 9C493DA3; Mon, 22 Aug 2022 21:57:01 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 9C493DA3 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194621; bh=f1vOxqFmEFkhbkY2J0p+/lTrm0ixt+lUmDSdWTpArG0=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=pToCucnrtW3o01i3t+Y8jXTro7SZ5pV9tbUVRFBTacKf0x955ARoUkCZTNze0foMe gfD4IR4X2w7BsOJU1szZtdce1KOrk0X5wXXqHz5ejJ0ezbLl1NCcQKrRFGhB0TGoEt 665OQFBlTBZzEJAE5phxivsfwUpNhCSgbctSekWA= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:47 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 01/24] dmaengine: Fix dma_slave_config.dst_addr description Date: Mon, 22 Aug 2022 21:53:09 +0300 Message-ID: <20220822185332.26149-2-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Most likely due to a copy-paste mistake the dst_addr member of the dma_slave_config structure has been marked as ignored if the !source! address belong to the memory. That is relevant to the src_addr field of the structure while the dst_addr field as containing a destination device address is supposed to be ignored if the destination is the CPU memory. Let's fix the field description accordingly. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- include/linux/dmaengine.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index c923f4e60f24..0c020682d894 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -394,7 +394,7 @@ enum dma_slave_buswidth { * should be read (RX), if the source is memory this argument is * ignored. 
* @dst_addr: this is the physical address where DMA slave data - * should be written (TX), if the source is memory this argument + * should be written (TX), if the destination is memory this argument * is ignored. * @src_addr_width: this is the width in bytes of the source (RX) * register where DMA data shall be read. If the source From patchwork Mon Aug 22 18:53:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951132 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3342CC32789 for ; Mon, 22 Aug 2022 18:54:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237757AbiHVSyy (ORCPT ); Mon, 22 Aug 2022 14:54:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37866 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237722AbiHVSyR (ORCPT ); Mon, 22 Aug 2022 14:54:17 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E71A3B46; Mon, 22 Aug 2022 11:53:50 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 5BB4BDA4; Mon, 22 Aug 2022 21:57:02 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 5BB4BDA4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194622; bh=4ovbXrGBEKIMyvXFCCfEeAq9x+AxSm2TXkXBmp6HjJE=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=e66Leh1L/JhqcViTmkIIWqqovAq46eZGuuLnliMVPyen4HNOryOrLrfTv793KVVxw Q920DjMeFHwLWrlZIOfcu/mamk4lVg7jMZQsPeUTIWTT701L+uZB/vH2sBq04MWF2l A7DP63Zv+V7H+0ATwVgP7fsv5rAsUxeFrPg4we7g= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:47 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 02/24] dmaengine: dw-edma: Release requested IRQs on failure Date: Mon, 22 Aug 2022 21:53:10 +0300 Message-ID: <20220822185332.26149-3-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org From the very beginning of the DW eDMA driver's life in the kernel, the dw_edma_irq_request() method hasn't been designed quite correctly. If one of the request_irq() calls fails at some point, the previously requested IRQs are left requested. That is prone to errors up to a system crash. Let's fix that by releasing the previously requested IRQs in the cleanup-on-error path of the dw_edma_irq_request() function.
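In short, the fixed error path unwinds whatever was requested so far. A minimal sketch of the pattern (condensed from the diff below; irq_vector(), handler, name and ctx[] are placeholders, not the driver's actual identifiers):

	for (i = 0; i < nr_irqs; i++) {
		irq = irq_vector(dev, i);
		err = request_irq(irq, handler, IRQF_SHARED, name, &ctx[i]);
		if (err)
			goto err_irq_free;	/* don't leak IRQs 0..i-1 */
	}
	return 0;

err_irq_free:
	for (i--; i >= 0; i--)			/* release in reverse order */
		free_irq(irq_vector(dev, i), &ctx[i]);
	return err;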
Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Changelog v2: - This is a new patch added in v2 iteration of the series. --- drivers/dma/dw-edma/dw-edma-core.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 07f756479663..04efcb16d13d 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -899,10 +899,8 @@ static int dw_edma_irq_request(struct dw_edma *dw, dw_edma_interrupt_read, IRQF_SHARED, dw->name, &dw->irq[i]); - if (err) { - dw->nr_irqs = i; - return err; - } + if (err) + goto err_irq_free; if (irq_get_msi_desc(irq)) get_cached_msi_msg(irq, &dw->irq[i].msi); @@ -911,6 +909,14 @@ static int dw_edma_irq_request(struct dw_edma *dw, dw->nr_irqs = i; } + return 0; + +err_irq_free: + for (i--; i >= 0; i--) { + irq = chip->ops->irq_vector(dev, i); + free_irq(irq, &dw->irq[i]); + } + return err; } From patchwork Mon Aug 22 18:53:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951131 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34A0EC28D13 for ; Mon, 22 Aug 2022 18:54:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237958AbiHVSyv (ORCPT ); Mon, 22 Aug 2022 14:54:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37886 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237619AbiHVSyR (ORCPT ); Mon, 22 Aug 2022 14:54:17 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3B2F9F5A2; Mon, 22 Aug 2022 11:53:52 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 436A4DA5; Mon, 22 Aug 2022 21:57:03 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 436A4DA5 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194623; bh=bhKxCSb6BGFN31yLH9pqLiXJLBs6s4e92BD8GTxZ9uY=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=NJ38bOtYBezszimWoQrhCQy1gV1u7o6vC+tqjRfeBktaUgpwvGMR8wTJPsmimuchm meywh6t5wuhOhqZNUUJX+3Ye9p2qatek9t9WT0eigrVstTGTM5q/s1DsW94gg16AXd QkGSM9uyr0h6o9424vEUz4fryAZVWXAFF64PvwnU= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:48 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 03/24] dmaengine: dw-edma: Convert ll/dt phys-address to PCIe bus/DMA address Date: Mon, 22 Aug 2022 21:53:11 +0300 Message-ID: <20220822185332.26149-4-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: 
MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org In accordance with the dw_edma_region.paddr field semantics it is supposed to be initialized with a memory base address visible to the DW eDMA controller. If the DMA engine is embedded into the DW PCIe Host/EP controller, then the address should belong to the Local CPU/Application memory. If eDMA is remotely accessible across the PCIe bus via the PCIe memory IOs, then the address needs to be a part of the PCIe bus memory space. The latter case hasn't been well covered in the corresponding glue-driver. Since in general the PCIe memory space doesn't have to match the CPU memory space and the pci_dev.resource[] arrays contain the resources defined in the CPU memory space, a proper conversion needs to be performed, otherwise either the driver won't work properly or, much worse, memory corruption will happen. The conversion can be done by means of the pci_bus_address() method. Let's use it to retrieve the LL, DT and CSRs PCIe memory ranges. Note in addition to that we need to extend the dw_edma_region.paddr field size. The field normally contains a memory range base address to be set in the DW eDMA Linked-List pointer register or as a base address of the Linked-List data buffer. In accordance with [1] the LL range is supposed to be created in the Local CPU/Application memory, but depending on the DW eDMA utilization the memory can be created as a part of the PCIe bus address space (as in the case of the DW PCIe EP prototype kit). Thus in the former case the dw_edma_region.paddr field should have the dma_addr_t type, while in the latter one - pci_bus_addr_t. Seeing that the corresponding CSRs are always 64 bits wide, let's convert the dw_edma_region.paddr field type to u64 and let the client code make sure it holds a valid address visible to the DW eDMA controller. For instance the DW eDMA PCIe glue-driver initializes the field with the addresses from the PCIe bus memory space.
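The distinction the patch relies on can be illustrated with the standard PCI core helpers (illustrative snippet only, not part of the patch; the BAR index 0 is an arbitrary example):

	int bar = 0;	/* BAR backing the LL/DT blocks, just an example */

	/* CPU (host physical) view of the BAR */
	phys_addr_t cpu_addr = pci_resource_start(pdev, bar);

	/* PCIe bus view of the same BAR, i.e. what the eDMA engine must use */
	u64 bus_addr = pci_bus_address(pdev, bar);

On systems where the host bridge applies an address translation the two values differ, which is exactly why pdev->resource[bar].start can't be programmed into the eDMA registers directly.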
[1] DesignWare Cores PCI Express Controller Databook - DWC PCIe Root Port, v.5.40a, March 2019, p.1103 Fixes: 41aaff2a2ac0 ("dmaengine: Add Synopsys eDMA IP PCIe glue-logic") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-pcie.c | 8 ++++---- include/linux/dma/edma.h | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index d6b5e2463884..04c95cba1244 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -231,7 +231,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return -ENOMEM; ll_region->vaddr += ll_block->off; - ll_region->paddr = pdev->resource[ll_block->bar].start; + ll_region->paddr = pci_bus_address(pdev, ll_block->bar); ll_region->paddr += ll_block->off; ll_region->sz = ll_block->sz; @@ -240,7 +240,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return -ENOMEM; dt_region->vaddr += dt_block->off; - dt_region->paddr = pdev->resource[dt_block->bar].start; + dt_region->paddr = pci_bus_address(pdev, dt_block->bar); dt_region->paddr += dt_block->off; dt_region->sz = dt_block->sz; } @@ -256,7 +256,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return -ENOMEM; ll_region->vaddr += ll_block->off; - ll_region->paddr = pdev->resource[ll_block->bar].start; + ll_region->paddr = pci_bus_address(pdev, ll_block->bar); ll_region->paddr += ll_block->off; ll_region->sz = ll_block->sz; @@ -265,7 +265,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return -ENOMEM; dt_region->vaddr += dt_block->off; - dt_region->paddr = pdev->resource[dt_block->bar].start; + dt_region->paddr = pci_bus_address(pdev, dt_block->bar); dt_region->paddr += dt_block->off; dt_region->sz = dt_block->sz; } diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h index 7d8062e9c544..a864978ddd27 100644 --- a/include/linux/dma/edma.h +++ b/include/linux/dma/edma.h @@ -18,7 +18,7 @@ struct dw_edma; struct dw_edma_region { - phys_addr_t paddr; + u64 paddr; void __iomem *vaddr; size_t sz; }; From patchwork Mon Aug 22 18:53:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951134 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1B70C32772 for ; Mon, 22 Aug 2022 18:54:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237674AbiHVSyx (ORCPT ); Mon, 22 Aug 2022 14:54:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237696AbiHVSyR (ORCPT ); Mon, 22 Aug 2022 14:54:17 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A60173DF32; Mon, 22 Aug 2022 11:53:52 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id F408CDA6; Mon, 22 Aug 2022 21:57:03 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com F408CDA6 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194624; bh=vwfya36YubZ2bYzJNs2OOzl4KTHtrqBAGr7OFYCUolk=; 
h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=FdnuerjpysxXthGrwvxCRCo25SLj2MS2EXuEXSP7PnExg+5ZL7A6iGAYnll77a/Rr mQUYQxzOpljHQXvTK0GdjHeaXBAfNp6m19pHqhUpineyjp5yrNvFgHM4N8BoDrkAkw AhFpLIAiJeIH+/EVT72XFNYgSnXRXnXOpXn9GIV4= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:49 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 04/24] dmaengine: dw-edma: Fix missing src/dst address of the interleaved xfers Date: Mon, 22 Aug 2022 21:53:12 +0300 Message-ID: <20220822185332.26149-5-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org The interleaved DMA transfers support was added in the commit 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support"). It seems like the support was broken from the very beginning. Depending on the selected channel either source or destination address are left uninitialized which was obviously wrong. I don't really know how come the original modification was working for the commit author. Anyway let's fix it by initializing the destination address of the eDMA burst descriptors for the DEV_TO_MEM interleaved operations and by initializing the source address of the eDMA burst descriptors for the MEM_TO_DEV interleaved operations. 
Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 04efcb16d13d..f0ef87d75ea9 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -456,6 +456,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) * and destination addresses are increased * by the same portion (data length) */ + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { + burst->dar = dst_addr; } } else { burst->dar = dst_addr; @@ -471,6 +473,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) * and destination addresses are increased * by the same portion (data length) */ + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { + burst->sar = src_addr; } } From patchwork Mon Aug 22 18:53:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951146 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0EF02C3F6B0 for ; Mon, 22 Aug 2022 18:55:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237722AbiHVSzJ (ORCPT ); Mon, 22 Aug 2022 14:55:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237877AbiHVSyS (ORCPT ); Mon, 22 Aug 2022 14:54:18 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3C945402C8; Mon, 22 Aug 2022 11:53:54 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id E829FDA2; Mon, 22 Aug 2022 21:57:04 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com E829FDA2 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194624; bh=e1JmOuGu+fwVSthj7e0X/inZQyWMiOwXQ3dz4tgt440=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=acfdViJ28yVUnCN5jJisD/Lh+y+hF/YzzLuXBTDjRO6ilTC6HfH4uWatiQRdnESf8 N+4otBZgJVSH8ann+YuxpBet60gBVA+2EL9ZYI43Ihm+hX8faHrFl6L79YnffVTp+Z uOsE4e1ZqtiHiw3igpXyw4jv3i/BaimkeKauFGF8= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:50 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 05/24] dmaengine: dw-edma: Don't permit non-inc interleaved xfers Date: Mon, 22 Aug 2022 21:53:13 +0300 Message-ID: <20220822185332.26149-6-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: 
dmaengine@vger.kernel.org DW eDMA controller always increments both source and destination addresses. Permitting DMA interleaved transfers with no src_inc/dst_inc flags set may lead to unexpected behaviour for the device users. Let's fix that by terminating the interleaved transfers if at least one of the dma_interleaved_template.{src_inc,dst_inc} flag is initialized with false value. Note in addition to that we need to increase the source and destination addresses accordingly after each iteration. Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index f0ef87d75ea9..225eab58acb7 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -386,6 +386,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) return NULL; if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0) return NULL; + if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc) + return NULL; } else { return NULL; } @@ -485,15 +487,13 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) struct dma_interleaved_template *il = xfer->xfer.il; struct data_chunk *dc = &il->sgl[i]; - if (il->src_sgl) { - src_addr += burst->sz; + src_addr += burst->sz; + if (il->src_sgl) src_addr += dmaengine_get_src_icg(il, dc); - } - if (il->dst_sgl) { - dst_addr += burst->sz; + dst_addr += burst->sz; + if (il->dst_sgl) dst_addr += dmaengine_get_dst_icg(il, dc); - } } } From patchwork Mon Aug 22 18:53:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951139 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 75903C32774 for ; Mon, 22 Aug 2022 18:55:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238000AbiHVSzA (ORCPT ); Mon, 22 Aug 2022 14:55:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37938 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237368AbiHVSyS (ORCPT ); Mon, 22 Aug 2022 14:54:18 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 95B87402FA; Mon, 22 Aug 2022 11:53:54 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id AE83DDA7; Mon, 22 Aug 2022 21:57:05 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com AE83DDA7 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194625; bh=17/2wYbrQgogQ/AnOlqJwuN0f3HJbuhq/LuIABB8iF0=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=Hu7mSLOx7NhpPbD8gLSDPOBDDoZ8BbbsUZr8CrtzAPtsZd7eRN88kNWWIW1c4QEjR ad5NWnIjmd0vhcLrX/cHiEwO8qo2N+vIE0c6/ad7U7c/EpOdqoFpwf4wrHs5T8cYFP 9FalIT4AR073hIjqJQAlyaCfktfUFqpH/cD//wtw= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:51 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob 
Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 06/24] dmaengine: dw-edma: Fix invalid interleaved xfers semantics Date: Mon, 22 Aug 2022 21:53:14 +0300 Message-ID: <20220822185332.26149-7-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org The interleaved DMA transfer support added in commit 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support") seems to contradict what the DMA-engine subsystem defines. The following conditional statements: if (!xfer->xfer.il->numf) return NULL; if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0) return NULL; basically mean that numf can't be zero and frame_size must always be zero, otherwise the transfer won't be executed. But further on the transfer execution method takes the chunk sizes from the dma_interleaved_template.sgl[] array for each frame. That array in accordance with [1] is supposed to be of dma_interleaved_template.frame_size size, which, as we discovered above, the code expects to be zero. So judging by the dw_edma_device_transfer() implementation the method implies the dma_interleaved_template.sgl[] array being of dma_interleaved_template.numf size, which is wrong. Since the dw_edma_device_transfer() method doesn't permit dma_interleaved_template.frame_size being non-zero, actual multi-chunk interleaved transfers turn out to be unsupported even though the code implies having them supported. Let's fix that by adding fully functioning support for the interleaved DMA transfers. First of all dma_interleaved_template.frame_size is supposed to be greater than or equal to one, thus describing at least simple linearly chunked frames. Secondly we can walk through all the chunks and frames just by initializing the number of eDMA burst transactions as the product of dma_interleaved_template.numf and dma_interleaved_template.frame_size and using the frame_size-modulo of the iteration step as an index into the dma_interleaved_template.sgl[] array. The rest of the dw_edma_device_transfer() method code can be left unchanged.
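To make the new walk-through concrete, here is an illustrative sketch with made-up numbers (not code from the patch): with numf = 2 and frame_size = 3 the template describes two frames of three chunks each, so the transfer is built out of 2 * 3 = 6 bursts and the chunk descriptors are reused once per frame:

	cnt = il->numf * il->frame_size;	/* 2 * 3 = 6 bursts */
	for (i = 0; i < cnt; i++) {
		/* i = 0,1,2,3,4,5 -> sgl[0], sgl[1], sgl[2], sgl[0], sgl[1], sgl[2] */
		burst_sz = il->sgl[i % il->frame_size].size;
	}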
[1] include/linux/dmaengine.h: doc struct dma_interleaved_template Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 225eab58acb7..ef49deb5a7f3 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -333,6 +333,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) struct dw_edma_chunk *chunk; struct dw_edma_burst *burst; struct dw_edma_desc *desc; + size_t fsz = 0; u32 cnt = 0; int i; @@ -382,9 +383,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (xfer->xfer.sg.len < 1) return NULL; } else if (xfer->type == EDMA_XFER_INTERLEAVED) { - if (!xfer->xfer.il->numf) - return NULL; - if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0) + if (!xfer->xfer.il->numf || xfer->xfer.il->frame_size < 1) return NULL; if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc) return NULL; @@ -414,10 +413,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) cnt = xfer->xfer.sg.len; sg = xfer->xfer.sg.sgl; } else if (xfer->type == EDMA_XFER_INTERLEAVED) { - if (xfer->xfer.il->numf > 0) - cnt = xfer->xfer.il->numf; - else - cnt = xfer->xfer.il->frame_size; + cnt = xfer->xfer.il->numf * xfer->xfer.il->frame_size; + fsz = xfer->xfer.il->frame_size; } for (i = 0; i < cnt; i++) { @@ -439,7 +436,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) else if (xfer->type == EDMA_XFER_SCATTER_GATHER) burst->sz = sg_dma_len(sg); else if (xfer->type == EDMA_XFER_INTERLEAVED) - burst->sz = xfer->xfer.il->sgl[i].size; + burst->sz = xfer->xfer.il->sgl[i % fsz].size; chunk->ll_region.sz += burst->sz; desc->alloc_sz += burst->sz; @@ -482,10 +479,9 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (xfer->type == EDMA_XFER_SCATTER_GATHER) { sg = sg_next(sg); - } else if (xfer->type == EDMA_XFER_INTERLEAVED && - xfer->xfer.il->frame_size > 0) { + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { struct dma_interleaved_template *il = xfer->xfer.il; - struct data_chunk *dc = &il->sgl[i]; + struct data_chunk *dc = &il->sgl[i % fsz]; src_addr += burst->sz; if (il->src_sgl) From patchwork Mon Aug 22 18:53:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951145 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8202AC28D13 for ; Mon, 22 Aug 2022 18:55:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235582AbiHVSzH (ORCPT ); Mon, 22 Aug 2022 14:55:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37916 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237780AbiHVSyS (ORCPT ); Mon, 22 Aug 2022 14:54:18 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 96AC34054F; Mon, 22 Aug 2022 11:53:55 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 8DEA7DA3; Mon, 22 Aug 2022 
21:57:06 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 8DEA7DA3 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194626; bh=WEY5Ox9q44an3tYFDrhVE8sn0N9S5xeP+Dzf4/epzSU=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=LtR5IKLpea0bhenbQ85qxgQ+aBjfnaxSDPLeMLjOTk9x+AcbYnFp0K5+xu9zvxrNB 6GR7+uPWc7E9hE380prnkt7T04p/9oBW0t6x7w9qE+85i6cskWAcQUXpLAUjehmdaF ORYn0k7U7UmoEnA7vCWoFpsCe4PXUCJEl3lMnrAM= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:51 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 07/24] dmaengine: dw-edma: Add CPU to PCIe bus address translation Date: Mon, 22 Aug 2022 21:53:15 +0300 Message-ID: <20220822185332.26149-8-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Starting from commit 9575632052ba ("dmaengine: make slave address physical") the source and destination addresses of the DMA-slave device have been converted to being defined in the CPU address space. It's the DMA-device driver's responsibility to properly convert them to the reachable DMA bus spaces. In case of the DW eDMA device, the source or destination peripheral (slave) devices reside in the PCIe bus space. Thus we need to perform the PCIe Host/EP windows-based (i.e. ranges DT-property) address translation, otherwise the eDMA transactions won't work as expected (or can even be harmful) in case the CPU and PCIe address spaces don't match. Note 1. Even though the DMA interleaved template has both source and destination addresses declared with the dma_addr_t type, only the CPU memory range is supposed to be mapped in a way to be seen by the DMA device, since it's the subject of the DMA transfers towards the system side. The device part must not be mapped since the slave device resides in the PCIe bus space, which isn't affected by IOMMUs or iATU translations. DW PCIe eDMA generates the corresponding MWr/MRd TLPs on its own. Note 2. This functionality is mainly required for the remote eDMA setup since the CPU address must be manually translated into the PCIe bus space before being written to LLI.{SAR,DAR}. If eDMA is embedded into the locally accessible DW PCIe RP/EP, software-based translation isn't required since it will be done by hardware by means of the Outbound iATU as long as the DMA_BYPASS flag is cleared. If the latter flag is set or there is no Outbound iATU entry found into which the SAR or DAR falls (for the Read and Write channels respectively), no translation will be performed and the DMA will proceed with the corresponding source/destination address as is.
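Schematically, a platform glue driver that needs this translation would provide the new optional callback via its dw_edma_core_ops. A minimal sketch assuming a fixed CPU-to-PCIe offset (all my_plat_* names and the offset value are made up for illustration; they are not part of this series):

	#include <linux/dma/edma.h>

	/* Made-up fixed offset between the CPU and PCIe bus address spaces */
	#define MY_PLAT_CPU_TO_PCI_OFFSET	0x40000000ULL

	static int my_plat_irq_vector(struct device *dev, unsigned int nr)
	{
		return 32 + nr;		/* platform-specific IRQ numbering */
	}

	static u64 my_plat_pci_address(struct device *dev, phys_addr_t cpu_addr)
	{
		return (u64)cpu_addr + MY_PLAT_CPU_TO_PCI_OFFSET;
	}

	static const struct dw_edma_core_ops my_plat_edma_ops = {
		.irq_vector  = my_plat_irq_vector,
		.pci_address = my_plat_pci_address,	/* optional callback */
	};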
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 18 +++++++++++++++++- include/linux/dma/edma.h | 15 +++++++++++++++ 2 files changed, 32 insertions(+), 1 deletion(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index ef49deb5a7f3..be466b781376 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -40,6 +40,17 @@ struct dw_edma_desc *vd2dw_edma_desc(struct virt_dma_desc *vd) return container_of(vd, struct dw_edma_desc, vd); } +static inline +u64 dw_edma_get_pci_address(struct dw_edma_chan *chan, phys_addr_t cpu_addr) +{ + struct dw_edma_chip *chip = chan->dw->chip; + + if (chip->ops->pci_address) + return chip->ops->pci_address(chip->dev, cpu_addr); + + return cpu_addr; +} + static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk) { struct dw_edma_burst *burst; @@ -328,11 +339,11 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) { struct dw_edma_chan *chan = dchan2dw_edma_chan(xfer->dchan); enum dma_transfer_direction dir = xfer->direction; - phys_addr_t src_addr, dst_addr; struct scatterlist *sg = NULL; struct dw_edma_chunk *chunk; struct dw_edma_burst *burst; struct dw_edma_desc *desc; + u64 src_addr, dst_addr; size_t fsz = 0; u32 cnt = 0; int i; @@ -407,6 +418,11 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) dst_addr = chan->config.dst_addr; } + if (dir == DMA_DEV_TO_MEM) + src_addr = dw_edma_get_pci_address(chan, (phys_addr_t)src_addr); + else + dst_addr = dw_edma_get_pci_address(chan, (phys_addr_t)dst_addr); + if (xfer->type == EDMA_XFER_CYCLIC) { cnt = xfer->xfer.cyclic.cnt; } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h index a864978ddd27..380a0a3e251f 100644 --- a/include/linux/dma/edma.h +++ b/include/linux/dma/edma.h @@ -23,8 +23,23 @@ struct dw_edma_region { size_t sz; }; +/** + * struct dw_edma_core_ops - platform-specific eDMA methods + * @irq_vector: Get IRQ number of the passed eDMA channel. Note the + * method accepts the channel id in the end-to-end + * numbering with the eDMA write channels being placed + * first in the row. + * @pci_address: Get PCIe bus address corresponding to the passed CPU + * address. Note there is no need in specifying this + * function if the address translation is performed by + * the DW PCIe RP/EP controller with the DW eDMA device in + * subject and DMA_BYPASS isn't set for all the outbound + * iATU windows. That will be done by the controller + * automatically. 
+ */ struct dw_edma_core_ops { int (*irq_vector)(struct device *dev, unsigned int nr); + u64 (*pci_address)(struct device *dev, phys_addr_t cpu_addr); }; enum dw_edma_map_format { From patchwork Mon Aug 22 18:53:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951138 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E02F2C3F6B0 for ; Mon, 22 Aug 2022 18:55:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237988AbiHVSy6 (ORCPT ); Mon, 22 Aug 2022 14:54:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37928 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237794AbiHVSyS (ORCPT ); Mon, 22 Aug 2022 14:54:18 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 88BF23ECC3; Mon, 22 Aug 2022 11:53:56 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 489D1DA8; Mon, 22 Aug 2022 21:57:07 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 489D1DA8 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194627; bh=N88OoMZKRV6JXPdySXLm56p1hMlYe/rmzVyzAT1y8Mo=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=OPXDcCMaTKEcLY/L0Tz1++ncIuq/Ne4t7qEKcXPu1kzuMGyb9mj1XxT5QPqKm/n+J 5fn6oFlFtarJYh9w/6/35GBkR5zvSOkJow4S0jNMcy7yPI/A5vqU8xu8/GmZWM8LQb x5IKMRWapX+bQgw6lGrxic9j0LuFe7BZkk4OZxms= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:52 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 08/24] dmaengine: dw-edma: Add PCIe bus address getter to the remote EP glue-driver Date: Mon, 22 Aug 2022 21:53:16 +0300 Message-ID: <20220822185332.26149-9-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org In general the Synopsys PCIe EndPoint IP prototype kit can be attached to a PCIe bus with any PCIe Host controller including to the one with distinctive from CPU address space. Due to that we need to make sure that the source and destination addresses of the DMA-slave devices are properly converted to the PCIe bus address space, otherwise the DMA transaction will not only work as expected, but may cause the memory corruption with subsequent system crash. Let's do that by introducing a new dw_edma_pcie_address() method defined in the dw-edma-pcie.c, which will perform the denoted translation by using the pcibios_resource_to_bus() method. 
Fixes: 41aaff2a2ac0 ("dmaengine: Add Synopsys eDMA IP PCIe glue-logic") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Note this patch depends on the patch "dmaengine: dw-edma: Add CPU to PCIe bus address translation" from this series. --- drivers/dma/dw-edma/dw-edma-pcie.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index 04c95cba1244..f530bacfd716 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -95,8 +95,23 @@ static int dw_edma_pcie_irq_vector(struct device *dev, unsigned int nr) { return pci_irq_vector(to_pci_dev(dev), nr); } +static u64 dw_edma_pcie_address(struct device *dev, phys_addr_t cpu_addr) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct pci_bus_region region; + struct resource res = { + .flags = IORESOURCE_MEM, + .start = cpu_addr, + .end = cpu_addr, + }; + + pcibios_resource_to_bus(pdev->bus, &region, &res); + return region.start; +} + static const struct dw_edma_core_ops dw_edma_pcie_core_ops = { .irq_vector = dw_edma_pcie_irq_vector, + .pci_address = dw_edma_pcie_address, }; static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev, From patchwork Mon Aug 22 18:53:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951133 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4027AC32792 for ; Mon, 22 Aug 2022 18:54:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237978AbiHVSyz (ORCPT ); Mon, 22 Aug 2022 14:54:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37952 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237799AbiHVSyT (ORCPT ); Mon, 22 Aug 2022 14:54:19 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 46A0EB7D; Mon, 22 Aug 2022 11:53:57 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 1924CDA4; Mon, 22 Aug 2022 21:57:08 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 1924CDA4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194628; bh=97a9U8E5vZvQUfsqwWz6wdvVx63TZNSKII0ja/EivXE=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=D4cw8tbaUxNXpB8h9AI+Zq9eW31vxqy3ow7qgNrENRxcUlEd/LGJrKC6Rk+0e+3Hv xTb2SsmVt3leXHKxWevQdxEzzqP2o3l08fxcaK3ZY2bOZ7G8VJMrMKx1UYzUPSb11G yZbGUow0rliH1o9pQNqaDprFGPNQI+TZzKAubxCE= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:53 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 09/24] dmaengine: dw-edma: Drop chancnt initialization Date: Mon, 22 Aug 2022 21:53:17 +0300 Message-ID: <20220822185332.26149-10-Sergey.Semin@baikalelectronics.ru> In-Reply-To:
<20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org DMA device drivers aren't supposed to initialize the dma_device.chancnt field. It will be done by the DMA-engine core in accordance with number of added virtual DMA-channels. Pre-initializing it with some value causes having a wrong number of channels printed in the device summary. Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index be466b781376..e9cb3056b6b7 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -823,7 +823,6 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; - dma->chancnt = cnt; /* Set DMA channel callbacks */ dma->dev = chip->dev; From patchwork Mon Aug 22 18:53:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951136 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BEB3C32774 for ; Mon, 22 Aug 2022 18:54:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237573AbiHVSy4 (ORCPT ); Mon, 22 Aug 2022 14:54:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37972 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237884AbiHVSyT (ORCPT ); Mon, 22 Aug 2022 14:54:19 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 398CE4057C; Mon, 22 Aug 2022 11:53:58 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 1FA0FDA5; Mon, 22 Aug 2022 21:57:09 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 1FA0FDA5 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194629; bh=qcWnhDas3Rp2y4FM0RNcrHwFi0XdqwrSM2RgIhUVjFM=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=GW4/xzsYh+PdHe/3J554tAYkNDswjivplT0nFg6aUrpCWkPU6HFzzmXmm6HJgO4hk HlnUx7ZO16taG1GRv9xxg3LJ/b/6UmDbpAiPW/iD7CjxCMaU8epXbBcsxUEFHAWHj1 0rizbVmZhtF7TNkjSeA9LtmxvNiexe3yeZL0fiT4= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:54 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , , Gustavo Pimentel Subject: [PATCH RESEND v5 10/24] dmaengine: dw-edma: Fix DebugFS reg entry type Date: Mon, 22 Aug 2022 21:53:18 
+0300 Message-ID: <20220822185332.26149-11-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org The debugfs_entries structure declared in the dw-edma-v0-debugfs.c module contains the DebugFS node's register address. The address is declared as being of the dma_addr_t type, but first it's assigned a virtual CPU IOMEM address and then cast back to a virtual address. Even though that cast sandwich is unlikely to cause any problem, since normally a DMA address is at least of the same size as the CPU virtual address, it's at the very least redundant if not to say logically incorrect. Let's fix it by no longer casting the pointer back and forth and instead preserving the address as a pointer to void with the __iomem qualifier. Fixes: 305aebeff879 ("dmaengine: Add Synopsys eDMA IP version 0 debugfs support") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 5226c9014703..8e61810dea4b 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -14,7 +14,7 @@ #include "dw-edma-core.h" #define REGS_ADDR(name) \ - ((void __force *)&regs->name) + ((void __iomem *)&regs->name) #define REGISTER(name) \ { #name, REGS_ADDR(name) } @@ -48,12 +48,13 @@ static struct { struct debugfs_entries { const char *name; - dma_addr_t *reg; + void __iomem *reg; }; static int dw_edma_debugfs_u32_get(void *data, u64 *val) { - void __iomem *reg = (void __force __iomem *)data; + void __iomem *reg = data; + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)&regs->type.legacy.ch) { void __iomem *ptr = &regs->type.legacy.ch; From patchwork Mon Aug 22 18:53:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 513C6C38147 for ; Mon, 22 Aug 2022 18:54:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237982AbiHVSyz (ORCPT ); Mon, 22 Aug 2022 14:54:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37970 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237882AbiHVSyT (ORCPT ); Mon, 22 Aug 2022 14:54:19 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E4CC140BC1; Mon, 22 Aug 2022 11:53:58 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id DA853DA2; Mon, 22 Aug 2022 21:57:09 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com DA853DA2 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194629; bh=h2szMtPt2mPxz+bvBYoY3CVT2HKJ3ds0tWkMm2QE3qQ=; h=From:To:CC:Subject:Date:In-Reply-To:References:From;
b=H8ngfmwvq2tWxbFZOv1yDp8LEGnMZXtmzWKTA75zSRlktF1HcFaoH59SRdUzvxOs3 dgcS8+c01IrWVABHQj9Hwltc2axvbF3RqZayhKdgflpU3jlkHsZOckHajp8URRGS+u pZ36Q+e39k+06JHiGRV7wmxDAIe9hNLTdv+tRgog= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:55 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 11/24] dmaengine: dw-edma: Stop checking debugfs_create_*() return value Date: Mon, 22 Aug 2022 21:53:19 +0300 Message-ID: <20220822185332.26149-12-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org First of all they never return NULL. So checking their return value for being not NULL is just pointless. Secondly the DebugFS subsystem is designed in a way to be used as simply as possible. So if one of the debugfs_create_*() methods in a hierarchy fails, the following methods will just silently return the passed erroneous parental dentry. Finally the code is supposed to keep working no matter whether anything DebugFS-related fails. So in order to make the code simpler and DebugFS-independent let's drop the debugfs_create_*() methods' return value checking in the same way as most of the kernel drivers do. Note in order to preserve some memory space we suggest skipping the DebugFS nodes initialization if the file system is unavailable.
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 20 +++++--------------- 1 file changed, 5 insertions(+), 15 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 8e61810dea4b..6e7f3ef60ca7 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -100,9 +100,8 @@ static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], int i; for (i = 0; i < nr_entries; i++) { - if (!debugfs_create_file_unsafe(entries[i].name, 0444, dir, - entries[i].reg, &fops_x32)) - break; + debugfs_create_file_unsafe(entries[i].name, 0444, dir, + entries[i].reg, &fops_x32); } } @@ -168,8 +167,6 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) char name[16]; regs_dir = debugfs_create_dir(WRITE_STR, dir); - if (!regs_dir) - return; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); @@ -184,8 +181,6 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); ch_dir = debugfs_create_dir(name, regs_dir); - if (!ch_dir) - return; dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].wr, ch_dir); @@ -237,8 +232,6 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) char name[16]; regs_dir = debugfs_create_dir(READ_STR, dir); - if (!regs_dir) - return; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); @@ -253,8 +246,6 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); ch_dir = debugfs_create_dir(name, regs_dir); - if (!ch_dir) - return; dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].rd, ch_dir); @@ -273,8 +264,6 @@ static void dw_edma_debugfs_regs(void) int nr_entries; regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs); - if (!regs_dir) - return; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); @@ -285,6 +274,9 @@ static void dw_edma_debugfs_regs(void) void dw_edma_v0_debugfs_on(struct dw_edma *_dw) { + if (!debugfs_initialized()) + return; + dw = _dw; if (!dw) return; @@ -294,8 +286,6 @@ void dw_edma_v0_debugfs_on(struct dw_edma *_dw) return; dw->debugfs = debugfs_create_dir(dw->name, NULL); - if (!dw->debugfs) - return; debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); From patchwork Mon Aug 22 18:53:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951137 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61CFCC28D13 for ; Mon, 22 Aug 2022 18:55:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237820AbiHVSy5 (ORCPT ); Mon, 22 Aug 2022 14:54:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237805AbiHVSyT (ORCPT ); Mon, 22 Aug 2022 14:54:19 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id
A646740BD2; Mon, 22 Aug 2022 11:53:59 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 912D3DA6; Mon, 22 Aug 2022 21:57:10 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 912D3DA6 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194630; bh=kTH6C5C5rfbmVRgZ1Ol9qYqwreqwX88JM6KrGJ4NuyM=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=qH2uPxElyt7qqQET4Uw7eo1d1VaX7/O/6ffFR96c+1W4BEr1hrcjRa6892HYWGozd Alikww2OSC+XbkratEXnT7TMFlrpHwja4hnZZqQBB90k0EXwBnOWI0PAZ1df/Ibd0V 8I2XSfvREficM8mQW3ZQY9My08gZA4aqzyc73kfw= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:56 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 12/24] dmaengine: dw-edma: Add dw_edma prefix to the DebugFS nodes descriptor Date: Mon, 22 Aug 2022 21:53:20 +0300 Message-ID: <20220822185332.26149-13-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org The rest of the locally defined and used methods and structures have dw_edma prefix in their names. It's right in accordance with the kernel coding style to follow the locally defined rule of naming. Let's add that prefix to the debugfs_entries structure too especially seeing it's name may be confusing as if that structure belongs to the global DebugFS space. 
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 6e7f3ef60ca7..2121ffc33cf3 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -46,7 +46,7 @@ static struct { void __iomem *end; } lim[2][EDMA_V0_MAX_NR_CH]; -struct debugfs_entries { +struct dw_edma_debugfs_entry { const char *name; void __iomem *reg; }; @@ -94,7 +94,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], +static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry entries[], int nr_entries, struct dentry *dir) { int i; @@ -108,8 +108,7 @@ static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, struct dentry *dir) { - int nr_entries; - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { REGISTER(ch_control1), REGISTER(ch_control2), REGISTER(transfer_size), @@ -120,6 +119,7 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, REGISTER(llp.lsb), REGISTER(llp.msb), }; + int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir); @@ -127,7 +127,7 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, static void dw_edma_debugfs_regs_wr(struct dentry *dir) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ WR_REGISTER(engine_en), WR_REGISTER(doorbell), @@ -148,7 +148,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) WR_REGISTER(ch67_imwr_data), WR_REGISTER(linked_list_err_en), }; - const struct debugfs_entries debugfs_unroll_regs[] = { + const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ WR_REGISTER_UNROLL(engine_chgroup), WR_REGISTER_UNROLL(engine_hshake_cnt.lsb), @@ -191,7 +191,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) static void dw_edma_debugfs_regs_rd(struct dentry *dir) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ RD_REGISTER(engine_en), RD_REGISTER(doorbell), @@ -213,7 +213,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) RD_REGISTER(ch45_imwr_data), RD_REGISTER(ch67_imwr_data), }; - const struct debugfs_entries debugfs_unroll_regs[] = { + const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ RD_REGISTER_UNROLL(engine_chgroup), RD_REGISTER_UNROLL(engine_hshake_cnt.lsb), @@ -256,7 +256,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) static void dw_edma_debugfs_regs(void) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { REGISTER(ctrl_data_arb_prior), REGISTER(ctrl), }; From patchwork Mon Aug 22 18:53:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951141 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DBE34C38145 for ; Mon, 22 Aug 2022 18:55:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238016AbiHVSzE (ORCPT ); Mon, 22 Aug 2022 14:55:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38016 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237807AbiHVSyV (ORCPT ); Mon, 22 Aug 2022 14:54:21 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 1886EB47; Mon, 22 Aug 2022 11:54:01 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 4A219DA7; Mon, 22 Aug 2022 21:57:11 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 4A219DA7 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194631; bh=c5+2F/6btB/z+Mtjw9amEMBDbgeGzfTl0An/z4fYLP4=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=XsMNBkOkmNTJRhwc509e3/xC81Eie3ydyg9IcSH1srVT7IVd3lEQrBt0gSy0zpOFh +qU3V35Wf/KDcIFaOEvoftruqbvyDTDhv84P/z4TwkR2I8YZBUwBdQylyL0OfGOd29 4zDzLsB/RU+vx9orCdTOHWfj9IDjqHfaObBhYL2k= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:56 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 13/24] dmaengine: dw-edma: Convert DebugFS descs to being kz-allocated Date: Mon, 22 Aug 2022 21:53:21 +0300 Message-ID: <20220822185332.26149-14-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Currently all the DW eDMA DebugFS nodes descriptors are allocated on stack, while the DW eDMA driver private data and CSR limits are statically preserved. Such design won't work for the multi-eDMA platforms. As a preparation to adding the multi-eDMA system setups support we need to have each DebugFS node separately allocated and described. Afterwards we'll put an addition info there like Read/Write channel flag, channel ID, DW eDMA private data reference. Note this conversion is mainly required due to having the legacy DW eDMA controllers with indirect Read/Write channels context CSRs access. If we didn't need to have a synchronized access to these registers the DebugFS code of the driver would have been much simpler. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Changelog v2: - Drop __iomem qualifier from the struct dw_edma_debugfs_entry instance definition in the dw_edma_debugfs_u32_get() method. 
(@Manivannan) --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 2121ffc33cf3..78f15e4b07ac 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -53,7 +53,8 @@ struct dw_edma_debugfs_entry { static int dw_edma_debugfs_u32_get(void *data, u64 *val) { - void __iomem *reg = data; + struct dw_edma_debugfs_entry *entry = data; + void __iomem *reg = entry->reg; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)®s->type.legacy.ch) { @@ -94,14 +95,22 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry entries[], +static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], int nr_entries, struct dentry *dir) { + struct dw_edma_debugfs_entry *entries; int i; + entries = devm_kcalloc(dw->chip->dev, nr_entries, sizeof(*entries), + GFP_KERNEL); + if (!entries) + return; + for (i = 0; i < nr_entries; i++) { + entries[i] = ini[i]; + debugfs_create_file_unsafe(entries[i].name, 0444, dir, - entries[i].reg, &fops_x32); + &entries[i], &fops_x32); } } From patchwork Mon Aug 22 18:53:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951140 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E346C32792 for ; Mon, 22 Aug 2022 18:55:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237824AbiHVSzD (ORCPT ); Mon, 22 Aug 2022 14:55:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38040 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237811AbiHVSyX (ORCPT ); Mon, 22 Aug 2022 14:54:23 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 9BA2840BE1; Mon, 22 Aug 2022 11:54:01 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id E682EDA3; Mon, 22 Aug 2022 21:57:11 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com E682EDA3 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194631; bh=9NJ4Jn+EzP77tyAam5spYwEPen8SHMY45zMhn+Vo7ts=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=Xymxv7arBWL2z7cgCoxyfJbdtiFWAhkzPbVx3FkotE6WVY+7mny3urEJkrMHncNpS jBwWHrRKPZeFmatgGoJo/Ud1EsCeHLpSzEt5Qs4JJG9pHS2gvG4G80L2o9kMt2vjJD jxoQxWtDDnd5YXGsEEVt01cAa4HpvudAdp3CsKXE= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:57 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 14/24] dmaengine: dw-edma: Rename DebugFS dentry variables to 'dent' Date: Mon, 22 Aug 2022 21:53:22 +0300 
Message-ID: <20220822185332.26149-15-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Since we are about to add the eDMA channels direction support to the debugfs module it will be confusing to have both the DebugFS directory and the channels direction short names used in the same code. As a preparation patch let's convert the DebugFS dentry 'dir' variables to having the 'dent' name so to prevent the confusion. Suggested-by: Manivannan Sadhasivam Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Changelog v2: - This is a new patch added in v2. (@Manivannan) --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 46 ++++++++++++------------ 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 78f15e4b07ac..7bb3363b40e4 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -96,7 +96,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], - int nr_entries, struct dentry *dir) + int nr_entries, struct dentry *dent) { struct dw_edma_debugfs_entry *entries; int i; @@ -109,13 +109,13 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], for (i = 0; i < nr_entries; i++) { entries[i] = ini[i]; - debugfs_create_file_unsafe(entries[i].name, 0444, dir, + debugfs_create_file_unsafe(entries[i].name, 0444, dent, &entries[i], &fops_x32); } } static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, - struct dentry *dir) + struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { REGISTER(ch_control1), @@ -131,10 +131,10 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dent); } -static void dw_edma_debugfs_regs_wr(struct dentry *dir) +static void dw_edma_debugfs_regs_wr(struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ @@ -171,34 +171,34 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) WR_REGISTER_UNROLL(ch6_pwr_en), WR_REGISTER_UNROLL(ch7_pwr_en), }; - struct dentry *regs_dir, *ch_dir; + struct dentry *regs_dent, *ch_dent; int nr_entries, i; char name[16]; - regs_dir = debugfs_create_dir(WRITE_STR, dir); + regs_dent = debugfs_create_dir(WRITE_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, - regs_dir); + regs_dent); } for (i = 0; i < dw->wr_ch_cnt; i++) { snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); - ch_dir = debugfs_create_dir(name, regs_dir); + ch_dent = debugfs_create_dir(name, regs_dent); - 
dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dir); + dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dent); lim[0][i].start = ®s->type.unroll.ch[i].wr; lim[0][i].end = ®s->type.unroll.ch[i].padding_1[0]; } } -static void dw_edma_debugfs_regs_rd(struct dentry *dir) +static void dw_edma_debugfs_regs_rd(struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ @@ -236,27 +236,27 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) RD_REGISTER_UNROLL(ch6_pwr_en), RD_REGISTER_UNROLL(ch7_pwr_en), }; - struct dentry *regs_dir, *ch_dir; + struct dentry *regs_dent, *ch_dent; int nr_entries, i; char name[16]; - regs_dir = debugfs_create_dir(READ_STR, dir); + regs_dent = debugfs_create_dir(READ_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, - regs_dir); + regs_dent); } for (i = 0; i < dw->rd_ch_cnt; i++) { snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); - ch_dir = debugfs_create_dir(name, regs_dir); + ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dir); + dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dent); lim[1][i].start = ®s->type.unroll.ch[i].rd; lim[1][i].end = ®s->type.unroll.ch[i].padding_2[0]; @@ -269,16 +269,16 @@ static void dw_edma_debugfs_regs(void) REGISTER(ctrl_data_arb_prior), REGISTER(ctrl), }; - struct dentry *regs_dir; + struct dentry *regs_dent; int nr_entries; - regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs); + regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); - dw_edma_debugfs_regs_wr(regs_dir); - dw_edma_debugfs_regs_rd(regs_dir); + dw_edma_debugfs_regs_wr(regs_dent); + dw_edma_debugfs_regs_rd(regs_dent); } void dw_edma_v0_debugfs_on(struct dw_edma *_dw) From patchwork Mon Aug 22 18:53:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951142 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8DC1EC3F6B0 for ; Mon, 22 Aug 2022 18:55:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237723AbiHVSzE (ORCPT ); Mon, 22 Aug 2022 14:55:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37354 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237829AbiHVSyY (ORCPT ); Mon, 22 Aug 2022 14:54:24 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 86A2A40E12; Mon, 22 Aug 2022 11:54:02 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 9A588DA8; Mon, 22 Aug 2022 21:57:12 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 9A588DA8 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194632; 
bh=Gkk7/wuRGiJqLIFB/dQGceCCDwcGwRLJqDMhy3IdGqc=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=l0dF9GlTZS8HSL5nxJ2MGiF9fZtvLT764BthkMc8m0MgXqch2w5xhlhahrVojjxKr KIligtziQEe00jizewsPBo+OT1xKvqmdz+gYiUittYKs4YzA9TYsLV1tsKM3JyGw7C 4ETWc06/kDiDeJY0e33mTj205Lk5BLTUNKmUEY0M= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:58 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 15/24] dmaengine: dw-edma: Simplify the DebugFS context CSRs init procedure Date: Mon, 22 Aug 2022 21:53:23 +0300 Message-ID: <20220822185332.26149-16-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org DW eDMA v4.70a and older have the read and write channel context CSRs indirectly accessible. That means the CSRs like Channel Control, Xfer size, SAR, DAR and LLP address are accessed at a fixed MMIO address, while the channel they refer to is selected by the Viewport CSR. In order to have coherent access to these registers the CSR IOs are supposed to be protected with a spin-lock. DW eDMA v4.80a and newer normally have unrolled Read/Write channel context registers, that is all the CSRs denoted before are directly mapped in the controller MMIO space. Since both normal and viewport-based registers are exposed via the DebugFS nodes, the original code author decided to implement an algorithm based on the unrolled CSRs mapping with the viewport addresses recalculated when required. The problem is that such an implementation turned out to be, first, unscalable (it supports only a single eDMA instance per platform since the base address is statically preserved) and, second, needlessly overcomplicated (it loops over all Rd/Wr context addresses and re-calculates the viewport base address on each DebugFS node access). The algorithm can be greatly simplified just by adding the channel ID and its direction fields to the eDMA DebugFS node descriptor. These new parameters can be used to find the CSR offset within the corresponding channel registers space. The DW eDMA DebugFS node getter will then also use them to activate the respective context CSRs viewport before reading data from the specified register. In case of the unrolled version of the CSRs mapping no spin-lock is taken/released and no viewport is activated, just as before this modification. Note this modification fixes the REGISTER() macros using an externally defined local variable. The same problem with the rest of the macros will be fixed in the next commit. 
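For readers who prefer to see the end result before the diff, here is a condensed sketch of the DebugFS getter this patch arrives at; it follows the resulting code, only trimmed of unrelated context (dw and regs are still the file-scope variables at this point in the series):

	static int dw_edma_debugfs_u32_get(void *data, u64 *val)
	{
		struct dw_edma_debugfs_entry *entry = data;
		void __iomem *reg = entry->reg;

		if (dw->chip->mf == EDMA_MF_EDMA_LEGACY &&
		    reg >= (void __iomem *)&regs->type.legacy.ch) {
			unsigned long flags;
			u32 viewport_sel;

			/* BIT(31) selects the read bank, the viewport field selects the channel */
			viewport_sel = entry->dir == EDMA_DIR_READ ? BIT(31) : 0;
			viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, entry->ch);

			raw_spin_lock_irqsave(&dw->lock, flags);

			/* Route the fixed-address context CSRs to the right channel */
			writel(viewport_sel, &regs->type.legacy.viewport_sel);
			*val = readl(reg);

			raw_spin_unlock_irqrestore(&dw->lock, flags);
		} else {
			/* Unrolled mapping: the register is directly addressable */
			*val = readl(reg);
		}

		return 0;
	}

With the direction and channel ID stored in each descriptor, the static per-channel address-limits table and the loop that searched it on every read become unnecessary, which is exactly what the diff below removes.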
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 84 +++++++++++------------- 1 file changed, 38 insertions(+), 46 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 7bb3363b40e4..1596eedf35c5 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -15,9 +15,27 @@ #define REGS_ADDR(name) \ ((void __iomem *)®s->name) + +#define REGS_CH_ADDR(name, _dir, _ch) \ + ({ \ + struct dw_edma_v0_ch_regs __iomem *__ch_regs; \ + \ + if ((dw)->chip->mf == EDMA_MF_EDMA_LEGACY) \ + __ch_regs = ®s->type.legacy.ch; \ + else if (_dir == EDMA_DIR_READ) \ + __ch_regs = ®s->type.unroll.ch[_ch].rd; \ + else \ + __ch_regs = ®s->type.unroll.ch[_ch].wr; \ + \ + (void __iomem *)&__ch_regs->name; \ + }) + #define REGISTER(name) \ { #name, REGS_ADDR(name) } +#define CTX_REGISTER(name, dir, ch) \ + { #name, REGS_CH_ADDR(name, dir, ch), dir, ch } + #define WR_REGISTER(name) \ { #name, REGS_ADDR(wr_##name) } #define RD_REGISTER(name) \ @@ -41,14 +59,11 @@ static struct dw_edma *dw; static struct dw_edma_v0_regs __iomem *regs; -static struct { - void __iomem *start; - void __iomem *end; -} lim[2][EDMA_V0_MAX_NR_CH]; - struct dw_edma_debugfs_entry { const char *name; void __iomem *reg; + enum dw_edma_dir dir; + u16 ch; }; static int dw_edma_debugfs_u32_get(void *data, u64 *val) @@ -58,33 +73,16 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)®s->type.legacy.ch) { - void __iomem *ptr = ®s->type.legacy.ch; - u32 viewport_sel = 0; unsigned long flags; - u16 ch; - - for (ch = 0; ch < dw->wr_ch_cnt; ch++) - if (lim[0][ch].start >= reg && reg < lim[0][ch].end) { - ptr += (reg - lim[0][ch].start); - goto legacy_sel_wr; - } - - for (ch = 0; ch < dw->rd_ch_cnt; ch++) - if (lim[1][ch].start >= reg && reg < lim[1][ch].end) { - ptr += (reg - lim[1][ch].start); - goto legacy_sel_rd; - } - - return 0; -legacy_sel_rd: - viewport_sel = BIT(31); -legacy_sel_wr: - viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch); + u32 viewport_sel; + + viewport_sel = entry->dir == EDMA_DIR_READ ? 
BIT(31) : 0; + viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, entry->ch); raw_spin_lock_irqsave(&dw->lock, flags); writel(viewport_sel, ®s->type.legacy.viewport_sel); - *val = readl(ptr); + *val = readl(reg); raw_spin_unlock_irqrestore(&dw->lock, flags); } else { @@ -114,19 +112,19 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], } } -static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, +static void dw_edma_debugfs_regs_ch(enum dw_edma_dir dir, u16 ch, struct dentry *dent) { - const struct dw_edma_debugfs_entry debugfs_regs[] = { - REGISTER(ch_control1), - REGISTER(ch_control2), - REGISTER(transfer_size), - REGISTER(sar.lsb), - REGISTER(sar.msb), - REGISTER(dar.lsb), - REGISTER(dar.msb), - REGISTER(llp.lsb), - REGISTER(llp.msb), + struct dw_edma_debugfs_entry debugfs_regs[] = { + CTX_REGISTER(ch_control1, dir, ch), + CTX_REGISTER(ch_control2, dir, ch), + CTX_REGISTER(transfer_size, dir, ch), + CTX_REGISTER(sar.lsb, dir, ch), + CTX_REGISTER(sar.msb, dir, ch), + CTX_REGISTER(dar.lsb, dir, ch), + CTX_REGISTER(dar.msb, dir, ch), + CTX_REGISTER(llp.lsb, dir, ch), + CTX_REGISTER(llp.msb, dir, ch), }; int nr_entries; @@ -191,10 +189,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dent); - - lim[0][i].start = ®s->type.unroll.ch[i].wr; - lim[0][i].end = ®s->type.unroll.ch[i].padding_1[0]; + dw_edma_debugfs_regs_ch(EDMA_DIR_WRITE, i, ch_dent); } } @@ -256,10 +251,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dent); - - lim[1][i].start = ®s->type.unroll.ch[i].rd; - lim[1][i].end = ®s->type.unroll.ch[i].padding_2[0]; + dw_edma_debugfs_regs_ch(EDMA_DIR_READ, i, ch_dent); } } From patchwork Mon Aug 22 18:53:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951144 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86323C32789 for ; Mon, 22 Aug 2022 18:55:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238029AbiHVSzG (ORCPT ); Mon, 22 Aug 2022 14:55:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237835AbiHVSyY (ORCPT ); Mon, 22 Aug 2022 14:54:24 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 63E2040E1F; Mon, 22 Aug 2022 11:54:03 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 4A5E5DA4; Mon, 22 Aug 2022 21:57:13 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 4A5E5DA4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194633; bh=QUxJaR65YdZJGvnt8gY9IQtkvNZZN89V17AaNbVexmY=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=n/i+oe3NSHsnvHhFGafgiqzwZRRUhkZzK+D0S9NK7gmSG4hjIupaV77jDZdVzEWP/ 2BlF6Iro6gMztPpvBVaENgZg/th7GgeIZyyH9PNSMw5J76YdlcIwfH8moM+nA2vbCZ wpDgBA3sDm8Pdwh1YL1EVksLduUha9KRxxDXGVZ0= Received: from localhost (192.168.168.10) by 
mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:58 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 16/24] dmaengine: dw-edma: Move eDMA data pointer to DebugFS node descriptor Date: Mon, 22 Aug 2022 21:53:24 +0300 Message-ID: <20220822185332.26149-17-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org The last thing that really stops the DebugFS part of the eDMA driver from supporting the multi-eDMA platform in is keeping the eDMA private data pointer in the static area of the DebugFS module. Since the DebugFS node descriptors are now kz-allocated we can freely move that pointer to being preserved in the descriptors. After the DebugFS initialization procedure that pointer will be used in the DebugFS files getter to access the common CSRs space and the context CSRs spin-lock. So the main part of this change is connected with the DebugFS nodes descriptors initialization macros, which aside with already defined prototypes now require to have the DW eDMA private data pointer passed. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 242 +++++++++++------------ 1 file changed, 117 insertions(+), 125 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 1596eedf35c5..e6cf608d121b 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -13,53 +13,55 @@ #include "dw-edma-v0-regs.h" #include "dw-edma-core.h" -#define REGS_ADDR(name) \ - ((void __iomem *)®s->name) +#define REGS_ADDR(dw, name) \ + ({ \ + struct dw_edma_v0_regs __iomem *__regs = (dw)->chip->reg_base; \ + \ + (void __iomem *)&__regs->name; \ + }) -#define REGS_CH_ADDR(name, _dir, _ch) \ +#define REGS_CH_ADDR(dw, name, _dir, _ch) \ ({ \ struct dw_edma_v0_ch_regs __iomem *__ch_regs; \ \ if ((dw)->chip->mf == EDMA_MF_EDMA_LEGACY) \ - __ch_regs = ®s->type.legacy.ch; \ + __ch_regs = REGS_ADDR(dw, type.legacy.ch); \ else if (_dir == EDMA_DIR_READ) \ - __ch_regs = ®s->type.unroll.ch[_ch].rd; \ + __ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].rd); \ else \ - __ch_regs = ®s->type.unroll.ch[_ch].wr; \ + __ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].wr); \ \ (void __iomem *)&__ch_regs->name; \ }) -#define REGISTER(name) \ - { #name, REGS_ADDR(name) } +#define REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, name) } -#define CTX_REGISTER(name, dir, ch) \ - { #name, REGS_CH_ADDR(name, dir, ch), dir, ch } +#define CTX_REGISTER(dw, name, dir, ch) \ + { dw, #name, REGS_CH_ADDR(dw, name, dir, ch), dir, ch } -#define WR_REGISTER(name) \ - { #name, REGS_ADDR(wr_##name) } -#define RD_REGISTER(name) \ - { #name, REGS_ADDR(rd_##name) } +#define WR_REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, wr_##name) } +#define RD_REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, rd_##name) } -#define WR_REGISTER_LEGACY(name) \ - { #name, 
REGS_ADDR(type.legacy.wr_##name) } +#define WR_REGISTER_LEGACY(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.legacy.wr_##name) } #define RD_REGISTER_LEGACY(name) \ - { #name, REGS_ADDR(type.legacy.rd_##name) } + { dw, #name, REGS_ADDR(dw, type.legacy.rd_##name) } -#define WR_REGISTER_UNROLL(name) \ - { #name, REGS_ADDR(type.unroll.wr_##name) } -#define RD_REGISTER_UNROLL(name) \ - { #name, REGS_ADDR(type.unroll.rd_##name) } +#define WR_REGISTER_UNROLL(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.unroll.wr_##name) } +#define RD_REGISTER_UNROLL(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.unroll.rd_##name) } #define WRITE_STR "write" #define READ_STR "read" #define CHANNEL_STR "channel" #define REGISTERS_STR "registers" -static struct dw_edma *dw; -static struct dw_edma_v0_regs __iomem *regs; - struct dw_edma_debugfs_entry { + struct dw_edma *dw; const char *name; void __iomem *reg; enum dw_edma_dir dir; @@ -69,10 +71,11 @@ struct dw_edma_debugfs_entry { static int dw_edma_debugfs_u32_get(void *data, u64 *val) { struct dw_edma_debugfs_entry *entry = data; + struct dw_edma *dw = entry->dw; void __iomem *reg = entry->reg; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && - reg >= (void __iomem *)®s->type.legacy.ch) { + reg >= REGS_ADDR(dw, type.legacy.ch)) { unsigned long flags; u32 viewport_sel; @@ -81,7 +84,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) raw_spin_lock_irqsave(&dw->lock, flags); - writel(viewport_sel, ®s->type.legacy.viewport_sel); + writel(viewport_sel, REGS_ADDR(dw, type.legacy.viewport_sel)); *val = readl(reg); raw_spin_unlock_irqrestore(&dw->lock, flags); @@ -93,7 +96,8 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], +static void dw_edma_debugfs_create_x32(struct dw_edma *dw, + const struct dw_edma_debugfs_entry ini[], int nr_entries, struct dentry *dent) { struct dw_edma_debugfs_entry *entries; @@ -112,62 +116,62 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], } } -static void dw_edma_debugfs_regs_ch(enum dw_edma_dir dir, u16 ch, - struct dentry *dent) +static void dw_edma_debugfs_regs_ch(struct dw_edma *dw, enum dw_edma_dir dir, + u16 ch, struct dentry *dent) { struct dw_edma_debugfs_entry debugfs_regs[] = { - CTX_REGISTER(ch_control1, dir, ch), - CTX_REGISTER(ch_control2, dir, ch), - CTX_REGISTER(transfer_size, dir, ch), - CTX_REGISTER(sar.lsb, dir, ch), - CTX_REGISTER(sar.msb, dir, ch), - CTX_REGISTER(dar.lsb, dir, ch), - CTX_REGISTER(dar.msb, dir, ch), - CTX_REGISTER(llp.lsb, dir, ch), - CTX_REGISTER(llp.msb, dir, ch), + CTX_REGISTER(dw, ch_control1, dir, ch), + CTX_REGISTER(dw, ch_control2, dir, ch), + CTX_REGISTER(dw, transfer_size, dir, ch), + CTX_REGISTER(dw, sar.lsb, dir, ch), + CTX_REGISTER(dw, sar.msb, dir, ch), + CTX_REGISTER(dw, dar.lsb, dir, ch), + CTX_REGISTER(dw, dar.msb, dir, ch), + CTX_REGISTER(dw, llp.lsb, dir, ch), + CTX_REGISTER(dw, llp.msb, dir, ch), }; int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, dent); } -static void dw_edma_debugfs_regs_wr(struct dentry *dent) +static void dw_edma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ - WR_REGISTER(engine_en), - WR_REGISTER(doorbell), - 
WR_REGISTER(ch_arb_weight.lsb), - WR_REGISTER(ch_arb_weight.msb), + WR_REGISTER(dw, engine_en), + WR_REGISTER(dw, doorbell), + WR_REGISTER(dw, ch_arb_weight.lsb), + WR_REGISTER(dw, ch_arb_weight.msb), /* eDMA interrupts registers */ - WR_REGISTER(int_status), - WR_REGISTER(int_mask), - WR_REGISTER(int_clear), - WR_REGISTER(err_status), - WR_REGISTER(done_imwr.lsb), - WR_REGISTER(done_imwr.msb), - WR_REGISTER(abort_imwr.lsb), - WR_REGISTER(abort_imwr.msb), - WR_REGISTER(ch01_imwr_data), - WR_REGISTER(ch23_imwr_data), - WR_REGISTER(ch45_imwr_data), - WR_REGISTER(ch67_imwr_data), - WR_REGISTER(linked_list_err_en), + WR_REGISTER(dw, int_status), + WR_REGISTER(dw, int_mask), + WR_REGISTER(dw, int_clear), + WR_REGISTER(dw, err_status), + WR_REGISTER(dw, done_imwr.lsb), + WR_REGISTER(dw, done_imwr.msb), + WR_REGISTER(dw, abort_imwr.lsb), + WR_REGISTER(dw, abort_imwr.msb), + WR_REGISTER(dw, ch01_imwr_data), + WR_REGISTER(dw, ch23_imwr_data), + WR_REGISTER(dw, ch45_imwr_data), + WR_REGISTER(dw, ch67_imwr_data), + WR_REGISTER(dw, linked_list_err_en), }; const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ - WR_REGISTER_UNROLL(engine_chgroup), - WR_REGISTER_UNROLL(engine_hshake_cnt.lsb), - WR_REGISTER_UNROLL(engine_hshake_cnt.msb), - WR_REGISTER_UNROLL(ch0_pwr_en), - WR_REGISTER_UNROLL(ch1_pwr_en), - WR_REGISTER_UNROLL(ch2_pwr_en), - WR_REGISTER_UNROLL(ch3_pwr_en), - WR_REGISTER_UNROLL(ch4_pwr_en), - WR_REGISTER_UNROLL(ch5_pwr_en), - WR_REGISTER_UNROLL(ch6_pwr_en), - WR_REGISTER_UNROLL(ch7_pwr_en), + WR_REGISTER_UNROLL(dw, engine_chgroup), + WR_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb), + WR_REGISTER_UNROLL(dw, engine_hshake_cnt.msb), + WR_REGISTER_UNROLL(dw, ch0_pwr_en), + WR_REGISTER_UNROLL(dw, ch1_pwr_en), + WR_REGISTER_UNROLL(dw, ch2_pwr_en), + WR_REGISTER_UNROLL(dw, ch3_pwr_en), + WR_REGISTER_UNROLL(dw, ch4_pwr_en), + WR_REGISTER_UNROLL(dw, ch5_pwr_en), + WR_REGISTER_UNROLL(dw, ch6_pwr_en), + WR_REGISTER_UNROLL(dw, ch7_pwr_en), }; struct dentry *regs_dent, *ch_dent; int nr_entries, i; @@ -176,11 +180,11 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dent) regs_dent = debugfs_create_dir(WRITE_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); - dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, + dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries, regs_dent); } @@ -189,47 +193,47 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(EDMA_DIR_WRITE, i, ch_dent); + dw_edma_debugfs_regs_ch(dw, EDMA_DIR_WRITE, i, ch_dent); } } -static void dw_edma_debugfs_regs_rd(struct dentry *dent) +static void dw_edma_debugfs_regs_rd(struct dw_edma *dw, struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ - RD_REGISTER(engine_en), - RD_REGISTER(doorbell), - RD_REGISTER(ch_arb_weight.lsb), - RD_REGISTER(ch_arb_weight.msb), + RD_REGISTER(dw, engine_en), + RD_REGISTER(dw, doorbell), + RD_REGISTER(dw, ch_arb_weight.lsb), + RD_REGISTER(dw, ch_arb_weight.msb), /* eDMA interrupts registers */ - RD_REGISTER(int_status), - RD_REGISTER(int_mask), - RD_REGISTER(int_clear), - RD_REGISTER(err_status.lsb), - RD_REGISTER(err_status.msb), - RD_REGISTER(linked_list_err_en), - RD_REGISTER(done_imwr.lsb), - 
RD_REGISTER(done_imwr.msb), - RD_REGISTER(abort_imwr.lsb), - RD_REGISTER(abort_imwr.msb), - RD_REGISTER(ch01_imwr_data), - RD_REGISTER(ch23_imwr_data), - RD_REGISTER(ch45_imwr_data), - RD_REGISTER(ch67_imwr_data), + RD_REGISTER(dw, int_status), + RD_REGISTER(dw, int_mask), + RD_REGISTER(dw, int_clear), + RD_REGISTER(dw, err_status.lsb), + RD_REGISTER(dw, err_status.msb), + RD_REGISTER(dw, linked_list_err_en), + RD_REGISTER(dw, done_imwr.lsb), + RD_REGISTER(dw, done_imwr.msb), + RD_REGISTER(dw, abort_imwr.lsb), + RD_REGISTER(dw, abort_imwr.msb), + RD_REGISTER(dw, ch01_imwr_data), + RD_REGISTER(dw, ch23_imwr_data), + RD_REGISTER(dw, ch45_imwr_data), + RD_REGISTER(dw, ch67_imwr_data), }; const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ - RD_REGISTER_UNROLL(engine_chgroup), - RD_REGISTER_UNROLL(engine_hshake_cnt.lsb), - RD_REGISTER_UNROLL(engine_hshake_cnt.msb), - RD_REGISTER_UNROLL(ch0_pwr_en), - RD_REGISTER_UNROLL(ch1_pwr_en), - RD_REGISTER_UNROLL(ch2_pwr_en), - RD_REGISTER_UNROLL(ch3_pwr_en), - RD_REGISTER_UNROLL(ch4_pwr_en), - RD_REGISTER_UNROLL(ch5_pwr_en), - RD_REGISTER_UNROLL(ch6_pwr_en), - RD_REGISTER_UNROLL(ch7_pwr_en), + RD_REGISTER_UNROLL(dw, engine_chgroup), + RD_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb), + RD_REGISTER_UNROLL(dw, engine_hshake_cnt.msb), + RD_REGISTER_UNROLL(dw, ch0_pwr_en), + RD_REGISTER_UNROLL(dw, ch1_pwr_en), + RD_REGISTER_UNROLL(dw, ch2_pwr_en), + RD_REGISTER_UNROLL(dw, ch3_pwr_en), + RD_REGISTER_UNROLL(dw, ch4_pwr_en), + RD_REGISTER_UNROLL(dw, ch5_pwr_en), + RD_REGISTER_UNROLL(dw, ch6_pwr_en), + RD_REGISTER_UNROLL(dw, ch7_pwr_en), }; struct dentry *regs_dent, *ch_dent; int nr_entries, i; @@ -238,11 +242,11 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dent) regs_dent = debugfs_create_dir(READ_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); - dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, + dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries, regs_dent); } @@ -251,15 +255,15 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(EDMA_DIR_READ, i, ch_dent); + dw_edma_debugfs_regs_ch(dw, EDMA_DIR_READ, i, ch_dent); } } -static void dw_edma_debugfs_regs(void) +static void dw_edma_debugfs_regs(struct dw_edma *dw) { const struct dw_edma_debugfs_entry debugfs_regs[] = { - REGISTER(ctrl_data_arb_prior), - REGISTER(ctrl), + REGISTER(dw, ctrl_data_arb_prior), + REGISTER(dw, ctrl), }; struct dentry *regs_dent; int nr_entries; @@ -267,40 +271,28 @@ static void dw_edma_debugfs_regs(void) regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); - dw_edma_debugfs_regs_wr(regs_dent); - dw_edma_debugfs_regs_rd(regs_dent); + dw_edma_debugfs_regs_wr(dw, regs_dent); + dw_edma_debugfs_regs_rd(dw, regs_dent); } -void dw_edma_v0_debugfs_on(struct dw_edma *_dw) +void dw_edma_v0_debugfs_on(struct dw_edma *dw) { if (!debugfs_initialized()) return; - dw = _dw; - if (!dw) - return; - - regs = dw->chip->reg_base; - if (!regs) - return; - dw->debugfs = debugfs_create_dir(dw->name, NULL); debugfs_create_u32("mf", 0444, dw->debugfs, 
&dw->chip->mf); debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt); - dw_edma_debugfs_regs(); + dw_edma_debugfs_regs(dw); } -void dw_edma_v0_debugfs_off(struct dw_edma *_dw) +void dw_edma_v0_debugfs_off(struct dw_edma *dw) { - dw = _dw; - if (!dw) - return; - debugfs_remove_recursive(dw->debugfs); dw->debugfs = NULL; } From patchwork Mon Aug 22 18:53:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951143 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DB4DC32774 for ; Mon, 22 Aug 2022 18:55:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238023AbiHVSzF (ORCPT ); Mon, 22 Aug 2022 14:55:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37408 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237833AbiHVSyY (ORCPT ); Mon, 22 Aug 2022 14:54:24 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 44E12B89; Mon, 22 Aug 2022 11:54:04 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 113FADA9; Mon, 22 Aug 2022 21:57:14 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 113FADA9 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194634; bh=LbIv8L9kfjP+q0aPoe6b8Tph4w6cDMOBBFPhijrHQXg=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=nx4hWTXGcfJO8cRaOYmAMp6Hs7xfB46fUawX6/p4wMbRSaKrnjcTV+R0zV21kqebr VDS23bej36F95J2/3/sGKTt7l/Bw2yUu5xBhSFXkGTEypjG+66Tv/bO262I2+X6RQA N3FWKQImwszWf5SOsuSY4mSOusBuhsI1dKuLxEEY= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:53:59 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 17/24] dmaengine: dw-edma: Join Write/Read channels into a single device Date: Mon, 22 Aug 2022 21:53:25 +0300 Message-ID: <20220822185332.26149-18-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Indeed there is no point in such split up because due to multiple reasons. First of all eDMA read and write channels belong to one physical controller. Splitting them up illogical. Secondly the channels differentiating can be done by means of the filtering and the dma_get_slave_caps() method. Finally having these channels handled separately not only needlessly complicates the code, but also causes the DebugFS error printed to console: >> Debugfs: Directory '1f052000.pcie' with parent 'dmaengine' already present! So to speak let's join the read/write channels into a single DMA device. 
The client drivers will be able to choose the channel with required capability by getting the DMA slave direction setting. It's default value is overridden by the dw_edma_device_caps() callback in accordance with the channel nature. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 116 +++++++++++++++-------------- drivers/dma/dw-edma/dw-edma-core.h | 5 +- 2 files changed, 61 insertions(+), 60 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index e9cb3056b6b7..f5124e7b50cf 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -209,6 +209,24 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan) desc->chunks_alloc--; } +static void dw_edma_device_caps(struct dma_chan *dchan, + struct dma_slave_caps *caps) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + + if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + if (chan->dir == EDMA_DIR_READ) + caps->directions = BIT(DMA_DEV_TO_MEM); + else + caps->directions = BIT(DMA_MEM_TO_DEV); + } else { + if (chan->dir == EDMA_DIR_WRITE) + caps->directions = BIT(DMA_DEV_TO_MEM); + else + caps->directions = BIT(DMA_MEM_TO_DEV); + } +} + static int dw_edma_device_config(struct dma_chan *dchan, struct dma_slave_config *config) { @@ -723,8 +741,7 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan) pm_runtime_put(chan->dw->chip->dev); } -static int dw_edma_channel_setup(struct dw_edma *dw, bool write, - u32 wr_alloc, u32 rd_alloc) +static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) { struct dw_edma_chip *chip = dw->chip; struct dw_edma_region *dt_region; @@ -732,27 +749,15 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, struct dw_edma_chan *chan; struct dw_edma_irq *irq; struct dma_device *dma; - u32 alloc, off_alloc; - u32 i, j, cnt; - int err = 0; + u32 i, ch_cnt; u32 pos; - if (write) { - i = 0; - cnt = dw->wr_ch_cnt; - dma = &dw->wr_edma; - alloc = wr_alloc; - off_alloc = 0; - } else { - i = dw->wr_ch_cnt; - cnt = dw->rd_ch_cnt; - dma = &dw->rd_edma; - alloc = rd_alloc; - off_alloc = wr_alloc; - } + ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt; + dma = &dw->dma; INIT_LIST_HEAD(&dma->channels); - for (j = 0; (alloc || dw->nr_irqs == 1) && j < cnt; j++, i++) { + + for (i = 0; i < ch_cnt; i++) { chan = &dw->chan[i]; dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL); @@ -762,52 +767,62 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, chan->vc.chan.private = dt_region; chan->dw = dw; - chan->id = j; - chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ; + + if (i < dw->wr_ch_cnt) { + chan->id = i; + chan->dir = EDMA_DIR_WRITE; + } else { + chan->id = i - dw->wr_ch_cnt; + chan->dir = EDMA_DIR_READ; + } + chan->configured = false; chan->request = EDMA_REQ_NONE; chan->status = EDMA_ST_IDLE; - if (write) - chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ); + if (chan->dir == EDMA_DIR_WRITE) + chan->ll_max = (chip->ll_region_wr[chan->id].sz / EDMA_LL_SZ); else - chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ); + chan->ll_max = (chip->ll_region_rd[chan->id].sz / EDMA_LL_SZ); chan->ll_max -= 1; dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n", - write ? "write" : "read", j, chan->ll_max); + chan->dir == EDMA_DIR_WRITE ? 
"write" : "read", + chan->id, chan->ll_max); if (dw->nr_irqs == 1) pos = 0; + else if (chan->dir == EDMA_DIR_WRITE) + pos = chan->id % wr_alloc; else - pos = off_alloc + (j % alloc); + pos = wr_alloc + chan->id % rd_alloc; irq = &dw->irq[pos]; - if (write) - irq->wr_mask |= BIT(j); + if (chan->dir == EDMA_DIR_WRITE) + irq->wr_mask |= BIT(chan->id); else - irq->rd_mask |= BIT(j); + irq->rd_mask |= BIT(chan->id); irq->dw = dw; memcpy(&chan->msi, &irq->msi, sizeof(chan->msi)); dev_vdbg(dev, "MSI:\t\tChannel %s[%u] addr=0x%.8x%.8x, data=0x%.8x\n", - write ? "write" : "read", j, + chan->dir == EDMA_DIR_WRITE ? "write" : "read", chan->id, chan->msi.address_hi, chan->msi.address_lo, chan->msi.data); chan->vc.desc_free = vchan_free_desc; vchan_init(&chan->vc, dma); - if (write) { - dt_region->paddr = chip->dt_region_wr[j].paddr; - dt_region->vaddr = chip->dt_region_wr[j].vaddr; - dt_region->sz = chip->dt_region_wr[j].sz; + if (chan->dir == EDMA_DIR_WRITE) { + dt_region->paddr = chip->dt_region_wr[chan->id].paddr; + dt_region->vaddr = chip->dt_region_wr[chan->id].vaddr; + dt_region->sz = chip->dt_region_wr[chan->id].sz; } else { - dt_region->paddr = chip->dt_region_rd[j].paddr; - dt_region->vaddr = chip->dt_region_rd[j].vaddr; - dt_region->sz = chip->dt_region_rd[j].sz; + dt_region->paddr = chip->dt_region_rd[chan->id].paddr; + dt_region->vaddr = chip->dt_region_rd[chan->id].vaddr; + dt_region->sz = chip->dt_region_rd[chan->id].sz; } dw_edma_v0_core_device_config(chan); @@ -819,7 +834,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma_cap_set(DMA_CYCLIC, dma->cap_mask); dma_cap_set(DMA_PRIVATE, dma->cap_mask); dma_cap_set(DMA_INTERLEAVE, dma->cap_mask); - dma->directions = BIT(write ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV); + dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; @@ -828,6 +843,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma->dev = chip->dev; dma->device_alloc_chan_resources = dw_edma_alloc_chan_resources; dma->device_free_chan_resources = dw_edma_free_chan_resources; + dma->device_caps = dw_edma_device_caps; dma->device_config = dw_edma_device_config; dma->device_pause = dw_edma_device_pause; dma->device_resume = dw_edma_device_resume; @@ -841,9 +857,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma_set_max_seg_size(dma->dev, U32_MAX); /* Register DMA device */ - err = dma_async_device_register(dma); - - return err; + return dma_async_device_register(dma); } static inline void dw_edma_dec_irq_alloc(int *nr_irqs, u32 *alloc, u16 cnt) @@ -988,13 +1002,8 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (err) return err; - /* Setup write channels */ - err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc); - if (err) - goto err_irq_free; - - /* Setup read channels */ - err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc); + /* Setup write/read channels */ + err = dw_edma_channel_setup(dw, wr_alloc, rd_alloc); if (err) goto err_irq_free; @@ -1034,15 +1043,8 @@ int dw_edma_remove(struct dw_edma_chip *chip) pm_runtime_disable(dev); /* Deregister eDMA device */ - dma_async_device_unregister(&dw->wr_edma); - list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels, - vc.chan.device_node) { - tasklet_kill(&chan->vc.task); - list_del(&chan->vc.chan.device_node); - } - - dma_async_device_unregister(&dw->rd_edma); - 
list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels, + dma_async_device_unregister(&dw->dma); + list_for_each_entry_safe(chan, _chan, &dw->dma.channels, vc.chan.device_node) { tasklet_kill(&chan->vc.task); list_del(&chan->vc.chan.device_node); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 85df2d511907..b576a8fff45a 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -98,10 +98,9 @@ struct dw_edma_irq { struct dw_edma { char name[20]; - struct dma_device wr_edma; - u16 wr_ch_cnt; + struct dma_device dma; - struct dma_device rd_edma; + u16 wr_ch_cnt; u16 rd_ch_cnt; struct dw_edma_irq *irq; From patchwork Mon Aug 22 18:53:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951147 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42335C32789 for ; Mon, 22 Aug 2022 18:55:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237357AbiHVSzK (ORCPT ); Mon, 22 Aug 2022 14:55:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37464 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237834AbiHVSyZ (ORCPT ); Mon, 22 Aug 2022 14:54:25 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 07AB11A04C; Mon, 22 Aug 2022 11:54:05 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 496FBDA2; Mon, 22 Aug 2022 21:57:15 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 496FBDA2 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194635; bh=BBbcUSUK+nhQOSUgzTzaONwGSvzRmhhyC99mc2mlJts=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=Q66pwGFjrAj9Het3YpmckVZIsbhhBKrIYxiROC0O5u1WD0w5lRLsZoadIVeJlompX Oadp7iwG5HF2Se0P1rg1ZtVJSdVPwh3HLHw/GSVUGowvVjRt6Wainn6QWGfNlIVtGz qXIQnbsYvn0SpNZbibsprV+v8N4qVvut980ClaGE= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:54:00 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 18/24] dmaengine: dw-edma: Use DMA-engine device DebugFS subdirectory Date: Mon, 22 Aug 2022 21:53:26 +0300 Message-ID: <20220822185332.26149-19-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Since all DW eDMA read and write channels are now installed in a framework of a single DMA-engine device, we can freely move all the DW eDMA-specific DebugFS nodes into a ready-to-use DMA-engine DebugFS subdirectory. 
It's created during the DMA-device registration and can be found in the dma_device.dbg_dev_root field. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 3 --- drivers/dma/dw-edma/dw-edma-core.h | 3 --- drivers/dma/dw-edma/dw-edma-v0-core.c | 5 ----- drivers/dma/dw-edma/dw-edma-v0-core.h | 1 - drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 16 ++++------------ drivers/dma/dw-edma/dw-edma-v0-debugfs.h | 5 ----- 6 files changed, 4 insertions(+), 29 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index f5124e7b50cf..7ba3b60c960c 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -1050,9 +1050,6 @@ int dw_edma_remove(struct dw_edma_chip *chip) list_del(&chan->vc.chan.device_node); } - /* Turn debugfs off */ - dw_edma_v0_core_debugfs_off(dw); - return 0; } EXPORT_SYMBOL_GPL(dw_edma_remove); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index b576a8fff45a..e3ad3e372b55 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -111,9 +111,6 @@ struct dw_edma { raw_spinlock_t lock; /* Only for legacy */ struct dw_edma_chip *chip; -#ifdef CONFIG_DEBUG_FS - struct dentry *debugfs; -#endif /* CONFIG_DEBUG_FS */ }; struct dw_edma_sg { diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 77e6cfe52e0a..66f296daac5a 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -504,8 +504,3 @@ void dw_edma_v0_core_debugfs_on(struct dw_edma *dw) { dw_edma_v0_debugfs_on(dw); } - -void dw_edma_v0_core_debugfs_off(struct dw_edma *dw) -{ - dw_edma_v0_debugfs_off(dw); -} diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.h b/drivers/dma/dw-edma/dw-edma-v0-core.h index 75aec6d31b21..ab96a1f48080 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.h +++ b/drivers/dma/dw-edma/dw-edma-v0-core.h @@ -23,6 +23,5 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first); int dw_edma_v0_core_device_config(struct dw_edma_chan *chan); /* eDMA debug fs callbacks */ void dw_edma_v0_core_debugfs_on(struct dw_edma *dw); -void dw_edma_v0_core_debugfs_off(struct dw_edma *dw); #endif /* _DW_EDMA_V0_CORE_H */ diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index e6cf608d121b..d12c607433bf 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -268,7 +268,7 @@ static void dw_edma_debugfs_regs(struct dw_edma *dw) struct dentry *regs_dent; int nr_entries; - regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs); + regs_dent = debugfs_create_dir(REGISTERS_STR, dw->dma.dbg_dev_root); nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); @@ -282,17 +282,9 @@ void dw_edma_v0_debugfs_on(struct dw_edma *dw) if (!debugfs_initialized()) return; - dw->debugfs = debugfs_create_dir(dw->name, NULL); - - debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); - debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); - debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt); + debugfs_create_u32("mf", 0444, dw->dma.dbg_dev_root, &dw->chip->mf); + debugfs_create_u16("wr_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->wr_ch_cnt); + debugfs_create_u16("rd_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->rd_ch_cnt); 
dw_edma_debugfs_regs(dw); } - -void dw_edma_v0_debugfs_off(struct dw_edma *dw) -{ - debugfs_remove_recursive(dw->debugfs); - dw->debugfs = NULL; -} diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h index 3391b86edf5a..fb3342d97d6d 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h @@ -13,15 +13,10 @@ #ifdef CONFIG_DEBUG_FS void dw_edma_v0_debugfs_on(struct dw_edma *dw); -void dw_edma_v0_debugfs_off(struct dw_edma *dw); #else static inline void dw_edma_v0_debugfs_on(struct dw_edma *dw) { } - -static inline void dw_edma_v0_debugfs_off(struct dw_edma *dw) -{ -} #endif /* CONFIG_DEBUG_FS */ #endif /* _DW_EDMA_V0_DEBUG_FS_H */ From patchwork Mon Aug 22 18:53:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951149 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EDC13C38142 for ; Mon, 22 Aug 2022 18:55:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237516AbiHVSzO (ORCPT ); Mon, 22 Aug 2022 14:55:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38134 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236274AbiHVSyZ (ORCPT ); Mon, 22 Aug 2022 14:54:25 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 0764911472; Mon, 22 Aug 2022 11:54:06 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id F1CEADA5; Mon, 22 Aug 2022 21:57:15 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com F1CEADA5 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194635; bh=osLfZtzFIqt5KIAXvrEPuMWW3SbInLo+cQ7zqsGHX4o=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=EZHOHS9oVsB0vK02jA9eBw/uxBT4qDX0crarG6mJXm6PH8A6imHvgEtUyiyAdpdkt 801qEOh738egNVMwWMLkNrSdiYTjaij/s3P6YQzF4iGkPXWYGpSKxg9YqJ7i0YFIBI IOYkFY32jGrcUnuj+wjHW2ZGA2lMMRTiMD98AbvE= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:54:01 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 19/24] dmaengine: dw-edma: Use non-atomic io-64 methods Date: Mon, 22 Aug 2022 21:53:27 +0300 Message-ID: <20220822185332.26149-20-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Instead of splitting the 64-bits IOs up into two 32-bits ones it's possible to use the already available non-atomic readq/writeq methods implemented exactly for such cases. They are defined in the dedicated header files io-64-nonatomic-lo-hi.h/io-64-nonatomic-hi-lo.h. 
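For illustration, a driver that includes one of those headers can call readq()/writeq() unconditionally; on platforms without native 64-bit MMIO the accessors fall back to two 32-bit accesses (low word first with the lo-hi header, high word first with hi-lo). A minimal sketch with a made-up register offset:

#include <linux/io.h>
#include <linux/io-64-nonatomic-lo-hi.h>

#define FOO_LLP_REG     0x10    /* hypothetical 64-bit register */

static u64 foo_read_llp(void __iomem *base)
{
        /* Plain readq() even on 32-bit builds: the lo-hi fallback is provided */
        return readq(base + FOO_LLP_REG);
}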
So in case if the 64-bits readq/writeq methods are unavailable on some platforms at consideration, the corresponding drivers can have any of these headers included and stop locally re-implementing the 64-bits IO accessors taking into account the non-atomic nature of the included methods. Let's do that in the DW eDMA driver too. Note by doing so we can discard the CONFIG_64BIT config ifdefs from the code. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-v0-core.c | 55 +++++++++------------------ 1 file changed, 18 insertions(+), 37 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 66f296daac5a..51a34b43434c 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -8,6 +8,8 @@ #include +#include + #include "dw-edma-core.h" #include "dw-edma-v0-core.h" #include "dw-edma-v0-regs.h" @@ -53,8 +55,6 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) SET_32(dw, rd_##name, value); \ } while (0) -#ifdef CONFIG_64BIT - #define SET_64(dw, name, value) \ writeq(value, &(__dw_regs(dw)->name)) @@ -80,8 +80,6 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) SET_64(dw, rd_##name, value); \ } while (0) -#endif /* CONFIG_64BIT */ - #define SET_COMPAT(dw, name, value) \ writel(value, &(__dw_regs(dw)->type.unroll.name)) @@ -164,14 +162,13 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, #define SET_LL_32(ll, value) \ writel(value, ll) -#ifdef CONFIG_64BIT - static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, u64 value, void __iomem *addr) { + unsigned long flags; + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; - unsigned long flags; raw_spin_lock_irqsave(&dw->lock, flags); @@ -181,22 +178,22 @@ static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, writel(viewport_sel, &(__dw_regs(dw)->type.legacy.viewport_sel)); - writeq(value, addr); + } + + writeq(value, addr); + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) raw_spin_unlock_irqrestore(&dw->lock, flags); - } else { - writeq(value, addr); - } } static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, const void __iomem *addr) { - u32 value; + unsigned long flags; + u64 value; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; - unsigned long flags; raw_spin_lock_irqsave(&dw->lock, flags); @@ -206,12 +203,12 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, writel(viewport_sel, &(__dw_regs(dw)->type.legacy.viewport_sel)); - value = readq(addr); + } + + value = readq(addr); + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) raw_spin_unlock_irqrestore(&dw->lock, flags); - } else { - value = readq(addr); - } return value; } @@ -225,8 +222,6 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, #define SET_LL_64(ll, value) \ writeq(value, ll) -#endif /* CONFIG_64BIT */ - /* eDMA management callbacks */ void dw_edma_v0_core_off(struct dw_edma *dw) { @@ -325,19 +320,10 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) /* Transfer size */ SET_LL_32(&lli[i].transfer_size, child->sz); /* SAR */ - #ifdef CONFIG_64BIT - SET_LL_64(&lli[i].sar.reg, child->sar); - #else /* CONFIG_64BIT */ - SET_LL_32(&lli[i].sar.lsb, lower_32_bits(child->sar)); - SET_LL_32(&lli[i].sar.msb, upper_32_bits(child->sar)); - #endif /* CONFIG_64BIT */ + 
SET_LL_64(&lli[i].sar.reg, child->sar); /* DAR */ - #ifdef CONFIG_64BIT - SET_LL_64(&lli[i].dar.reg, child->dar); - #else /* CONFIG_64BIT */ - SET_LL_32(&lli[i].dar.lsb, lower_32_bits(child->dar)); - SET_LL_32(&lli[i].dar.msb, upper_32_bits(child->dar)); - #endif /* CONFIG_64BIT */ + SET_LL_64(&lli[i].dar.reg, child->dar); + i++; } @@ -349,12 +335,7 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) /* Channel control */ SET_LL_32(&llp->control, control); /* Linked list */ - #ifdef CONFIG_64BIT - SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr); - #else /* CONFIG_64BIT */ - SET_LL_32(&llp->llp.lsb, lower_32_bits(chunk->ll_region.paddr)); - SET_LL_32(&llp->llp.msb, upper_32_bits(chunk->ll_region.paddr)); - #endif /* CONFIG_64BIT */ + SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr); } void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) From patchwork Mon Aug 22 18:53:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951150 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8235C32772 for ; Mon, 22 Aug 2022 18:55:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236102AbiHVSzP (ORCPT ); Mon, 22 Aug 2022 14:55:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38118 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237888AbiHVSyZ (ORCPT ); Mon, 22 Aug 2022 14:54:25 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E506E3B966; Mon, 22 Aug 2022 11:54:06 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id A3CBADA6; Mon, 22 Aug 2022 21:57:16 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com A3CBADA6 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194636; bh=n7/eCBY+3otMjcR2vj/1GqFEVzdZEejjsBhPqaiBFow=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=BcxB6oUHv8Yeap19sVZMaI/5SoB11yF75FcRQoRf+LY/CZBNlS4E1OkynRJwgmzUA bpcqqHdv008xsYe+e7kcrNT6GiQOc0u6ZZqo/wG4WTQwTEmcYUmcMKn7fHo2F9pstD et6BijAPiBhCL3AconN9U9xEddNO+TB7G61jw1Fs= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:54:02 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 20/24] dmaengine: dw-edma: Drop DT-region allocation Date: Mon, 22 Aug 2022 21:53:28 +0300 Message-ID: <20220822185332.26149-21-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org There is no point in allocating an additional memory for the data target regions passed then to the client drivers. 
Just use the already available structures defined in the dw_edma_chip instance. Note these regions are unused in normal circumstances since they are specific to the case of eDMA being embedded into the DW PCIe End-point and having it's CSRs accessible over a End-point' BAR. This case is only known to be implemented as a part of the Synopsys PCIe EndPoint IP prototype kit. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 21 ++++----------------- 1 file changed, 4 insertions(+), 17 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 7ba3b60c960c..98a94a66fb82 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -744,7 +744,6 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan) static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) { struct dw_edma_chip *chip = dw->chip; - struct dw_edma_region *dt_region; struct device *dev = chip->dev; struct dw_edma_chan *chan; struct dw_edma_irq *irq; @@ -760,12 +759,6 @@ static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) for (i = 0; i < ch_cnt; i++) { chan = &dw->chan[i]; - dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL); - if (!dt_region) - return -ENOMEM; - - chan->vc.chan.private = dt_region; - chan->dw = dw; if (i < dw->wr_ch_cnt) { @@ -813,17 +806,11 @@ static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) chan->msi.data); chan->vc.desc_free = vchan_free_desc; - vchan_init(&chan->vc, dma); + chan->vc.chan.private = chan->dir == EDMA_DIR_WRITE ? + &dw->chip->dt_region_wr[chan->id] : + &dw->chip->dt_region_rd[chan->id]; - if (chan->dir == EDMA_DIR_WRITE) { - dt_region->paddr = chip->dt_region_wr[chan->id].paddr; - dt_region->vaddr = chip->dt_region_wr[chan->id].vaddr; - dt_region->sz = chip->dt_region_wr[chan->id].sz; - } else { - dt_region->paddr = chip->dt_region_rd[chan->id].paddr; - dt_region->vaddr = chip->dt_region_rd[chan->id].vaddr; - dt_region->sz = chip->dt_region_rd[chan->id].sz; - } + vchan_init(&chan->vc, dma); dw_edma_v0_core_device_config(chan); } From patchwork Mon Aug 22 18:53:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951148 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DCC6C3F6B0 for ; Mon, 22 Aug 2022 18:55:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236707AbiHVSzN (ORCPT ); Mon, 22 Aug 2022 14:55:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36598 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237885AbiHVSyZ (ORCPT ); Mon, 22 Aug 2022 14:54:25 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 9BA6940E15; Mon, 22 Aug 2022 11:54:07 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id A392CDA3; Mon, 22 Aug 2022 21:57:17 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com A392CDA3 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=baikalelectronics.ru; s=mail; t=1661194637; bh=HekYV1lBKlai2OBRiEnPOFW81XlcyqdBlYrBwA4qXHk=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=EdcXB0yxe1dc+2X8lGuy1UiX8LS7G17Dv7nJBFeHvezaZYmzir6dDEQF6I5pakPPE DM9/vJn9bkpc7PZay4MkOaJ7tXQmtvkEW+3SsibzrmqX/YBeTga5biURZPnKot5Hgq RLIWUgQTXmGYwwfAB8KJ2ENH7d0EicUwVEEgzubQ= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:54:03 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 21/24] dmaengine: dw-edma: Replace chip ID number with device name Date: Mon, 22 Aug 2022 21:53:29 +0300 Message-ID: <20220822185332.26149-22-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Using some abstract number as the DW eDMA chip identifier isn't really practical. First of all there can be more than one DW eDMA controller on the platform some of them can be detected as the PCIe end-points, some of them can be embedded into the DW PCIe Root Port/End-point controllers. Seeing some abstract number in for instance IRQ handlers list doesn't give a notion regarding their reference to the particular DMA controller. Secondly current DW eDMA chip id implementation doesn't provide the multi-eDMA platforms support for same reason of possibly having eDMA detected on different system buses. At the same time re-implementing something ida-based won't give much benefits especially seeing the DW eDMA chip ID is only used in the IRQ request procedure. So to speak in order to preserve the code simplicity and get to have the multi-eDMA platforms support let's just use the parental device name to create the DW eDMA controller name. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Changelog v2: - Slightly extend the eDMA name array. 
(@Manivannan) --- drivers/dma/dw-edma/dw-edma-core.c | 3 ++- drivers/dma/dw-edma/dw-edma-core.h | 2 +- drivers/dma/dw-edma/dw-edma-pcie.c | 1 - include/linux/dma/edma.h | 1 - 4 files changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 98a94a66fb82..6a8282eaebaf 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -979,7 +979,8 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (!dw->chan) return -ENOMEM; - snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%d", chip->id); + snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%s", + dev_name(chip->dev)); /* Disable eDMA, only to establish the ideal initial conditions */ dw_edma_v0_core_off(dw); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index e3ad3e372b55..0ab2b6dba880 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -96,7 +96,7 @@ struct dw_edma_irq { }; struct dw_edma { - char name[20]; + char name[32]; struct dma_device dma; diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index f530bacfd716..3f9dadc73854 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -222,7 +222,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, /* Data structure initialization */ chip->dev = dev; - chip->id = pdev->devfn; chip->mf = vsec_data.mf; chip->nr_irqs = nr_irqs; diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h index 380a0a3e251f..9d44da4aa59d 100644 --- a/include/linux/dma/edma.h +++ b/include/linux/dma/edma.h @@ -76,7 +76,6 @@ enum dw_edma_chip_flags { */ struct dw_edma_chip { struct device *dev; - int id; int nr_irqs; const struct dw_edma_core_ops *ops; u32 flags; From patchwork Mon Aug 22 18:53:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951151 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BDE62C32789 for ; Mon, 22 Aug 2022 18:55:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238045AbiHVSzQ (ORCPT ); Mon, 22 Aug 2022 14:55:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37558 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237840AbiHVSyZ (ORCPT ); Mon, 22 Aug 2022 14:54:25 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 60A0A40E34; Mon, 22 Aug 2022 11:54:08 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 5A301DA7; Mon, 22 Aug 2022 21:57:18 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 5A301DA7 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194638; bh=fnw6YfcisLG+r+b0P4r7zw/J8l1QC0hxTccFKPF3fXM=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=bXkDAz5zp1/wCkzRD3SAfCdNtBCSrVDS1s7XwG4aAH+/EGLWXzwzHo3L2+8WGTsZ2 GUcDhegWHU3yhUnsoVFe4yuaerCr9K7Mg0KhD1tq8JMjp7rTJa/zH9gcMUPCS65lcB kRkRd46WtZuUFbehIkZ8wkOqCXDR5zLN5UucnA+8= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 
15.0.1395.4; Mon, 22 Aug 2022 21:54:03 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 22/24] dmaengine: dw-edma: Bypass dma-ranges mapping for the local setup Date: Mon, 22 Aug 2022 21:53:30 +0300 Message-ID: <20220822185332.26149-23-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org DW eDMA doesn't perform any translation of the traffic generated on the CPU/Application side. It just generates read/write AXI-bus requests with the specified addresses. But in case if the dma-ranges DT-property is specified for a platform device node, Linux will use it to map the CPU memory regions into the DMAable bus ranges. This isn't what we want for the eDMA embedded into the locally accessed DW PCIe Root Port and End-point. In order to work that around let's set the chan_dma_dev flag for each DW eDMA channel thus forcing the client drivers to getting a custom dma-ranges-less parental device for the mappings. Note it will only work for the client drivers using the dmaengine_get_dma_device() method to get the parental DMA device. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Changelog v2: - Fix the comment a bit to being clearer. (@Manivannan) Changelog v3: - Conditionally set dchan->dev->device.dma_coherent field since it can be missing on some platforms. (@Manivannan) - Remove Manivannan' rb and tb tags since the patch content has been changed. --- drivers/dma/dw-edma/dw-edma-core.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 6a8282eaebaf..4f56149dc8d8 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -716,6 +716,26 @@ static int dw_edma_alloc_chan_resources(struct dma_chan *dchan) if (chan->status != EDMA_ST_IDLE) return -EBUSY; + /* Bypass the dma-ranges based memory regions mapping for the eDMA + * controlled from the CPU/Application side since in that case + * the local memory address is left untranslated. 
+ */ + if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + dchan->dev->chan_dma_dev = true; + +#if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \ + defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \ + defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL) + dchan->dev->device.dma_coherent = chan->dw->chip->dev->dma_coherent; +#endif + + dma_coerce_mask_and_coherent(&dchan->dev->device, + dma_get_mask(chan->dw->chip->dev)); + dchan->dev->device.dma_parms = chan->dw->chip->dev->dma_parms; + } else { + dchan->dev->chan_dma_dev = false; + } + pm_runtime_get(chan->dw->chip->dev); return 0; From patchwork Mon Aug 22 18:53:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951152 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A79EAC28D13 for ; Mon, 22 Aug 2022 18:55:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237919AbiHVSzQ (ORCPT ); Mon, 22 Aug 2022 14:55:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37622 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237892AbiHVSyZ (ORCPT ); Mon, 22 Aug 2022 14:54:25 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 1282641983; Mon, 22 Aug 2022 11:54:09 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 02104DA4; Mon, 22 Aug 2022 21:57:19 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 02104DA4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194639; bh=jfexZWKKgmMoKbQRWSmsZpAuHhXpqLZNAGbwd9rIFCk=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=QcKhiGPJzGZ3/98du52Qx6pt5b/V2kOS8hX8Qnr45Un1cvcfBEwSo+Jznq3wPZ2eZ 5nz79MMXJdIRqnjbwz6u9MV2mmoQ9cDMe+SkfLWaD0qIWoT0C110lLKJf69FAbje+a kFly8J8w2A13KcO+DJJSXKEZW3M1Bp/2ECi8OiJc= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:54:04 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , =?utf-8?q?Krzys?= =?utf-8?q?ztof_Wilczy=C5=84ski?= , , , Subject: [PATCH RESEND v5 23/24] dmaengine: dw-edma: Skip cleanup procedure if no private data found Date: Mon, 22 Aug 2022 21:53:31 +0300 Message-ID: <20220822185332.26149-24-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org DW eDMA driver private data is preserved in the passed DW eDMA chip info structure. If either probe procedure failed or for some reason the passed info object doesn't have private data pointer initialized we need to halt the DMA device cleanup procedure in order to prevent possible system crashes. 
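Roughly speaking, the contract this relies on is that dw_edma_probe() publishes its private data through the chip descriptor only when setup has succeeded, so a NULL pointer in the remove path means there is nothing to tear down. The sketch below is illustrative, not the literal driver code:

/* probe side: private data is attached to the chip on success */
int dw_edma_probe(struct dw_edma_chip *chip)
{
        struct dw_edma *dw;

        /* ... allocate dw, request IRQs, register the DMA device ... */

        chip->dw = dw;
        return 0;
}

/* remove side: a missing pointer means probe never completed */
int dw_edma_remove(struct dw_edma_chip *chip)
{
        struct dw_edma *dw = chip->dw;

        if (!dw)
                return -ENODEV;

        /* ... disable the controller, free IRQs, unregister ... */
        return 0;
}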
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Tested-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- drivers/dma/dw-edma/dw-edma-core.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 4f56149dc8d8..5736a537f4c8 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -1040,6 +1040,10 @@ int dw_edma_remove(struct dw_edma_chip *chip) struct dw_edma *dw = chip->dw; int i; + /* Skip removal if no private data found */ + if (!dw) + return -ENODEV; + /* Disable eDMA */ dw_edma_v0_core_off(dw); From patchwork Mon Aug 22 18:53:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 12951153 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FC2EC32772 for ; Mon, 22 Aug 2022 18:55:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236368AbiHVSzS (ORCPT ); Mon, 22 Aug 2022 14:55:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38150 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237838AbiHVSy0 (ORCPT ); Mon, 22 Aug 2022 14:54:26 -0400 Received: from mail.baikalelectronics.com (mail.baikalelectronics.com [87.245.175.230]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id DB80D2C6; Mon, 22 Aug 2022 11:54:09 -0700 (PDT) Received: from mail (mail.baikal.int [192.168.51.25]) by mail.baikalelectronics.com (Postfix) with ESMTP id 3B89ADA2; Mon, 22 Aug 2022 21:57:20 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.com 3B89ADA2 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1661194640; bh=BilMhAh0rAW9kSZE+42/3Jw9wWOQJlTnHy2Kt5fSL+w=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=RhdWx3oa2emn27zIBJBAkRP3v3+/BdqphuabM2rCY60eRoDWyWoT6T1yF8uYukoHS aaxfUiSpyICdwPaK0dd+AjLqLNVTuVTzwG5N/cCNSEosGiLxo4XWi+DvRi/F0jiWv/ y7eiZEdqvMst7Xzb4DWraKkAvbaPOoKnK38CKhv8= Received: from localhost (192.168.168.10) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Mon, 22 Aug 2022 21:54:05 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Rob Herring , Bjorn Helgaas , Lorenzo Pieralisi , Jingoo Han , Frank Li , Manivannan Sadhasivam , Lorenzo Pieralisi , =?utf-8?q?Krzysztof_Wilczy=C5=84?= =?utf-8?q?ski?= CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , , , Subject: [PATCH RESEND v5 24/24] PCI: dwc: Add DW eDMA engine support Date: Mon, 22 Aug 2022 21:53:32 +0300 Message-ID: <20220822185332.26149-25-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Since the DW eDMA driver now supports eDMA controllers embedded into the locally accessible DW PCIe Root Ports and Endpoints, we can use the updated interface to register DW eDMA as DMA engine device if it's available. In order to successfully do that the DW PCIe core driver need to perform some preparations first. 
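For context, once the engine is registered a consumer obtains a channel through the regular dmaengine API and, per the dma-ranges bypass patch earlier in this series, maps its buffers against dmaengine_get_dma_device() rather than against its own device. A hypothetical client sketch (not part of this patch; foo_* names and the transfer policy are assumptions):

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

static int foo_map_for_edma(struct dma_chan *chan, void *buf, size_t len,
                            dma_addr_t *addr)
{
        /* Use the channel's mapping device, not the client's own device */
        struct device *dma_dev = dmaengine_get_dma_device(chan);

        *addr = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dma_dev, *addr))
                return -ENOMEM;

        /* ... dmaengine_slave_config(), prep and submit a descriptor ... */

        return 0;
}

The preparations required to get the engine registered in the first place are what the rest of this patch is about.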
First of all it needs to find out the eDMA controller CSRs base address, whether they are accessible over the Port Logic or iATU unrolled space. Afterwards it can try to auto-detect the eDMA controller availability and number of it's read/write channels. If none was found the procedure will just silently halt with no error returned. Secondly the platform is supposed to provide either combined or per-channel IRQ signals. If no valid IRQs set is found the procedure will also halt with no error returned so to be backward compatible with the platforms where DW PCIe controllers have eDMA embedded but lack of the IRQs defined for them. Finally before actually probing the eDMA device we need to allocate LLP items buffers. After that the DW eDMA can be registered. If registration is successful the info-message regarding the number of detected Read/Write eDMA channels will be printed to the system log in the similar way as it's done for the iATU settings. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam Acked-By: Vinod Koul --- Changelog v2: - Don't fail eDMA detection procedure if the DW eDMA driver couldn't probe device. That happens if the driver is disabled. (@Manivannan) - Add "dma" registers resource mapping procedure. (@Manivannan) - Move the eDMA CSRs space detection into the dw_pcie_map_detect() method. - Remove eDMA on the dw_pcie_ep_init() internal errors. (@Manivannan) - Remove eDMA in the dw_pcie_ep_exit() method. - Move the dw_pcie_edma_detect() method execution to the tail of the dw_pcie_ep_init() function. Changelog v3: - Add more comprehensive and less regression prune eDMA block detection procedure. - Remove Manivannan tb tag since the patch content has been changed. --- .../pci/controller/dwc/pcie-designware-ep.c | 12 +- .../pci/controller/dwc/pcie-designware-host.c | 13 +- drivers/pci/controller/dwc/pcie-designware.c | 186 ++++++++++++++++++ drivers/pci/controller/dwc/pcie-designware.h | 20 ++ 4 files changed, 228 insertions(+), 3 deletions(-) diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c index 80a64b63c055..0fe83f08e0d6 100644 --- a/drivers/pci/controller/dwc/pcie-designware-ep.c +++ b/drivers/pci/controller/dwc/pcie-designware-ep.c @@ -612,8 +612,11 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no, void dw_pcie_ep_exit(struct dw_pcie_ep *ep) { + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); struct pci_epc *epc = ep->epc; + dw_pcie_edma_remove(pci); + pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, epc->mem->window.page_size); @@ -767,6 +770,10 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) goto err_exit_epc_mem; } + ret = dw_pcie_edma_detect(pci); + if (ret) + goto err_free_epc_mem; + if (ep->ops->get_features) { epc_features = ep->ops->get_features(ep); if (epc_features->core_init_notifier) @@ -775,10 +782,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) ret = dw_pcie_ep_init_complete(ep); if (ret) - goto err_free_epc_mem; + goto err_remove_edma; return 0; +err_remove_edma: + dw_pcie_edma_remove(pci); + err_free_epc_mem: pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, epc->mem->window.page_size); diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index 35da6ec41405..70dab96b1f66 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -481,14 +481,18 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp) dw_pcie_iatu_detect(pci); - ret = 
dw_pcie_setup_rc(pp); + ret = dw_pcie_edma_detect(pci); if (ret) goto err_free_msi; + ret = dw_pcie_setup_rc(pp); + if (ret) + goto err_remove_edma; + if (!dw_pcie_link_up(pci)) { ret = dw_pcie_start_link(pci); if (ret) - goto err_free_msi; + goto err_remove_edma; } /* Ignore errors, the link may come up later */ @@ -505,6 +509,9 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp) err_stop_link: dw_pcie_stop_link(pci); +err_remove_edma: + dw_pcie_edma_remove(pci); + err_free_msi: if (pp->has_msi_ctrl) dw_pcie_free_msi(pp); @@ -526,6 +533,8 @@ void dw_pcie_host_deinit(struct dw_pcie_rp *pp) dw_pcie_stop_link(pci); + dw_pcie_edma_remove(pci); + if (pp->has_msi_ctrl) dw_pcie_free_msi(pp); diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c index 1e06ccf2dc9e..3dcbdaf980e4 100644 --- a/drivers/pci/controller/dwc/pcie-designware.c +++ b/drivers/pci/controller/dwc/pcie-designware.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -142,6 +143,18 @@ int dw_pcie_get_resources(struct dw_pcie *pci) if (!pci->atu_size) pci->atu_size = SZ_4K; + /* eDMA region can be mapped to a custom base address */ + if (!pci->edma.reg_base) { + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dma"); + if (res) { + pci->edma.reg_base = devm_ioremap_resource(pci->dev, res); + if (IS_ERR(pci->edma.reg_base)) + return PTR_ERR(pci->edma.reg_base); + } else if (pci->atu_size >= 2 * DEFAULT_DBI_DMA_OFFSET) { + pci->edma.reg_base = pci->atu_base + DEFAULT_DBI_DMA_OFFSET; + } + } + /* LLDD is supposed to manually switch the clocks and resets state */ if (dw_pcie_cap_is(pci, REQ_RES)) { ret = dw_pcie_get_clocks(pci); @@ -782,6 +795,179 @@ void dw_pcie_iatu_detect(struct dw_pcie *pci) pci->region_align / SZ_1K, (pci->region_limit + 1) / SZ_1G); } +static u32 dw_pcie_readl_dma(struct dw_pcie *pci, u32 reg) +{ + u32 val = 0; + int ret; + + if (pci->ops && pci->ops->read_dbi) + return pci->ops->read_dbi(pci, pci->edma.reg_base, reg, 4); + + ret = dw_pcie_read(pci->edma.reg_base + reg, 4, &val); + if (ret) + dev_err(pci->dev, "Read DMA address failed\n"); + + return val; +} + +static int dw_pcie_edma_irq_vector(struct device *dev, unsigned int nr) +{ + struct platform_device *pdev = to_platform_device(dev); + char name[6]; + int ret; + + if (nr >= EDMA_MAX_WR_CH + EDMA_MAX_RD_CH) + return -EINVAL; + + ret = platform_get_irq_byname_optional(pdev, "dma"); + if (ret > 0) + return ret; + + snprintf(name, sizeof(name), "dma%u", nr); + + return platform_get_irq_byname_optional(pdev, name); +} + +static struct dw_edma_core_ops dw_pcie_edma_ops = { + .irq_vector = dw_pcie_edma_irq_vector, +}; + +static int dw_pcie_edma_find_chip(struct dw_pcie *pci) +{ + u32 val; + + val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL); + if (val == 0xFFFFFFFF && pci->edma.reg_base) { + pci->edma.mf = EDMA_MF_EDMA_UNROLL; + + val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL); + } else if (val != 0xFFFFFFFF) { + pci->edma.mf = EDMA_MF_EDMA_LEGACY; + + pci->edma.reg_base = pci->dbi_base + PCIE_DMA_VIEWPORT_BASE; + } else { + return -ENODEV; + } + + pci->edma.dev = pci->dev; + + if (!pci->edma.ops) + pci->edma.ops = &dw_pcie_edma_ops; + + pci->edma.flags |= DW_EDMA_CHIP_LOCAL; + + pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val); + pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val); + + /* Sanity check the channels count if the mapping was incorrect */ + if (!pci->edma.ll_wr_cnt || pci->edma.ll_wr_cnt > EDMA_MAX_WR_CH || + 
!pci->edma.ll_rd_cnt || pci->edma.ll_rd_cnt > EDMA_MAX_RD_CH) + return -EINVAL; + + return 0; +} + +static int dw_pcie_edma_irq_verify(struct dw_pcie *pci) +{ + struct platform_device *pdev = to_platform_device(pci->dev); + u16 ch_cnt = pci->edma.ll_wr_cnt + pci->edma.ll_rd_cnt; + char name[6]; + int ret; + + if (pci->edma.nr_irqs == 1) + return 0; + else if (pci->edma.nr_irqs > 1) + return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0; + + ret = platform_get_irq_byname_optional(pdev, "dma"); + if (ret > 0) { + pci->edma.nr_irqs = 1; + return 0; + } + + for (; pci->edma.nr_irqs < ch_cnt; pci->edma.nr_irqs++) { + snprintf(name, sizeof(name), "dma%d", pci->edma.nr_irqs); + + ret = platform_get_irq_byname_optional(pdev, name); + if (ret <= 0) + return -EINVAL; + } + + return 0; +} + +static int dw_pcie_edma_ll_alloc(struct dw_pcie *pci) +{ + struct dw_edma_region *ll; + dma_addr_t paddr; + int i; + + for (i = 0; i < pci->edma.ll_wr_cnt; i++) { + ll = &pci->edma.ll_region_wr[i]; + ll->sz = DMA_LLP_MEM_SIZE; + ll->vaddr = dmam_alloc_coherent(pci->dev, ll->sz, + &paddr, GFP_KERNEL); + if (!ll->vaddr) + return -ENOMEM; + + ll->paddr = paddr; + } + + for (i = 0; i < pci->edma.ll_rd_cnt; i++) { + ll = &pci->edma.ll_region_rd[i]; + ll->sz = DMA_LLP_MEM_SIZE; + ll->vaddr = dmam_alloc_coherent(pci->dev, ll->sz, + &paddr, GFP_KERNEL); + if (!ll->vaddr) + return -ENOMEM; + + ll->paddr = paddr; + } + + return 0; +} + +int dw_pcie_edma_detect(struct dw_pcie *pci) +{ + int ret; + + /* Don't fail if no eDMA was found (for the backward compatibility) */ + ret = dw_pcie_edma_find_chip(pci); + if (ret) + return 0; + + /* Don't fail on the IRQs verification (for the backward compatibility) */ + ret = dw_pcie_edma_irq_verify(pci); + if (ret) { + dev_err(pci->dev, "Invalid eDMA IRQs found\n"); + return 0; + } + + ret = dw_pcie_edma_ll_alloc(pci); + if (ret) { + dev_err(pci->dev, "Couldn't allocate LLP memory\n"); + return ret; + } + + /* Don't fail if the DW eDMA driver can't find the device */ + ret = dw_edma_probe(&pci->edma); + if (ret && ret != -ENODEV) { + dev_err(pci->dev, "Couldn't register eDMA device\n"); + return ret; + } + + dev_info(pci->dev, "eDMA: unroll %s, %hu wr, %hu rd\n", + pci->edma.mf == EDMA_MF_EDMA_UNROLL ? "T" : "F", + pci->edma.ll_wr_cnt, pci->edma.ll_rd_cnt); + + return 0; +} + +void dw_pcie_edma_remove(struct dw_pcie *pci) +{ + dw_edma_remove(&pci->edma); +} + void dw_pcie_setup(struct dw_pcie *pci) { u32 val; diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h index f5530069a204..d11c246b9bc1 100644 --- a/drivers/pci/controller/dwc/pcie-designware.h +++ b/drivers/pci/controller/dwc/pcie-designware.h @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -167,6 +168,18 @@ #define PCIE_MSIX_DOORBELL 0x948 #define PCIE_MSIX_DOORBELL_PF_SHIFT 24 +/* + * eDMA CSRs. DW PCIe IP-core v4.70a and older had the eDMA registers accessible + * over the Port Logic registers space. Afterwords the unrolled mapping was + * introduced so eDMA and iATU could be accessed via a dedicated registers + * space. + */ +#define PCIE_DMA_VIEWPORT_BASE 0x970 +#define PCIE_DMA_UNROLL_BASE 0x80000 +#define PCIE_DMA_CTRL 0x008 +#define PCIE_DMA_NUM_WR_CHAN GENMASK(3, 0) +#define PCIE_DMA_NUM_RD_CHAN GENMASK(19, 16) + #define PCIE_PL_CHK_REG_CONTROL_STATUS 0xB20 #define PCIE_PL_CHK_REG_CHK_REG_START BIT(0) #define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS BIT(1) @@ -215,6 +228,7 @@ * this offset, if atu_base not set. 
*/ #define DEFAULT_DBI_ATU_OFFSET (0x3 << 20) +#define DEFAULT_DBI_DMA_OFFSET PCIE_DMA_UNROLL_BASE #define MAX_MSI_IRQS 256 #define MAX_MSI_IRQS_PER_CTRL 32 @@ -226,6 +240,9 @@ #define MAX_IATU_IN 256 #define MAX_IATU_OUT 256 +/* Default eDMA LLP memory size */ +#define DMA_LLP_MEM_SIZE PAGE_SIZE + struct dw_pcie; struct dw_pcie_rp; struct dw_pcie_ep; @@ -370,6 +387,7 @@ struct dw_pcie { int num_lanes; int link_gen; u8 n_fts[2]; + struct dw_edma_chip edma; struct clk_bulk_data app_clks[DW_PCIE_NUM_APP_CLKS]; struct clk_bulk_data core_clks[DW_PCIE_NUM_CORE_CLKS]; struct reset_control_bulk_data app_rsts[DW_PCIE_NUM_APP_RSTS]; @@ -409,6 +427,8 @@ int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index); void dw_pcie_setup(struct dw_pcie *pci); void dw_pcie_iatu_detect(struct dw_pcie *pci); +int dw_pcie_edma_detect(struct dw_pcie *pci); +void dw_pcie_edma_remove(struct dw_pcie *pci); static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) {