From patchwork Tue Mar 15 17:23:11 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Appana Durga Kedareswara rao
X-Patchwork-Id: 8590811
From: Kedareswara rao Appana
Subject: [PATCH 6/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
Date: Tue, 15 Mar 2016 22:53:11 +0530
Message-ID: <1458062592-27981-7-git-send-email-appanad@xilinx.com>
X-Mailer: git-send-email 2.1.2
In-Reply-To: <1458062592-27981-1-git-send-email-appanad@xilinx.com>
References: <1458062592-27981-1-git-send-email-appanad@xilinx.com>
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

This patch adds support for the AXI Central Direct Memory Access
(AXI CDMA) core, which is a soft Xilinx IP core that provides
high-bandwidth Direct Memory Access (DMA) between a memory-mapped
source address and a memory-mapped destination address.

Signed-off-by: Kedareswara rao Appana
---
 drivers/dma/xilinx/xilinx_vdma.c | 173 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 169 insertions(+), 4 deletions(-)
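
For context (illustrative only, not part of this patch): the CDMA channel is
exposed through the generic dmaengine DMA_MEMCPY capability rather than the
slave API, so a client drives it through the usual memcpy-offload calls. The
sketch below is a minimal, hypothetical client; the function name and the
busy-wait completion are illustrative (a real client would normally complete
via a callback). dmaengine_prep_dma_memcpy() resolves to the
device_prep_dma_memcpy hook, which this patch points at
xilinx_cdma_prep_memcpy(), and dma_async_issue_pending() ends up in
xilinx_cdma_issue_pending().

    #include <linux/dmaengine.h>
    #include <linux/dma-mapping.h>

    /*
     * Illustrative client sketch, not part of this patch: copy @len bytes
     * from @src to @dst (both already DMA-mapped) through any channel
     * that advertises DMA_MEMCPY, such as a CDMA channel.
     */
    static int example_cdma_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
    {
        struct dma_async_tx_descriptor *tx;
        struct dma_chan *chan;
        dma_cookie_t cookie;
        dma_cap_mask_t mask;
        int ret = 0;

        /* Ask the dmaengine core for any channel with memcpy capability */
        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
            return -ENODEV;

        /* Resolves to xilinx_cdma_prep_memcpy() on a CDMA channel */
        tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx) {
            ret = -ENOMEM;
            goto out;
        }

        cookie = dmaengine_submit(tx);
        if (dma_submit_error(cookie)) {
            ret = -EIO;
            goto out;
        }

        /* Kicks xilinx_cdma_issue_pending() -> xilinx_cdma_start_transfer() */
        dma_async_issue_pending(chan);

        /* Busy-wait for demonstration; real clients use a completion callback */
        if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
            ret = -EIO;

    out:
        dma_release_channel(chan);
        return ret;
    }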
diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index 87525a9..e6caf79 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -22,6 +22,10 @@
  * channels, one is to transmit data from memory to a device and another is
  * to receive from a device.
  *
+ * The AXI CDMA is a soft IP which provides high-bandwidth Direct Memory
+ * Access (DMA) between a memory-mapped source address and a memory-mapped
+ * destination address.
+ *
  * This program is free software: you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
  * the Free Software Foundation, either version 2 of the License, or
@@ -147,6 +151,7 @@
 
 #define AXIVDMA_SUPPORT		BIT(0)
 #define AXIDMA_SUPPORT		BIT(1)
+#define AXICDMA_SUPPORT		BIT(2)
 
 /* AXI DMA Specific Registers/Offsets */
 #define XILINX_DMA_REG_SRCDSTADDR	0x18
@@ -161,6 +166,9 @@
 #define XILINX_DMA_COALESCE_MAX		255
 #define XILINX_DMA_NUM_APP_WORDS	5
 
+/* AXI CDMA Specific Masks */
+#define XILINX_CDMA_CR_SGMODE		BIT(3)
+
 /**
  * struct xilinx_vdma_desc_hw - Hardware Descriptor
  * @next_desc: Next Descriptor Pointer @0x00
@@ -552,6 +560,12 @@ static int xilinx_vdma_alloc_chan_resources(struct dma_chan *dchan)
 	/* Enable interrupts */
 	vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
		      XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
+
+	if ((chan->xdev->quirks & AXICDMA_SUPPORT) && chan->has_sg) {
+		vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
+			      XILINX_CDMA_CR_SGMODE);
+	}
+
 	return 0;
 }
 
@@ -674,6 +688,81 @@ static void xilinx_vdma_start(struct xilinx_vdma_chan *chan)
 }
 
 /**
+ * xilinx_cdma_start_transfer - Starts CDMA transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_cdma_start_transfer(struct xilinx_vdma_chan *chan)
+{
+	struct xilinx_vdma_tx_descriptor *head_desc, *tail_desc;
+	struct xilinx_vdma_tx_segment *tail_segment;
+	u32 ctrl_reg = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
+
+	if (chan->err)
+		return;
+
+	if (list_empty(&chan->pending_list))
+		return;
+
+	head_desc = list_first_entry(&chan->pending_list,
+				     struct xilinx_vdma_tx_descriptor, node);
+	tail_desc = list_last_entry(&chan->pending_list,
+				    struct xilinx_vdma_tx_descriptor, node);
+	tail_segment = list_last_entry(&tail_desc->segments,
+				       struct xilinx_vdma_tx_segment, node);
+
+	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
+		ctrl_reg &= ~XILINX_DMA_CR_COALESCE_MAX;
+		ctrl_reg |= chan->desc_pendingcount <<
+			    XILINX_DMA_CR_COALESCE_SHIFT;
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, ctrl_reg);
+	}
+
+	if (chan->has_sg) {
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_CURDESC,
+				head_desc->async_tx.phys);
+
+		/* Update tail ptr register which will start the transfer */
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_TAILDESC,
+				tail_segment->phys);
+	} else {
+		/* In simple mode */
+		struct xilinx_vdma_tx_segment *segment;
+		struct xilinx_vdma_desc_hw *hw;
+
+		segment = list_first_entry(&head_desc->segments,
+					   struct xilinx_vdma_tx_segment,
+					   node);
+
+		hw = &segment->hw;
+
+		vdma_ctrl_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr);
+		vdma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
+				hw->dstaddr_vsize);
+
+		/* Start the transfer */
+		vdma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+				hw->control_stride & XILINX_DMA_MAX_TRANS_LEN);
+	}
+
+	list_splice_tail_init(&chan->pending_list, &chan->active_list);
+	chan->desc_pendingcount = 0;
+}
+
+/**
+ * xilinx_cdma_issue_pending - Issue pending transactions
+ * @dchan: DMA channel
+ */
+static void xilinx_cdma_issue_pending(struct dma_chan *dchan)
+{
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	xilinx_cdma_start_transfer(chan);
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
  * xilinx_vdma_start_transfer - Starts VDMA transfer
  * @chan: Driver specific channel struct pointer
  */
@@ -1285,6 +1374,69 @@ error:
 }
 
 /**
+ * xilinx_cdma_prep_memcpy - prepare descriptors for a memcpy transaction
+ * @dchan: DMA channel
+ * @dma_dst: destination address
+ * @dma_src: source address
+ * @len: transfer length
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *
+xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
+			dma_addr_t dma_src, size_t len, unsigned long flags)
+{
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_vdma_desc_hw *hw;
+	struct xilinx_vdma_tx_descriptor *desc;
+	struct xilinx_vdma_tx_segment *segment, *prev;
+
+	if (!len || len > XILINX_DMA_MAX_TRANS_LEN)
+		return NULL;
+
+	desc = xilinx_vdma_alloc_tx_descriptor(chan);
+	if (!desc)
+		return NULL;
+
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+	desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
+	async_tx_ack(&desc->async_tx);
+
+	/* Allocate the link descriptor from DMA pool */
+	segment = xilinx_vdma_alloc_tx_segment(chan);
+	if (!segment)
+		goto error;
+
+	hw = &segment->hw;
+	hw->control_stride = len;
+	hw->buf_addr = dma_src;
+	hw->dstaddr_vsize = dma_dst;
+
+	/* Fill the previous next descriptor with current */
+	prev = list_last_entry(&desc->segments,
+			       struct xilinx_vdma_tx_segment, node);
+	prev->hw.next_desc = segment->phys;
+
+	/* Insert the segment into the descriptor segments list. */
+	list_add_tail(&segment->node, &desc->segments);
+
+	prev = segment;
+
+	/* Link the last hardware descriptor with the first. */
+	segment = list_first_entry(&desc->segments,
+				   struct xilinx_vdma_tx_segment, node);
+	desc->async_tx.phys = segment->phys;
+	prev->hw.next_desc = segment->phys;
+
+	return &desc->async_tx;
+
+error:
+	xilinx_vdma_free_tx_descriptor(chan, desc);
+	return NULL;
+}
+
+/**
  * xilinx_vdma_terminate_all - Halt the channel and free descriptors
  * @chan: Driver specific VDMA Channel pointer
  */
@@ -1472,8 +1624,10 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
 
 	if (xdev->quirks & AXIVDMA_SUPPORT)
 		chan->start_transfer = xilinx_vdma_start_transfer;
-	else
+	else if (xdev->quirks & AXIDMA_SUPPORT)
 		chan->start_transfer = xilinx_dma_start_transfer;
+	else
+		chan->start_transfer = xilinx_cdma_start_transfer;
 
 	/* Request the interrupt */
 	chan->irq = irq_of_parse_and_map(node, 0);
@@ -1534,9 +1688,14 @@ static const struct xdma_platform_data xdma_def = {
 	.quirks = AXIDMA_SUPPORT,
 };
 
+static const struct xdma_platform_data xcdma_def = {
+	.quirks = AXICDMA_SUPPORT,
+};
+
 static const struct of_device_id xilinx_vdma_of_ids[] = {
 	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
 	{ .compatible = "xlnx,axi-dma-1.00.a", .data = &xdma_def},
+	{ .compatible = "xlnx,axi-cdma-1.00.a", .data = &xcdma_def},
 	{}
 };
 MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
@@ -1601,8 +1760,10 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 	xdev->common.dev = &pdev->dev;
 
 	INIT_LIST_HEAD(&xdev->common.channels);
-	dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
-	dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+	if (!(xdev->quirks & AXICDMA_SUPPORT)) {
+		dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
+		dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+	}
 
 	xdev->common.device_alloc_chan_resources =
 				xilinx_vdma_alloc_chan_resources;
@@ -1614,13 +1775,17 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 		xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
 		xdev->common.device_prep_interleaved_dma =
 				xilinx_vdma_dma_prep_interleaved;
-	} else {
+	} else if (xdev->quirks & AXIDMA_SUPPORT) {
 		xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
 		xdev->common.device_issue_pending = xilinx_dma_issue_pending;
 		xdev->common.directions = BIT(DMA_DEV_TO_MEM) |
 					  BIT(DMA_MEM_TO_DEV);
 		xdev->common.residue_granularity =
 					  DMA_RESIDUE_GRANULARITY_SEGMENT;
+	} else {
+		dma_cap_set(DMA_MEMCPY, xdev->common.cap_mask);
+		xdev->common.device_prep_dma_memcpy = xilinx_cdma_prep_memcpy;
+		xdev->common.device_issue_pending = xilinx_cdma_issue_pending;
 	}
 
 	platform_set_drvdata(pdev, xdev);
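
A note on the length guard in xilinx_cdma_prep_memcpy(): a single descriptor
cannot carry more than XILINX_DMA_MAX_TRANS_LEN bytes, so a client copying a
larger region has to split the transfer itself. A hypothetical helper built
on the sketch above (with max_btt standing in for the hardware's maximum
bytes-to-transfer) might look like:

    /*
     * Hypothetical helper, not part of this patch: split a large copy
     * into chunks the hardware accepts, since xilinx_cdma_prep_memcpy()
     * returns NULL for len > XILINX_DMA_MAX_TRANS_LEN.
     */
    static int example_cdma_memcpy_large(dma_addr_t dst, dma_addr_t src,
                                         size_t len, size_t max_btt)
    {
        while (len) {
            size_t chunk = len < max_btt ? len : max_btt;
            int ret = example_cdma_memcpy(dst, src, chunk);

            if (ret)
                return ret;

            dst += chunk;
            src += chunk;
            len -= chunk;
        }

        return 0;
    }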