From patchwork Wed Aug 7 10:19:30 2013
X-Patchwork-Submitter: Daniel Mack
X-Patchwork-Id: 2840272
From: Daniel Mack
To: vinod.koul@intel.com, djbw@fb.com
Cc: wangx@marvell.com, eric.y.miao@gmail.com, nico@fluxnic.net,
    haojian.zhuang@gmail.com, Daniel Mack, cxie4@marvell.com,
    ezequiel.garcia@free-electrons.com, linux@arm.linux.org.uk,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 12/12] dma: mmp_pdma: add support for cyclic DMA descriptors
Date: Wed, 7 Aug 2013 12:19:30 +0200
Message-Id: <1375870770-14263-13-git-send-email-zonque@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1375870770-14263-1-git-send-email-zonque@gmail.com>
References: <1375870770-14263-1-git-send-email-zonque@gmail.com>

Provide a callback to prepare cyclic DMA transfers. This is needed,
for instance, for audio channel transport.
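For context, a dmaengine client such as an audio driver would reach the
new callback through the generic dmaengine API roughly as follows. This
is a minimal sketch, not part of the patch itself: start_cyclic_rx and
period_elapsed_cb are hypothetical names, and channel allocation,
dma_slave_config and buffer mapping are assumed to happen elsewhere.

#include <linux/dmaengine.h>

static void period_elapsed_cb(void *param)
{
        /* invoked from the driver's tasklet once per elapsed period */
}

static int start_cyclic_rx(struct dma_chan *chan, dma_addr_t buf,
                           size_t buf_len, size_t period_len)
{
        struct dma_async_tx_descriptor *desc;

        /* mmp_pdma requires buf_len to be a multiple of period_len,
         * and period_len must not exceed PDMA_MAX_DESC_BYTES */
        desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
                                         DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
        if (!desc)
                return -EINVAL;

        desc->callback = period_elapsed_cb;
        desc->callback_param = NULL;

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);

        return 0;
}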
Signed-off-by: Daniel Mack
---
 drivers/dma/mmp_pdma.c | 116 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 0c4a2d5..e95a685 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -97,6 +97,8 @@ struct mmp_pdma_chan {
         struct dma_async_tx_descriptor desc;
         struct mmp_pdma_phy *phy;
         enum dma_transfer_direction dir;
+        struct mmp_pdma_desc_sw *cyclic_first;  /* first desc_sw if channel
+                                                 * is in cyclic mode */
         /*
          * We memorize the original start address of the first descriptor as
          * well as the original total length so we can later determine the
@@ -530,6 +532,8 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
         new->desc.ddadr = DDADR_STOP;
         new->desc.dcmd |= DCMD_ENDIRQEN;
 
+        chan->cyclic_first = NULL;
+
         return &first->async_tx;
 
 fail:
@@ -611,6 +615,96 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
         new->desc.ddadr = DDADR_STOP;
         new->desc.dcmd |= DCMD_ENDIRQEN;
 
+        chan->dir = dir;
+        chan->cyclic_first = NULL;
+
+        return &first->async_tx;
+
+fail:
+        if (first)
+                mmp_pdma_free_desc_list(chan, &first->tx_list);
+        return NULL;
+}
+
+static struct dma_async_tx_descriptor *mmp_pdma_prep_dma_cyclic(
+        struct dma_chan *dchan, dma_addr_t buf_addr, size_t len,
+        size_t period_len, enum dma_transfer_direction direction,
+        unsigned long flags, void *context)
+{
+        struct mmp_pdma_chan *chan;
+        struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;
+        dma_addr_t dma_src, dma_dst;
+
+        if (!dchan || !len || !period_len)
+                return NULL;
+
+        /* the buffer length must be a multiple of period_len */
+        if (len % period_len != 0)
+                return NULL;
+
+        if (period_len > PDMA_MAX_DESC_BYTES)
+                return NULL;
+
+        chan = to_mmp_pdma_chan(dchan);
+
+        switch (direction) {
+        case DMA_MEM_TO_DEV:
+                dma_src = buf_addr;
+                dma_dst = chan->dev_addr;
+                break;
+        case DMA_DEV_TO_MEM:
+                dma_dst = buf_addr;
+                dma_src = chan->dev_addr;
+                break;
+        default:
+                dev_err(chan->dev, "Unsupported direction for cyclic DMA\n");
+                return NULL;
+        }
+
+        chan->start_addr = buf_addr;
+        chan->total_len = len;
+        chan->dir = direction;
+
+        do {
+                /* Allocate the link descriptor from DMA pool */
+                new = mmp_pdma_alloc_descriptor(chan);
+                if (!new) {
+                        dev_err(chan->dev, "no memory for desc\n");
+                        goto fail;
+                }
+
+                new->desc.dcmd = chan->dcmd | DCMD_ENDIRQEN |
+                                 (DCMD_LENGTH & period_len);
+                new->desc.dsadr = dma_src;
+                new->desc.dtadr = dma_dst;
+
+                if (!first)
+                        first = new;
+                else
+                        prev->desc.ddadr = new->async_tx.phys;
+
+                new->async_tx.cookie = 0;
+                async_tx_ack(&new->async_tx);
+
+                prev = new;
+                len -= period_len;
+
+                if (chan->dir == DMA_MEM_TO_DEV)
+                        dma_src += period_len;
+                else
+                        dma_dst += period_len;
+
+                /* Insert the link descriptor to the LD ring */
+                list_add_tail(&new->node, &first->tx_list);
+        } while (len);
+
+        first->async_tx.flags = flags; /* client is in control of this ack */
+        first->async_tx.cookie = -EBUSY;
+
+        /* make the cyclic link */
+        new->desc.ddadr = first->async_tx.phys;
+        chan->cyclic_first = first;
+
         return &first->async_tx;
 
 fail:
@@ -746,8 +840,25 @@ static void dma_do_tasklet(unsigned long data)
         LIST_HEAD(chain_cleanup);
         unsigned long flags;
 
-        /* submit pending list; callback for each desc; free desc */
+        if (chan->cyclic_first) {
+                dma_async_tx_callback cb = NULL;
+                void *cb_data = NULL;
+
+                spin_lock_irqsave(&chan->desc_lock, flags);
+                desc = chan->cyclic_first;
+                cb = desc->async_tx.callback;
+                cb_data = desc->async_tx.callback_param;
+                spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+                start_pending_queue(chan);
+                if (cb)
+                        cb(cb_data);
+
+                return;
+        }
+
+        /* submit pending list; callback for each desc; free desc */
         spin_lock_irqsave(&chan->desc_lock, flags);
 
         /* update the cookie if we have some descriptors to cleanup */
@@ -780,6 +891,7 @@ static void dma_do_tasklet(unsigned long data)
                 /* Remove from the list of transactions */
                 list_del(&desc->node);
 
+                /* Run the link descriptor callback function */
                 if (txd->callback)
                         txd->callback(txd->callback_param);
 
@@ -908,12 +1020,14 @@ static int mmp_pdma_probe(struct platform_device *op)
         dma_cap_set(DMA_SLAVE, pdev->device.cap_mask);
         dma_cap_set(DMA_MEMCPY, pdev->device.cap_mask);
+        dma_cap_set(DMA_CYCLIC, pdev->device.cap_mask);
         pdev->device.dev = &op->dev;
         pdev->device.device_alloc_chan_resources = mmp_pdma_alloc_chan_resources;
         pdev->device.device_free_chan_resources = mmp_pdma_free_chan_resources;
         pdev->device.device_tx_status = mmp_pdma_tx_status;
         pdev->device.device_prep_dma_memcpy = mmp_pdma_prep_memcpy;
         pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
+        pdev->device.device_prep_dma_cyclic = mmp_pdma_prep_dma_cyclic;
         pdev->device.device_issue_pending = mmp_pdma_issue_pending;
         pdev->device.device_control = mmp_pdma_control;
         pdev->device.copy_align = PDMA_ALIGNMENT;
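
As a closing illustration of what the prep callback builds: the
descriptors form a ring, with each ddadr pointing at the next period's
descriptor and the last one pointing back at the first, so the
controller never runs off the end of the chain. The userspace-only toy
below (illustrative, no kernel types; struct toy_desc is made up for
this sketch) demonstrates that invariant:

#include <stdio.h>
#include <stdlib.h>

/* toy stand-in for one hardware descriptor; illustrative only */
struct toy_desc {
        unsigned int dsadr;             /* per-period source address */
        struct toy_desc *ddadr;         /* link to the next descriptor */
};

int main(void)
{
        const unsigned int buf = 0x1000, len = 4096, period = 1024;
        struct toy_desc *first = NULL, *prev = NULL, *new = NULL, *d;
        unsigned int off;
        int i;

        /* one descriptor per period, linked the same way the driver
         * links them: first stays at the head, prev->ddadr chains on */
        for (off = 0; off < len; off += period) {
                new = malloc(sizeof(*new));
                if (!new)
                        return 1;
                new->dsadr = buf + off;
                if (!first)
                        first = new;
                else
                        prev->ddadr = new;
                prev = new;
        }
        new->ddadr = first;             /* the cyclic link */

        /* walk two full laps to show the chain never terminates */
        for (i = 0, d = first; i < 8; i++, d = d->ddadr)
                printf("step %d -> dsadr 0x%x\n", i, d->dsadr);

        return 0;
}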