From patchwork Mon May 19 17:05:58 2014
X-Patchwork-Submitter: Vasily Khoruzhick
X-Patchwork-Id: 4204641
From: Vasily Khoruzhick <anarsoul@gmail.com>
To: Andy Shevchenko, Vinod Koul, Dan Williams, dmaengine@vger.kernel.org,
	Heiko Stuebner, arm-linux
Cc: Vasily Khoruzhick
Subject: [PATCH v2 2/2] dmaengine: s3c24xx-dma: Add cyclic transfer support
Date: Mon, 19 May 2014 20:05:58 +0300
Message-Id: <1400519158-5801-2-git-send-email-anarsoul@gmail.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1400519158-5801-1-git-send-email-anarsoul@gmail.com>
References: <1400519158-5801-1-git-send-email-anarsoul@gmail.com>
Many audio interface drivers, for example the Samsung ASoC DMA driver,
require cyclic transfer support to work correctly. This patch adds
cyclic transfer support to the s3c24xx-dma driver.

Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
---
v2: make the code a bit shorter by using is_slave_direction()

 drivers/dma/s3c24xx-dma.c | 112 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 111 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/s3c24xx-dma.c b/drivers/dma/s3c24xx-dma.c
index 2167608..141e220 100644
--- a/drivers/dma/s3c24xx-dma.c
+++ b/drivers/dma/s3c24xx-dma.c
@@ -164,6 +164,7 @@ struct s3c24xx_sg {
  * @disrcc: value for source control register
  * @didstc: value for destination control register
  * @dcon: base value for dcon register
+ * @cyclic: indicate cyclic transfer
  */
 struct s3c24xx_txd {
 	struct virt_dma_desc vd;
@@ -173,6 +174,7 @@ struct s3c24xx_txd {
 	u32 disrcc;
 	u32 didstc;
 	u32 dcon;
+	bool cyclic;
 };
 
 struct s3c24xx_dma_chan;
@@ -669,8 +671,10 @@ static irqreturn_t s3c24xx_dma_irq(int irq, void *data)
 		/* when more sg's are in this txd, start the next one */
 		if (!list_is_last(txd->at, &txd->dsg_list)) {
 			txd->at = txd->at->next;
+			if (txd->cyclic)
+				vchan_cyclic_callback(&txd->vd);
 			s3c24xx_dma_start_next_sg(s3cchan, txd);
-		} else {
+		} else if (!txd->cyclic) {
 			s3cchan->at = NULL;
 			vchan_cookie_complete(&txd->vd);
 
@@ -682,6 +686,12 @@ static irqreturn_t s3c24xx_dma_irq(int irq, void *data)
 				s3c24xx_dma_start_next_txd(s3cchan);
 			else
 				s3c24xx_dma_phy_free(s3cchan);
+		} else {
+			vchan_cyclic_callback(&txd->vd);
+
+			/* Cyclic: reset at beginning */
+			txd->at = txd->dsg_list.next;
+			s3c24xx_dma_start_next_sg(s3cchan, txd);
 		}
 	}
 	spin_unlock(&s3cchan->vc.lock);
@@ -877,6 +887,104 @@ static struct dma_async_tx_descriptor *s3c24xx_dma_prep_memcpy(
 	return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags);
 }
 
+static struct dma_async_tx_descriptor *s3c24xx_dma_prep_dma_cyclic(
+	struct dma_chan *chan, dma_addr_t addr, size_t size, size_t period,
+	enum dma_transfer_direction direction, unsigned long flags,
+	void *context)
+{
+	struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
+	struct s3c24xx_dma_engine *s3cdma = s3cchan->host;
+	const struct s3c24xx_dma_platdata *pdata = s3cdma->pdata;
+	struct s3c24xx_dma_channel *cdata = &pdata->channels[s3cchan->id];
+	struct s3c24xx_txd *txd;
+	struct s3c24xx_sg *dsg;
+	unsigned sg_len;
+	dma_addr_t slave_addr;
+	u32 hwcfg = 0;
+	int i;
+
+	dev_dbg(&s3cdma->pdev->dev,
+		"prepare cyclic transaction of %zu bytes with period %zu from %s\n",
+		size, period, s3cchan->name);
+
+	if (!is_slave_direction(direction)) {
+		dev_err(&s3cdma->pdev->dev,
+			"direction %d unsupported\n", direction);
+		return NULL;
+	}
+
+	txd = s3c24xx_dma_get_txd();
+	if (!txd)
+		return NULL;
+
+	txd->cyclic = 1;
+
+	if (cdata->handshake)
+		txd->dcon |= S3C24XX_DCON_HANDSHAKE;
+
+	switch (cdata->bus) {
+	case S3C24XX_DMA_APB:
+		txd->dcon |= S3C24XX_DCON_SYNC_PCLK;
+		hwcfg |= S3C24XX_DISRCC_LOC_APB;
+		break;
+	case S3C24XX_DMA_AHB:
+		txd->dcon |= S3C24XX_DCON_SYNC_HCLK;
+		hwcfg |= S3C24XX_DISRCC_LOC_AHB;
+		break;
+	}
+
+	/*
+	 * Always assume our peripheral destination is a fixed
+	 * address in memory.
+	 */
+	hwcfg |= S3C24XX_DISRCC_INC_FIXED;
+
+	/*
+	 * Individual dma operations are requested by the slave,
+	 * so serve only single atomic operations (S3C24XX_DCON_SERV_SINGLE).
+	 */
+	txd->dcon |= S3C24XX_DCON_SERV_SINGLE;
+
+	if (direction == DMA_MEM_TO_DEV) {
+		txd->disrcc = S3C24XX_DISRCC_LOC_AHB |
+			S3C24XX_DISRCC_INC_INCREMENT;
+		txd->didstc = hwcfg;
+		slave_addr = s3cchan->cfg.dst_addr;
+		txd->width = s3cchan->cfg.dst_addr_width;
+	} else {
+		txd->disrcc = hwcfg;
+		txd->didstc = S3C24XX_DIDSTC_LOC_AHB |
+			S3C24XX_DIDSTC_INC_INCREMENT;
+		slave_addr = s3cchan->cfg.src_addr;
+		txd->width = s3cchan->cfg.src_addr_width;
+	}
+
+	sg_len = size / period;
+
+	for (i = 0; i < sg_len; i++) {
+		dsg = kzalloc(sizeof(*dsg), GFP_NOWAIT);
+		if (!dsg) {
+			s3c24xx_dma_free_txd(txd);
+			return NULL;
+		}
+		list_add_tail(&dsg->node, &txd->dsg_list);
+
+		dsg->len = period;
+		/* Check last period length */
+		if (i == (sg_len - 1))
+			dsg->len = size - (period * i);
+		if (direction == DMA_MEM_TO_DEV) {
+			dsg->src_addr = addr + (period * i);
+			dsg->dst_addr = slave_addr;
+		} else { /* DMA_DEV_TO_MEM */
+			dsg->src_addr = slave_addr;
+			dsg->dst_addr = addr + (period * i);
+		}
+	}
+
+	return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags);
+}
+
 static struct dma_async_tx_descriptor *s3c24xx_dma_prep_slave_sg(
 		struct dma_chan *chan, struct scatterlist *sgl,
 		unsigned int sg_len, enum dma_transfer_direction direction,
@@ -1197,6 +1305,7 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
 
 	/* Initialize slave engine for SoC internal dedicated peripherals */
 	dma_cap_set(DMA_SLAVE, s3cdma->slave.cap_mask);
+	dma_cap_set(DMA_CYCLIC, s3cdma->slave.cap_mask);
 	dma_cap_set(DMA_PRIVATE, s3cdma->slave.cap_mask);
 	s3cdma->slave.dev = &pdev->dev;
 	s3cdma->slave.device_alloc_chan_resources =
@@ -1206,6 +1315,7 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
 	s3cdma->slave.device_tx_status = s3c24xx_dma_tx_status;
 	s3cdma->slave.device_issue_pending = s3c24xx_dma_issue_pending;
 	s3cdma->slave.device_prep_slave_sg = s3c24xx_dma_prep_slave_sg;
+	s3cdma->slave.device_prep_dma_cyclic = s3c24xx_dma_prep_dma_cyclic;
 	s3cdma->slave.device_control = s3c24xx_dma_control;
 
 	/* Register as many memcpy channels as there are physical channels */
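
For reference, a peripheral driver (for example an ASoC platform driver) would
reach the new device_prep_dma_cyclic callback through the generic dmaengine
API. The sketch below is illustrative only and is not part of this patch; the
helper name start_cyclic_rx, the FIFO address and the buffer/period sizes are
made up for the example.

#include <linux/dmaengine.h>

/* Hypothetical consumer: cycle over a DMA-mapped ring buffer for RX. */
static int start_cyclic_rx(struct dma_chan *chan, dma_addr_t buf,
			   size_t buf_len, size_t period_len,
			   dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_DEV_TO_MEM,
		.src_addr = fifo_addr,
		.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	struct dma_async_tx_descriptor *desc;

	dmaengine_slave_config(chan, &cfg);

	/* Dispatches to s3c24xx_dma_prep_dma_cyclic() on this driver */
	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	/* The completion callback fires once per period via
	 * vchan_cyclic_callback() in the IRQ handler above. */
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}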