From patchwork Thu Jan 2 15:09:40 2014
X-Patchwork-Submitter: Russell King
X-Patchwork-Id: 3425551
From: Russell King
To: dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org
Cc: Vinod Koul, Dan Williams
Subject: [PATCH RFC 05/26] dmaengine: omap-dma: control start/stop directly
Date: Thu, 02 Jan 2014 15:09:40 +0000
In-Reply-To: <20140102150836.GA3826@n2100.arm.linux.org.uk>
References: <20140102150836.GA3826@n2100.arm.linux.org.uk>

Program the non-cyclic mode DMA start/stop directly, rather than via
arch/arm/plat-omap/dma.c.

Signed-off-by: Russell King
---
 drivers/dma/omap-dma.c | 152 ++++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 142 insertions(+), 10 deletions(-)
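For readers without the plat-omap code to hand: the point of the change is
that the driver now performs the channel start/stop register sequence itself,
through the dma_read()/dma_write() accessors in its platform data, instead of
calling omap_start_dma()/omap_stop_dma() in arch/arm/plat-omap/dma.c.  The
sketch below is illustrative only and is not part of the patch: the
sketch_start()/sketch_stop() names are made up, and it is simplified to the
OMAP2+ case; the real omap_dma_start()/omap_dma_stop() in the diff below also
handle OMAP1, the CPC/CDAC and link-control setup, and the i541 FIFO-drain
erratum.

/* Illustrative sketch only -- simplified OMAP2+ start sequence. */
static void sketch_start(struct omap_chan *c, struct omap_desc *d)
{
        uint32_t val;

        /* Acknowledge any stale channel status bits. */
        c->plat->dma_write(~0, CSR, c->dma_ch);

        /* Unmask the interrupts this descriptor asked for. */
        c->plat->dma_write(d->cicr, CICR, c->dma_ch);

        /* Setting the enable bit in CCR starts the transfer. */
        val = c->plat->dma_read(CCR, c->dma_ch);
        val |= OMAP_DMA_CCR_EN;
        c->plat->dma_write(val, CCR, c->dma_ch);
}

/* Illustrative sketch only -- simplified OMAP2+ stop sequence. */
static void sketch_stop(struct omap_chan *c)
{
        uint32_t val;

        /* Mask the channel interrupts and clear pending status. */
        c->plat->dma_write(0, CICR, c->dma_ch);
        c->plat->dma_write(~0, CSR, c->dma_ch);

        /* Clearing the enable bit in CCR stops the transfer. */
        val = c->plat->dma_read(CCR, c->dma_ch);
        val &= ~OMAP_DMA_CCR_EN;
        c->plat->dma_write(val, CCR, c->dma_ch);
}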
diff --git a/drivers/dma/omap-dma.c b/drivers/dma/omap-dma.c
index 602c98aebca8..8e2dd4f658d5 100644
--- a/drivers/dma/omap-dma.c
+++ b/drivers/dma/omap-dma.c
@@ -5,6 +5,7 @@
  * it under the terms of the GNU General Public License version 2 as
  * published by the Free Software Foundation.
  */
+#include <linux/delay.h>
 #include
 #include
 #include
@@ -60,6 +61,7 @@ struct omap_desc {
         uint8_t sync_mode;      /* OMAP_DMA_SYNC_xxx */
         uint8_t sync_type;      /* OMAP_DMA_xxx_SYNC* */
         uint8_t periph_port;    /* Peripheral port */
+        uint16_t cicr;          /* CICR value */
 
         unsigned sglen;
         struct omap_sg sg[0];
@@ -95,6 +97,112 @@ static void omap_dma_desc_free(struct virt_dma_desc *vd)
         kfree(container_of(vd, struct omap_desc, vd));
 }
 
+static void omap_dma_start(struct omap_chan *c, struct omap_desc *d)
+{
+        struct omap_dmadev *od = to_omap_dma_dev(c->vc.chan.device);
+        uint32_t val;
+
+        if (__dma_omap15xx(od->plat->dma_attr))
+                c->plat->dma_write(0, CPC, c->dma_ch);
+        else
+                c->plat->dma_write(0, CDAC, c->dma_ch);
+
+        if (!__dma_omap15xx(od->plat->dma_attr) && c->cyclic) {
+                val = c->plat->dma_read(CLNK_CTRL, c->dma_ch);
+
+                if (dma_omap1())
+                        val &= ~(1 << 14);
+
+                val |= c->dma_ch | 1 << 15;
+
+                c->plat->dma_write(val, CLNK_CTRL, c->dma_ch);
+        } else if (od->plat->errata & DMA_ERRATA_PARALLEL_CHANNELS)
+                c->plat->dma_write(c->dma_ch, CLNK_CTRL, c->dma_ch);
+
+        /* Clear CSR */
+        if (dma_omap1())
+                c->plat->dma_read(CSR, c->dma_ch);
+        else
+                c->plat->dma_write(~0, CSR, c->dma_ch);
+
+        /* Enable interrupts */
+        c->plat->dma_write(d->cicr, CICR, c->dma_ch);
+
+        val = c->plat->dma_read(CCR, c->dma_ch);
+        if (od->plat->errata & DMA_ERRATA_IFRAME_BUFFERING)
+                val |= OMAP_DMA_CCR_BUFFERING_DISABLE;
+        val |= OMAP_DMA_CCR_EN;
+        mb();
+        c->plat->dma_write(val, CCR, c->dma_ch);
+}
+
+static void omap_dma_stop(struct omap_chan *c)
+{
+        struct omap_dmadev *od = to_omap_dma_dev(c->vc.chan.device);
+        uint32_t val;
+
+        /* disable irq */
+        c->plat->dma_write(0, CICR, c->dma_ch);
+
+        /* Clear CSR */
+        if (dma_omap1())
+                c->plat->dma_read(CSR, c->dma_ch);
+        else
+                c->plat->dma_write(~0, CSR, c->dma_ch);
+
+        val = c->plat->dma_read(CCR, c->dma_ch);
+        if (od->plat->errata & DMA_ERRATA_i541 &&
+            val & OMAP_DMA_CCR_SEL_SRC_DST_SYNC) {
+                uint32_t sysconfig;
+                unsigned i;
+
+                sysconfig = c->plat->dma_read(OCP_SYSCONFIG, c->dma_ch);
+                val = sysconfig & ~DMA_SYSCONFIG_MIDLEMODE_MASK;
+                val |= DMA_SYSCONFIG_MIDLEMODE(DMA_IDLEMODE_NO_IDLE);
+                c->plat->dma_write(val, OCP_SYSCONFIG, c->dma_ch);
+
+                val = c->plat->dma_read(CCR, c->dma_ch);
+                val &= ~OMAP_DMA_CCR_EN;
+                c->plat->dma_write(val, CCR, c->dma_ch);
+
+                /* Wait for sDMA FIFO to drain */
+                for (i = 0; ; i++) {
+                        val = c->plat->dma_read(CCR, c->dma_ch);
+                        if (!(val & (OMAP_DMA_CCR_RD_ACTIVE | OMAP_DMA_CCR_WR_ACTIVE)))
+                                break;
+
+                        if (i > 100)
+                                break;
+
+                        udelay(5);
+                }
+
+                if (val & (OMAP_DMA_CCR_RD_ACTIVE | OMAP_DMA_CCR_WR_ACTIVE))
+                        dev_err(c->vc.chan.device->dev,
+                                "DMA drain did not complete on lch %d\n",
+                                c->dma_ch);
+
+                c->plat->dma_write(sysconfig, OCP_SYSCONFIG, c->dma_ch);
+        } else {
+                val &= ~OMAP_DMA_CCR_EN;
+                c->plat->dma_write(val, CCR, c->dma_ch);
+        }
+
+        mb();
+
+        if (!__dma_omap15xx(od->plat->dma_attr) && c->cyclic) {
+                val = c->plat->dma_read(CLNK_CTRL, c->dma_ch);
+
+                if (dma_omap1())
+                        val |= 1 << 14; /* set the STOP_LNK bit */
+
+                if (dma_omap2plus())
+                        val &= ~(1 << 15); /* Clear the ENABLE_LNK bit */
+
+                c->plat->dma_write(val, CLNK_CTRL, c->dma_ch);
+        }
+}
+
 static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d,
         unsigned idx)
 {
@@ -113,7 +221,7 @@ static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d, unsigned idx)
         c->plat->dma_write(sg->en, CEN, c->dma_ch);
         c->plat->dma_write(sg->fn, CFN, c->dma_ch);
 
-        omap_start_dma(c->dma_ch);
+        omap_dma_start(c, d);
 }
 
 static void omap_dma_start_desc(struct omap_chan *c)
@@ -436,6 +544,12 @@ static struct dma_async_tx_descriptor *omap_dma_prep_slave_sg(
         d->sync_mode = OMAP_DMA_SYNC_FRAME;
         d->sync_type = sync_type;
         d->periph_port = OMAP_DMA_PORT_TIPB;
+        d->cicr = OMAP_DMA_DROP_IRQ | OMAP_DMA_BLOCK_IRQ;
+
+        if (dma_omap1())
+                d->cicr |= OMAP1_DMA_TOUT_IRQ;
+        else if (dma_omap2plus())
+                d->cicr |= OMAP2_DMA_MISALIGNED_ERR_IRQ | OMAP2_DMA_TRANS_ERR_IRQ;
 
         /*
          * Build our scatterlist entries: each contains the address,
@@ -465,6 +579,7 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
         size_t period_len, enum dma_transfer_direction dir, unsigned long flags,
         void *context)
 {
+        struct omap_dmadev *od = to_omap_dma_dev(chan->device);
         struct omap_chan *c = to_omap_dma_chan(chan);
         enum dma_slave_buswidth dev_width;
         struct omap_desc *d;
@@ -521,15 +636,25 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
         d->sg[0].en = period_len / es_bytes[es];
         d->sg[0].fn = buf_len / period_len;
         d->sglen = 1;
+        d->cicr = OMAP_DMA_DROP_IRQ;
+        if (flags & DMA_PREP_INTERRUPT)
+                d->cicr |= OMAP_DMA_FRAME_IRQ;
+
+        if (dma_omap1())
+                d->cicr |= OMAP1_DMA_TOUT_IRQ;
+        else if (dma_omap2plus())
+                d->cicr |= OMAP2_DMA_MISALIGNED_ERR_IRQ | OMAP2_DMA_TRANS_ERR_IRQ;
 
         if (!c->cyclic) {
                 c->cyclic = true;
-                omap_dma_link_lch(c->dma_ch, c->dma_ch);
 
-                if (flags & DMA_PREP_INTERRUPT)
-                        omap_enable_dma_irq(c->dma_ch, OMAP_DMA_FRAME_IRQ);
+                if (__dma_omap15xx(od->plat->dma_attr)) {
+                        uint32_t val;
 
-                omap_disable_dma_irq(c->dma_ch, OMAP_DMA_BLOCK_IRQ);
+                        val = c->plat->dma_read(CCR, c->dma_ch);
+                        val |= 3 << 8;
+                        c->plat->dma_write(val, CCR, c->dma_ch);
+                }
         }
 
         if (dma_omap2plus()) {
@@ -570,20 +695,27 @@ static int omap_dma_terminate_all(struct omap_chan *c)
 
         /*
          * Stop DMA activity: we assume the callback will not be called
-         * after omap_stop_dma() returns (even if it does, it will see
+         * after omap_dma_stop() returns (even if it does, it will see
          * c->desc is NULL and exit.)
          */
         if (c->desc) {
                 c->desc = NULL;
                 /* Avoid stopping the dma twice */
                 if (!c->paused)
-                        omap_stop_dma(c->dma_ch);
+                        omap_dma_stop(c);
         }
 
         if (c->cyclic) {
                 c->cyclic = false;
                 c->paused = false;
-                omap_dma_unlink_lch(c->dma_ch, c->dma_ch);
+
+                if (__dma_omap15xx(od->plat->dma_attr)) {
+                        uint32_t val;
+
+                        val = c->plat->dma_read(CCR, c->dma_ch);
+                        val &= ~(3 << 8);
+                        c->plat->dma_write(val, CCR, c->dma_ch);
+                }
         }
 
         vchan_get_all_descriptors(&c->vc, &head);
@@ -600,7 +732,7 @@ static int omap_dma_pause(struct omap_chan *c)
                 return -EINVAL;
 
         if (!c->paused) {
-                omap_stop_dma(c->dma_ch);
+                omap_dma_stop(c);
                 c->paused = true;
         }
 
@@ -614,7 +746,7 @@ static int omap_dma_resume(struct omap_chan *c)
                 return -EINVAL;
 
         if (c->paused) {
-                omap_start_dma(c->dma_ch);
+                omap_dma_start(c, c->desc);
                 c->paused = false;
         }