From patchwork Thu Oct 11 19:04:26 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matt Porter
X-Patchwork-Id: 1582871
From: Matt Porter
To: Tony Lindgren, Sekhar Nori, Grant Likely, Mark Brown, Benoit Cousson,
	Russell King, Vinod Koul, Rob Landley, Chris Ball
Subject: [RFC PATCH v2 01/16] dmaengine: edma: fix slave config dependency
	on direction
Date: Thu, 11 Oct 2012 15:04:26 -0400
Message-Id: <1349982281-10785-2-git-send-email-mporter@ti.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1349982281-10785-1-git-send-email-mporter@ti.com>
References: <1349982281-10785-1-git-send-email-mporter@ti.com>
Cc: Linux DaVinci Kernel List, Arnd Bergmann, Linux Documentation List,
	Devicetree Discuss, Linux MMC List, Linux Kernel Mailing List,
	Rob Herring, Dan Williams, Linux SPI Devel List, Linux OMAP List,
	Linux ARM Kernel List
List-Id: Linux SPI core/device drivers discussion

The edma_slave_config() implementation depends on the direction field,
so it does not properly configure a slave channel when called with the
direction unset. Fix the implementation so that the slave config is
copied as-is and prep_slave_sg() performs the direction-dependent
handling. The spi-omap2-mcspi and omap_hsmmc drivers both expose this
bug, as they configure the slave channel from a common path where the
direction field is not yet set.

Signed-off-by: Matt Porter
---
 drivers/dma/edma.c | 55 ++++++++++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 28 deletions(-)

diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index 05aea3c..fdcf079 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -69,9 +69,7 @@ struct edma_chan {
 	int				ch_num;
 	bool				alloced;
 	int				slot[EDMA_MAX_SLOTS];
-	dma_addr_t			addr;
-	int				addr_width;
-	int				maxburst;
+	struct dma_slave_config		cfg;
 };
 
 struct edma_cc {
@@ -178,29 +176,14 @@ static int edma_terminate_all(struct edma_chan *echan)
 	return 0;
 }
 
-
 static int edma_slave_config(struct edma_chan *echan,
-	struct dma_slave_config *config)
+	struct dma_slave_config *cfg)
 {
-	if ((config->src_addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES) ||
-	    (config->dst_addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES))
+	if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
+	    cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
 		return -EINVAL;
 
-	if (config->direction == DMA_MEM_TO_DEV) {
-		if (config->dst_addr)
-			echan->addr = config->dst_addr;
-		if (config->dst_addr_width)
-			echan->addr_width = config->dst_addr_width;
-		if (config->dst_maxburst)
-			echan->maxburst = config->dst_maxburst;
-	} else if (config->direction == DMA_DEV_TO_MEM) {
-		if (config->src_addr)
-			echan->addr = config->src_addr;
-		if (config->src_addr_width)
-			echan->addr_width = config->src_addr_width;
-		if (config->src_maxburst)
-			echan->maxburst = config->src_maxburst;
-	}
+	memcpy(&echan->cfg, cfg, sizeof(echan->cfg));
 
 	return 0;
 }
@@ -235,6 +218,9 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 	struct edma_chan *echan = to_edma_chan(chan);
 	struct device *dev = chan->device->dev;
 	struct edma_desc *edesc;
+	dma_addr_t dev_addr;
+	enum dma_slave_buswidth dev_width;
+	u32 burst;
 	struct scatterlist *sg;
 	int i;
 	int acnt, bcnt, ccnt, src, dst, cidx;
@@ -243,7 +229,20 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 	if (unlikely(!echan || !sgl || !sg_len))
 		return NULL;
 
-	if (echan->addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
+	if (direction == DMA_DEV_TO_MEM) {
+		dev_addr = echan->cfg.src_addr;
+		dev_width = echan->cfg.src_addr_width;
+		burst = echan->cfg.src_maxburst;
+	} else if (direction == DMA_MEM_TO_DEV) {
+		dev_addr = echan->cfg.dst_addr;
+		dev_width = echan->cfg.dst_addr_width;
+		burst = echan->cfg.dst_maxburst;
+	} else {
+		dev_err(dev, "%s: bad direction?\n", __func__);
+		return NULL;
+	}
+
+	if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
 		dev_err(dev, "Undefined slave buswidth\n");
 		return NULL;
 	}
@@ -275,14 +274,14 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 			}
 		}
 
-		acnt = echan->addr_width;
+		acnt = dev_width;
 
 		/*
 		 * If the maxburst is equal to the fifo width, use
 		 * A-synced transfers. This allows for large contiguous
 		 * buffer transfers using only one PaRAM set.
 		 */
-		if (echan->maxburst == 1) {
+		if (burst == 1) {
 			edesc->absync = false;
 			ccnt = sg_dma_len(sg) / acnt / (SZ_64K - 1);
 			bcnt = sg_dma_len(sg) / acnt - ccnt * (SZ_64K - 1);
@@ -302,7 +301,7 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 			 */
 		} else {
 			edesc->absync = true;
-			bcnt = echan->maxburst;
+			bcnt = burst;
 			ccnt = sg_dma_len(sg) / (acnt * bcnt);
 			if (ccnt > (SZ_64K - 1)) {
 				dev_err(dev, "Exceeded max SG segment size\n");
@@ -313,13 +312,13 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 
 		if (direction == DMA_MEM_TO_DEV) {
 			src = sg_dma_address(sg);
-			dst = echan->addr;
+			dst = dev_addr;
 			src_bidx = acnt;
 			src_cidx = cidx;
 			dst_bidx = 0;
 			dst_cidx = 0;
 		} else {
-			src = echan->addr;
+			src = dev_addr;
 			dst = sg_dma_address(sg);
 			src_bidx = 0;
 			src_cidx = 0;
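
For reference, below is a minimal sketch (not part of this patch) of the
client-side calling pattern the fix accommodates: the slave config is set
once from a direction-agnostic common path, and the direction is only
supplied later at prep time. The function name, FIFO address argument, and
bus width/burst values are hypothetical placeholders; error handling is
reduced to returning NULL.

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

/*
 * Sketch only: configure the channel without setting cfg.direction,
 * then pass the direction to dmaengine_prep_slave_sg().  dev_fifo is a
 * hypothetical device FIFO address.
 */
static struct dma_async_tx_descriptor *client_prep_rx(struct dma_chan *chan,
		struct scatterlist *sgl, unsigned int sg_len,
		dma_addr_t dev_fifo)
{
	struct dma_slave_config cfg;

	memset(&cfg, 0, sizeof(cfg));
	/* cfg.direction is intentionally left unset here */
	cfg.src_addr = dev_fifo;
	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	cfg.src_maxburst = 1;
	cfg.dst_addr = dev_fifo;
	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	cfg.dst_maxburst = 1;

	if (dmaengine_slave_config(chan, &cfg))
		return NULL;

	/* The direction is only known at prep time; prep_slave_sg handles it */
	return dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
}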