From patchwork Mon Jul 30 19:28:27 2012
X-Patchwork-Submitter: Guennadi Liakhovetski
X-Patchwork-Id: 1256061
Date: Mon, 30 Jul 2012 21:28:27 +0200 (CEST)
From: Guennadi Liakhovetski
To: linux-sh@vger.kernel.org
cc: "Koul, Vinod", Paul Mundt, Magnus Damm, Yoshihiro Shimoda,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] dmaengine: shdma: restore partial transfer calculation

The recent shdma driver split has mistakenly removed support for partial
DMA transfer size calculation on forced termination. This patch restores
it.
Signed-off-by: Guennadi Liakhovetski
---
 drivers/dma/sh/shdma-base.c |    9 +++++++++
 drivers/dma/sh/shdma.c      |   12 ++++++++++++
 include/linux/shdma-base.h  |    2 ++
 3 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
index 27f5c78..f4cd946 100644
--- a/drivers/dma/sh/shdma-base.c
+++ b/drivers/dma/sh/shdma-base.c
@@ -483,6 +483,7 @@ static struct shdma_desc *shdma_add_desc(struct shdma_chan *schan,
 	new->mark = DESC_PREPARED;
 	new->async_tx.flags = flags;
 	new->direction = direction;
+	new->partial = 0;
 
 	*len -= copy_size;
 	if (direction == DMA_MEM_TO_MEM || direction == DMA_MEM_TO_DEV)
@@ -644,6 +645,14 @@ static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
 	case DMA_TERMINATE_ALL:
 		spin_lock_irqsave(&schan->chan_lock, flags);
 		ops->halt_channel(schan);
+
+		if (ops->get_partial && !list_empty(&schan->ld_queue)) {
+			/* Record partial transfer */
+			struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
+						struct shdma_desc, node);
+			desc->partial = ops->get_partial(schan, desc);
+		}
+
 		spin_unlock_irqrestore(&schan->chan_lock, flags);
 
 		shdma_chan_ld_cleanup(schan, true);
diff --git a/drivers/dma/sh/shdma.c b/drivers/dma/sh/shdma.c
index 027c9be..f41bcc5 100644
--- a/drivers/dma/sh/shdma.c
+++ b/drivers/dma/sh/shdma.c
@@ -381,6 +381,17 @@ static bool sh_dmae_chan_irq(struct shdma_chan *schan, int irq)
 	return true;
 }
 
+static size_t sh_dmae_get_partial(struct shdma_chan *schan,
+				  struct shdma_desc *sdesc)
+{
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    shdma_chan);
+	struct sh_dmae_desc *sh_desc = container_of(sdesc,
+					struct sh_dmae_desc, shdma_desc);
+	return (sh_desc->hw.tcr - sh_dmae_readl(sh_chan, TCR)) <<
+		sh_chan->xmit_shift;
+}
+
 /* Called from error IRQ or NMI */
 static bool sh_dmae_reset(struct sh_dmae_device *shdev)
 {
@@ -632,6 +643,7 @@ static const struct shdma_ops sh_dmae_shdma_ops = {
 	.start_xfer = sh_dmae_start_xfer,
 	.embedded_desc = sh_dmae_embedded_desc,
 	.chan_irq = sh_dmae_chan_irq,
+	.get_partial = sh_dmae_get_partial,
 };
 
 static int __devinit sh_dmae_probe(struct platform_device *pdev)
diff --git a/include/linux/shdma-base.h b/include/linux/shdma-base.h
index 93f9821..a3728bf 100644
--- a/include/linux/shdma-base.h
+++ b/include/linux/shdma-base.h
@@ -50,6 +50,7 @@ struct shdma_desc {
 	struct list_head node;
 	struct dma_async_tx_descriptor async_tx;
 	enum dma_transfer_direction direction;
+	size_t partial;
 	dma_cookie_t cookie;
 	int chunks;
 	int mark;
@@ -98,6 +99,7 @@ struct shdma_ops {
 	void (*start_xfer)(struct shdma_chan *, struct shdma_desc *);
 	struct shdma_desc *(*embedded_desc)(void *, int);
 	bool (*chan_irq)(struct shdma_chan *, int);
+	size_t (*get_partial)(struct shdma_chan *, struct shdma_desc *);
 };
 
 struct shdma_dev {
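
Note (not part of the patch): sh_dmae_get_partial() reports how many bytes
the controller had already moved when the channel was halted. TCR counts
transfer units rather than bytes, so the difference between the programmed
count (hw.tcr) and the live TCR register value is shifted left by
xmit_shift to convert units to bytes. A minimal stand-alone sketch of the
same arithmetic, using made-up values in place of the hardware reads:

#include <stdio.h>

/*
 * Illustrative only, with invented register values: mirrors the
 * arithmetic of sh_dmae_get_partial(). xmit_shift is log2 of the
 * transfer unit size, so the shift converts units to bytes.
 */
int main(void)
{
	unsigned int tcr_programmed = 4096; /* hw.tcr: units requested */
	unsigned int tcr_remaining = 1024;  /* TCR at halt: units left */
	unsigned int xmit_shift = 2;        /* 4-byte transfer units */

	size_t done = (size_t)(tcr_programmed - tcr_remaining) << xmit_shift;

	printf("bytes already transferred: %zu\n", done); /* prints 12288 */
	return 0;
}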