From patchwork Thu Apr 17 14:40:47 2014
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 4009111
X-Patchwork-Delegate: vinod.koul@intel.com
Message-Id: <20140417143250.066998289@linutronix.de>
Date: Thu, 17 Apr 2014 14:40:47 -0000
From: Thomas Gleixner
To: dmaengine@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, dan.j.williams@intel.com,
 vinod.koul@intel.com, nsekhar@ti.com, joelf@ti.com, Peter Ujfalusi
Subject: [patch 6/6] dma: edma: Provide granular accounting
References: <20140417133737.892475126@linutronix.de>

The first PaRAM slot of the channel holds the currently active
subtransfer. Depending on the direction we read either the source or
the destination address from there. The internal psets hold the
address of the buffer(s).

In the cyclic case we only use the internal pset[0], which holds the
start address of the circular buffer, and calculate the remaining
room to the end of the buffer.

In the SG case we read the current address and compare it to the
address and length of the internal psets. If the current address is
outside of that range, the pset has already been processed: we mark
it done, update the residue value and move on to the next set. If it
is inside the range, we are looking at the currently active set and
stop the walk. In case of intermediate transfers we update the
statistics in the callback function before starting the next batch of
transfers.

The tx_status callback and the interrupt callback are serialized via
vchan.lock. In the unlikely case that reading the PaRAM slot fails
due to a concurrent update by the DMA engine, we return the last
known good value.
Signed-off-by: Thomas Gleixner
---
 drivers/dma/edma.c |   81 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 79 insertions(+), 2 deletions(-)

Index: linux-2.6/drivers/dma/edma.c
===================================================================
--- linux-2.6.orig/drivers/dma/edma.c
+++ linux-2.6/drivers/dma/edma.c
@@ -71,7 +71,10 @@ struct edma_desc {
         int                             absync;
         int                             pset_nr;
         int                             processed;
+        int                             processed_stat;
         u32                             residue;
+        u32                             residue_stat;
+        int                             slot0;
         struct edma_pset                pset[0];
 };
 
@@ -448,6 +451,8 @@ static struct dma_async_tx_descriptor *e
                 }
         }
 
+        edesc->slot0 = echan->slot[0];
+
         /* Configure PaRAM sets for each SG */
         for_each_sg(sgl, sg, sg_len, i) {
                 /* Get address for each SG */
@@ -476,6 +481,7 @@ static struct dma_async_tx_descriptor *e
                 if (i == sg_len - 1)
                         edesc->pset[i].hwpar.opt |= TCINTEN;
         }
+        edesc->residue_stat = edesc->residue;
 
         return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
 }
@@ -543,7 +549,7 @@ static struct dma_async_tx_descriptor *e
         edesc->cyclic = 1;
         edesc->pset_nr = nslots;
-        edesc->residue = buf_len;
+        edesc->residue = edesc->residue_stat = buf_len;
         edesc->direction = direction;
 
         dev_dbg(dev, "%s: nslots=%d\n", __func__, nslots);
@@ -613,6 +619,7 @@ static struct dma_async_tx_descriptor *e
                  */
                 edesc->pset[i].hwpar.opt |= TCINTEN;
         }
+        edesc->slot0 = echan->slot[0];
 
         return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
 }
@@ -645,7 +652,17 @@ static void edma_callback(unsigned ch_nu
                         vchan_cookie_complete(&edesc->vdesc);
                         edma_execute(echan);
                 } else {
+                        int i, n;
+
                         dev_dbg(dev, "Intermediate transfer complete on channel %d\n", ch_num);
+
+                        /* Update statistics for tx_status */
+                        n = edesc->processed;
+                        for (i = edesc->processed_stat; i < n; i++)
+                                edesc->residue -= edesc->pset[i].len;
+                        edesc->processed_stat = n;
+                        edesc->residue_stat = edesc->residue;
+
                         edma_execute(echan);
                 }
         }
@@ -773,6 +790,66 @@ static void edma_issue_pending(struct dm
         spin_unlock_irqrestore(&echan->vchan.lock, flags);
 }
 
+static u32 edma_residue(struct edma_desc *edesc)
+{
+        bool dst = edesc->direction == DMA_DEV_TO_MEM;
+        struct edma_pset *pset = edesc->pset;
+        dma_addr_t done, pos;
+        int ret, i;
+
+        /*
+         * We always read the dst/src position from the first RamPar
+         * pset. That's the one which is active now.
+         */
+        ret = edma_get_position(edesc->slot0, &pos, dst);
+
+        /*
+         * edma_get_position() can fail due to concurrent
+         * updates to the pset. Unlikely, but can happen.
+         * Return the last known residue value.
+         */
+        if (ret)
+                return edesc->residue_stat;
+
+        /*
+         * Cyclic is simple. Just subtract pset[0].addr from pos.
+         *
+         * We never update edesc->residue in the cyclic case, so we
+         * can tell the remaining room to the end of the circular
+         * buffer.
+         */
+        if (edesc->cyclic) {
+                done = pos - pset->addr;
+                edesc->residue_stat = edesc->residue - done;
+                return edesc->residue_stat;
+        }
+
+        /*
+         * For SG operation we catch up with the last processed
+         * status.
+         */
+        pset += edesc->processed_stat;
+
+        for (i = edesc->processed_stat; i < edesc->processed; i++, pset++) {
+                /*
+                 * If we are inside this pset address range, we know
+                 * this is the active one. Get the current delta and
+                 * stop walking the psets.
+                 */
+                if (pos >= pset->addr && pos < pset->addr + pset->len) {
+                        edesc->residue_stat = edesc->residue;
+                        edesc->residue_stat -= pos - pset->addr;
+                        break;
+                }
+
+                /* Otherwise mark it done and update residue[_stat]. */
+                edesc->processed_stat++;
+                edesc->residue -= pset->len;
+                edesc->residue_stat = edesc->residue;
+        }
+        return edesc->residue_stat;
+}
+
 /* Check request completion status */
 static enum dma_status edma_tx_status(struct dma_chan *chan,
                                       dma_cookie_t cookie,
@@ -789,7 +866,7 @@ static enum dma_status edma_tx_status(st
 
         spin_lock_irqsave(&echan->vchan.lock, flags);
         if (echan->edesc && echan->edesc->vdesc.tx.cookie == cookie)
-                txstate->residue = echan->edesc->residue;
+                txstate->residue = edma_residue(echan->edesc);
         else if ((vdesc = vchan_find_desc(&echan->vchan, cookie)))
                 txstate->residue = to_edma_desc(&vdesc->tx)->residue;
         spin_unlock_irqrestore(&echan->vchan.lock, flags);
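
For reference, and not part of the patch itself: the value computed by
edma_residue() reaches DMA clients through the generic dmaengine API,
via the tx_status path changed in the last hunk. The sketch below is a
minimal client-side illustration of how the now-granular residue would
be observed; it assumes a channel and cookie obtained through the usual
dmaengine slave setup, and the function name and device pointer are
placeholders, not anything defined by this patch.

#include <linux/dmaengine.h>
#include <linux/device.h>

/*
 * Illustrative only -- not part of this patch. Returns how many bytes
 * of the transfer identified by @cookie are still outstanding on
 * @chan. With this patch applied, the EDMA driver fills in
 * state.residue from edma_residue(), i.e. it tracks the currently
 * active PaRAM set instead of reporting a whole-descriptor value.
 */
static u32 example_bytes_left(struct dma_chan *chan, dma_cookie_t cookie,
                              struct device *dev)
{
        struct dma_tx_state state;
        enum dma_status status;

        status = dmaengine_tx_status(chan, cookie, &state);

        switch (status) {
        case DMA_COMPLETE:
                return 0;
        case DMA_IN_PROGRESS:
        case DMA_PAUSED:
                /* Residue filled in by the driver's tx_status hook */
                return state.residue;
        default:
                dev_warn(dev, "transfer failed, residue unreliable\n");
                return state.residue;
        }
}

Because edma_tx_status() and edma_callback() both run under vchan.lock,
a client polling like this sees a consistent snapshot of the
residue/processed_stat accounting.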