From patchwork Fri Sep 11 12:27:02 2015
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 7161221
From: Peter Ujfalusi
Subject: [PATCH v2 03/23] dmaengine: edma: Simplify and optimize the edma_execute path
Date: Fri, 11 Sep 2015 15:27:02 +0300
Message-ID: <1441974442-8233-4-git-send-email-peter.ujfalusi@ti.com>
In-Reply-To: <1441974442-8233-1-git-send-email-peter.ujfalusi@ti.com>
References: <1441974442-8233-1-git-send-email-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

The code path in edma_execute() and edma_callback() can be simplified and
made more optimal. For example, there is no need to call into edma_execute()
when the transfer has finished. The handling of the missed/first or next
batch of paRAM sets can also be done in a more optimal way.
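As background for the batching mentioned above: a transfer's paRAM sets are
handed to the hardware in chunks of MAX_NR_SG, and the completion callback
re-enters the execute path to queue further chunks. The stand-alone sketch
below models only that batching loop, with made-up names (fake_desc,
fake_execute, fake_callback) and an illustrative batch size; it is a
simplified model of the idea, not the driver's API or the code in this patch.

/*
 * Stand-alone model of the batched execute/callback flow. All names and
 * values here are illustrative; only the shape of the logic mirrors the
 * driver: submit up to BATCH sets at a time, and re-enter the execute
 * step from the completion callback while sets remain.
 */
#include <stdio.h>

#define BATCH 20	/* stand-in for the driver's MAX_NR_SG */

struct fake_desc {
	int pset_nr;	/* total paRAM sets in this transfer */
	int processed;	/* paRAM sets already submitted */
};

/* Submit the next batch; returns how many sets were queued. */
static int fake_execute(struct fake_desc *d)
{
	int left = d->pset_nr - d->processed;
	int batch = left > BATCH ? BATCH : left;

	d->processed += batch;
	return batch;
}

/* Completion "callback": re-run execute only if sets are still pending. */
static void fake_callback(struct fake_desc *d)
{
	if (d->processed == d->pset_nr) {
		printf("transfer complete (%d sets)\n", d->pset_nr);
		return;			/* nothing left to queue */
	}
	printf("intermediate batch done, %d/%d\n", d->processed, d->pset_nr);
	fake_execute(d);		/* queue the next chunk */
}

int main(void)
{
	struct fake_desc d = { .pset_nr = 45, .processed = 0 };

	fake_execute(&d);		/* first batch, started at submit time */
	while (d.processed < d.pset_nr)
		fake_callback(&d);	/* intermediate completion interrupts */
	fake_callback(&d);		/* final completion interrupt */
	return 0;
}

With a 45-set descriptor the model prints two intermediate completions
followed by one final completion. The real driver additionally has to fetch
and start the next queued descriptor, handle cyclic transfers and missed
events; the patch reorders that handling, which the model leaves out.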
Signed-off-by: Peter Ujfalusi
---
 drivers/dma/edma.c | 76 +++++++++++++++++++++---------------------------------
 1 file changed, 29 insertions(+), 47 deletions(-)

diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index 3e5d4f193005..19fa49d6f555 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -154,15 +154,11 @@ static void edma_execute(struct edma_chan *echan)
 	struct device *dev = echan->vchan.chan.device->dev;
 	int i, j, left, nslots;
 
-	/* If either we processed all psets or we're still not started */
-	if (!echan->edesc ||
-	    echan->edesc->pset_nr == echan->edesc->processed) {
-		/* Get next vdesc */
+	if (!echan->edesc) {
+		/* Setup is needed for the first transfer */
 		vdesc = vchan_next_desc(&echan->vchan);
-		if (!vdesc) {
-			echan->edesc = NULL;
+		if (!vdesc)
 			return;
-		}
 		list_del(&vdesc->node);
 		echan->edesc = to_edma_desc(&vdesc->tx);
 	}
@@ -220,28 +216,26 @@ static void edma_execute(struct edma_chan *echan)
 				  echan->ecc->dummy_slot);
 	}
 
-	if (edesc->processed <= MAX_NR_SG) {
-		dev_dbg(dev, "first transfer starting on channel %d\n",
-			echan->ch_num);
-		edma_start(echan->ch_num);
-	} else {
-		dev_dbg(dev, "chan: %d: completed %d elements, resuming\n",
-			echan->ch_num, edesc->processed);
-		edma_resume(echan->ch_num);
-	}
-
-	/*
-	 * This happens due to setup times between intermediate transfers
-	 * in long SG lists which have to be broken up into transfers of
-	 * MAX_NR_SG
-	 */
 	if (echan->missed) {
+		/*
+		 * This happens due to setup times between intermediate
+		 * transfers in long SG lists which have to be broken up into
+		 * transfers of MAX_NR_SG
+		 */
 		dev_dbg(dev, "missed event on channel %d\n", echan->ch_num);
 		edma_clean_channel(echan->ch_num);
 		edma_stop(echan->ch_num);
 		edma_start(echan->ch_num);
 		edma_trigger_channel(echan->ch_num);
 		echan->missed = 0;
+	} else if (edesc->processed <= MAX_NR_SG) {
+		dev_dbg(dev, "first transfer starting on channel %d\n",
+			echan->ch_num);
+		edma_start(echan->ch_num);
+	} else {
+		dev_dbg(dev, "chan: %d: completed %d elements, resuming\n",
+			echan->ch_num, edesc->processed);
+		edma_resume(echan->ch_num);
 	}
 }
 
@@ -259,20 +253,17 @@ static int edma_terminate_all(struct dma_chan *chan)
 	 * echan->edesc is NULL and exit.)
 	 */
 	if (echan->edesc) {
-		int cyclic = echan->edesc->cyclic;
-
+		edma_stop(echan->ch_num);
+		/* Move the cyclic channel back to default queue */
+		if (echan->edesc->cyclic)
+			edma_assign_channel_eventq(echan->ch_num,
+						   EVENTQ_DEFAULT);
 		/*
 		 * free the running request descriptor
 		 * since it is not in any of the vdesc lists
 		 */
 		edma_desc_free(&echan->edesc->vdesc);
-
 		echan->edesc = NULL;
-		edma_stop(echan->ch_num);
-		/* Move the cyclic channel back to default queue */
-		if (cyclic)
-			edma_assign_channel_eventq(echan->ch_num,
-						   EVENTQ_DEFAULT);
 	}
 
 	vchan_get_all_descriptors(&echan->vchan, &head);
@@ -725,41 +716,33 @@ static void edma_callback(unsigned ch_num, u16 ch_status, void *data)
 
 	edesc = echan->edesc;
 
-	/* Pause the channel for non-cyclic */
-	if (!edesc || (edesc && !edesc->cyclic))
-		edma_pause(echan->ch_num);
-
+	spin_lock(&echan->vchan.lock);
 	switch (ch_status) {
 	case EDMA_DMA_COMPLETE:
-		spin_lock(&echan->vchan.lock);
-
 		if (edesc) {
 			if (edesc->cyclic) {
 				vchan_cyclic_callback(&edesc->vdesc);
+				goto out;
 			} else if (edesc->processed == edesc->pset_nr) {
 				dev_dbg(dev, "Transfer complete, stopping channel %d\n", ch_num);
 				edesc->residue = 0;
 				edma_stop(echan->ch_num);
 				vchan_cookie_complete(&edesc->vdesc);
-				edma_execute(echan);
+				echan->edesc = NULL;
 			} else {
 				dev_dbg(dev, "Intermediate transfer complete on channel %d\n", ch_num);
 
+				edma_pause(echan->ch_num);
+
 				/* Update statistics for tx_status */
 				edesc->residue -= edesc->sg_len;
 				edesc->residue_stat = edesc->residue;
 				edesc->processed_stat = edesc->processed;
-
-				edma_execute(echan);
 			}
+			edma_execute(echan);
 		}
-
-		spin_unlock(&echan->vchan.lock);
-
 		break;
 	case EDMA_DMA_CC_ERROR:
-		spin_lock(&echan->vchan.lock);
-
 		edma_read_slot(EDMA_CHAN_SLOT(echan->slot[0]), &p);
 
 		/*
@@ -788,13 +771,12 @@ static void edma_callback(unsigned ch_num, u16 ch_status, void *data)
 			edma_start(echan->ch_num);
 			edma_trigger_channel(echan->ch_num);
 		}
-
-		spin_unlock(&echan->vchan.lock);
-
 		break;
 	default:
 		break;
 	}
+out:
+	spin_unlock(&echan->vchan.lock);
 }
 
 /* Alloc channel resources */
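A note on the locking shape the callback moves to above: the vchan lock is
taken once before the switch and released at a single "out:" label, rather
than being taken and dropped separately in each case. The user-space sketch
below shows the same lock-once/single-exit pattern; the pthread mutex, the
enum and the handle_event() name are illustrative stand-ins, not the
kernel's spinlock API or the driver's code.

/*
 * Minimal sketch of a lock-once, unlock-at-one-label event handler.
 * Illustrative only: a pthread mutex stands in for the channel lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

enum status { STATUS_COMPLETE, STATUS_ERROR };

static void handle_event(enum status st, int cyclic)
{
	pthread_mutex_lock(&lock);	/* single lock for every case */

	switch (st) {
	case STATUS_COMPLETE:
		if (cyclic) {
			printf("cyclic period done\n");
			goto out;	/* early exit still unlocks once */
		}
		printf("completion handled\n");
		break;
	case STATUS_ERROR:
		printf("error recovery\n");
		break;
	default:
		break;
	}
out:
	pthread_mutex_unlock(&lock);	/* single unlock site */
}

int main(void)
{
	handle_event(STATUS_COMPLETE, 1);
	handle_event(STATUS_COMPLETE, 0);
	handle_event(STATUS_ERROR, 0);
	return 0;
}

Keeping one unlock site means an early exit, like the cyclic "goto out" in
the patched callback, cannot leave the handler with the lock still held or
release it twice.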