From patchwork Tue May 21 09:36:45 2019
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 10953267
From: Peter Ujfalusi
To:
CC:
Subject: [PATCH v3 1/2] dmaengine: ti: edma: Clean up the 2x32bit array register accesses
Date: Tue, 21 May 2019 12:36:45 +0300
Message-ID: <20190521093646.21836-2-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190521093646.21836-1-peter.ujfalusi@ti.com>
References: <20190521093646.21836-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Introduce defines for getting the
array index and the bit number within the 64bit array register pairs.

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/edma.c | 106 ++++++++++++++++++++++++------------------
 1 file changed, 61 insertions(+), 45 deletions(-)

diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
index ceabdea40ae0..a39f817b3888 100644
--- a/drivers/dma/ti/edma.c
+++ b/drivers/dma/ti/edma.c
@@ -133,6 +133,17 @@
 #define EDMA_CONT_PARAMS_FIXED_EXACT	 1002
 #define EDMA_CONT_PARAMS_FIXED_NOT_EXACT 1003
 
+/*
+ * 64bit array registers are split into two 32bit registers:
+ * reg0: channel/event 0-31
+ * reg1: channel/event 32-63
+ *
+ * bit 5 in the channel number tells the array index (0/1)
+ * bit 0-4 (0x1f) is the bit offset within the register
+ */
+#define EDMA_REG_ARRAY_INDEX(channel)	((channel) >> 5)
+#define EDMA_CHANNEL_BIT(channel)	(BIT((channel) & 0x1f))
+
 /* PaRAM slots are laid out like this */
 struct edmacc_param {
 	u32 opt;
@@ -441,15 +452,14 @@ static void edma_setup_interrupt(struct edma_chan *echan, bool enable)
 {
 	struct edma_cc *ecc = echan->ecc;
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
+	int idx = EDMA_REG_ARRAY_INDEX(channel);
+	int ch_bit = EDMA_CHANNEL_BIT(channel);
 
 	if (enable) {
-		edma_shadow0_write_array(ecc, SH_ICR, channel >> 5,
-					 BIT(channel & 0x1f));
-		edma_shadow0_write_array(ecc, SH_IESR, channel >> 5,
-					 BIT(channel & 0x1f));
+		edma_shadow0_write_array(ecc, SH_ICR, idx, ch_bit);
+		edma_shadow0_write_array(ecc, SH_IESR, idx, ch_bit);
 	} else {
-		edma_shadow0_write_array(ecc, SH_IECR, channel >> 5,
-					 BIT(channel & 0x1f));
+		edma_shadow0_write_array(ecc, SH_IECR, idx, ch_bit);
 	}
 }
 
@@ -587,26 +597,26 @@ static void edma_start(struct edma_chan *echan)
 {
 	struct edma_cc *ecc = echan->ecc;
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
-	int j = (channel >> 5);
-	unsigned int mask = BIT(channel & 0x1f);
+	int idx = EDMA_REG_ARRAY_INDEX(channel);
+	int ch_bit = EDMA_CHANNEL_BIT(channel);
 
 	if (!echan->hw_triggered) {
 		/* EDMA channels without event association */
-		dev_dbg(ecc->dev, "ESR%d %08x\n", j,
-			edma_shadow0_read_array(ecc, SH_ESR, j));
-		edma_shadow0_write_array(ecc, SH_ESR, j, mask);
+		dev_dbg(ecc->dev, "ESR%d %08x\n", idx,
+			edma_shadow0_read_array(ecc, SH_ESR, idx));
+		edma_shadow0_write_array(ecc, SH_ESR, idx, ch_bit);
 	} else {
 		/* EDMA channel with event association */
-		dev_dbg(ecc->dev, "ER%d %08x\n", j,
-			edma_shadow0_read_array(ecc, SH_ER, j));
+		dev_dbg(ecc->dev, "ER%d %08x\n", idx,
+			edma_shadow0_read_array(ecc, SH_ER, idx));
 		/* Clear any pending event or error */
-		edma_write_array(ecc, EDMA_ECR, j, mask);
-		edma_write_array(ecc, EDMA_EMCR, j, mask);
+		edma_write_array(ecc, EDMA_ECR, idx, ch_bit);
+		edma_write_array(ecc, EDMA_EMCR, idx, ch_bit);
 		/* Clear any SER */
-		edma_shadow0_write_array(ecc, SH_SECR, j, mask);
-		edma_shadow0_write_array(ecc, SH_EESR, j, mask);
-		dev_dbg(ecc->dev, "EER%d %08x\n", j,
-			edma_shadow0_read_array(ecc, SH_EER, j));
+		edma_shadow0_write_array(ecc, SH_SECR, idx, ch_bit);
+		edma_shadow0_write_array(ecc, SH_EESR, idx, ch_bit);
+		dev_dbg(ecc->dev, "EER%d %08x\n", idx,
+			edma_shadow0_read_array(ecc, SH_EER, idx));
 	}
 }
 
@@ -614,19 +624,19 @@ static void edma_stop(struct edma_chan *echan)
 {
 	struct edma_cc *ecc = echan->ecc;
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
-	int j = (channel >> 5);
-	unsigned int mask = BIT(channel & 0x1f);
+	int idx = EDMA_REG_ARRAY_INDEX(channel);
+	int ch_bit = EDMA_CHANNEL_BIT(channel);
 
-	edma_shadow0_write_array(ecc, SH_EECR, j, mask);
-	edma_shadow0_write_array(ecc, SH_ECR, j, mask);
-	edma_shadow0_write_array(ecc, SH_SECR, j, mask);
-	edma_write_array(ecc, EDMA_EMCR, j, mask);
+	edma_shadow0_write_array(ecc, SH_EECR, idx, ch_bit);
+	edma_shadow0_write_array(ecc, SH_ECR, idx, ch_bit);
+	edma_shadow0_write_array(ecc, SH_SECR, idx, ch_bit);
+	edma_write_array(ecc, EDMA_EMCR, idx, ch_bit);
 
 	/* clear possibly pending completion interrupt */
-	edma_shadow0_write_array(ecc, SH_ICR, j, mask);
+	edma_shadow0_write_array(ecc, SH_ICR, idx, ch_bit);
 
-	dev_dbg(ecc->dev, "EER%d %08x\n", j,
-		edma_shadow0_read_array(ecc, SH_EER, j));
+	dev_dbg(ecc->dev, "EER%d %08x\n", idx,
+		edma_shadow0_read_array(ecc, SH_EER, idx));
 
 	/* REVISIT: consider guarding against inappropriate event
 	 * chaining by overwriting with dummy_paramset.
@@ -640,45 +650,49 @@ static void edma_stop(struct edma_chan *echan)
 static void edma_pause(struct edma_chan *echan)
 {
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
-	unsigned int mask = BIT(channel & 0x1f);
 
-	edma_shadow0_write_array(echan->ecc, SH_EECR, channel >> 5, mask);
+	edma_shadow0_write_array(echan->ecc, SH_EECR,
+				 EDMA_REG_ARRAY_INDEX(channel),
+				 EDMA_CHANNEL_BIT(channel));
 }
 
 /* Re-enable EDMA hardware events on the specified channel. */
 static void edma_resume(struct edma_chan *echan)
 {
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
-	unsigned int mask = BIT(channel & 0x1f);
 
-	edma_shadow0_write_array(echan->ecc, SH_EESR, channel >> 5, mask);
+	edma_shadow0_write_array(echan->ecc, SH_EESR,
+				 EDMA_REG_ARRAY_INDEX(channel),
+				 EDMA_CHANNEL_BIT(channel));
 }
 
 static void edma_trigger_channel(struct edma_chan *echan)
 {
 	struct edma_cc *ecc = echan->ecc;
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
-	unsigned int mask = BIT(channel & 0x1f);
+	int idx = EDMA_REG_ARRAY_INDEX(channel);
+	int ch_bit = EDMA_CHANNEL_BIT(channel);
 
-	edma_shadow0_write_array(ecc, SH_ESR, (channel >> 5), mask);
+	edma_shadow0_write_array(ecc, SH_ESR, idx, ch_bit);
 
-	dev_dbg(ecc->dev, "ESR%d %08x\n", (channel >> 5),
-		edma_shadow0_read_array(ecc, SH_ESR, (channel >> 5)));
+	dev_dbg(ecc->dev, "ESR%d %08x\n", idx,
+		edma_shadow0_read_array(ecc, SH_ESR, idx));
 }
 
 static void edma_clean_channel(struct edma_chan *echan)
 {
 	struct edma_cc *ecc = echan->ecc;
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
-	int j = (channel >> 5);
-	unsigned int mask = BIT(channel & 0x1f);
+	int idx = EDMA_REG_ARRAY_INDEX(channel);
+	int ch_bit = EDMA_CHANNEL_BIT(channel);
 
-	dev_dbg(ecc->dev, "EMR%d %08x\n", j, edma_read_array(ecc, EDMA_EMR, j));
-	edma_shadow0_write_array(ecc, SH_ECR, j, mask);
+	dev_dbg(ecc->dev, "EMR%d %08x\n", idx,
+		edma_read_array(ecc, EDMA_EMR, idx));
+	edma_shadow0_write_array(ecc, SH_ECR, idx, ch_bit);
 	/* Clear the corresponding EMR bits */
-	edma_write_array(ecc, EDMA_EMCR, j, mask);
+	edma_write_array(ecc, EDMA_EMCR, idx, ch_bit);
 	/* Clear any SER */
-	edma_shadow0_write_array(ecc, SH_SECR, j, mask);
+	edma_shadow0_write_array(ecc, SH_SECR, idx, ch_bit);
 	edma_write(ecc, EDMA_CCERRCLR, BIT(16) | BIT(1) | BIT(0));
 }
 
@@ -708,7 +722,8 @@ static int edma_alloc_channel(struct edma_chan *echan,
 	int channel = EDMA_CHAN_SLOT(echan->ch_num);
 
 	/* ensure access through shadow region 0 */
-	edma_or_array2(ecc, EDMA_DRAE, 0, channel >> 5, BIT(channel & 0x1f));
+	edma_or_array2(ecc, EDMA_DRAE, 0, EDMA_REG_ARRAY_INDEX(channel),
+		       EDMA_CHANNEL_BIT(channel));
 
 	/* ensure no events are pending */
 	edma_stop(echan);
@@ -2482,8 +2497,9 @@ static int edma_pm_resume(struct device *dev)
 	for (i = 0; i < ecc->num_channels; i++) {
 		if (echan[i].alloced) {
 			/* ensure access through shadow region 0 */
-			edma_or_array2(ecc, EDMA_DRAE, 0, i >> 5,
-				       BIT(i & 0x1f));
+			edma_or_array2(ecc, EDMA_DRAE, 0,
+				       EDMA_REG_ARRAY_INDEX(i),
+				       EDMA_CHANNEL_BIT(i));
 			edma_setup_interrupt(&echan[i], true);
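
For reference, the mapping done by the two new helpers can be checked with a
small standalone program. This is not part of the patch; BIT() is a kernel
macro, so it is expanded by hand here, and the channel numbers are arbitrary
examples:

#include <stdio.h>

/* Same arithmetic as the new helpers; BIT(n) expanded to (1u << (n)). */
#define EDMA_REG_ARRAY_INDEX(channel)	((channel) >> 5)
#define EDMA_CHANNEL_BIT(channel)	(1u << ((channel) & 0x1f))

int main(void)
{
	unsigned int channels[] = { 0, 5, 31, 32, 37, 63 };
	unsigned int i;

	for (i = 0; i < sizeof(channels) / sizeof(channels[0]); i++) {
		unsigned int ch = channels[i];

		printf("channel %2u -> register index %u, bit mask 0x%08x\n",
		       ch, EDMA_REG_ARRAY_INDEX(ch), EDMA_CHANNEL_BIT(ch));
	}

	return 0;
}

Channel 31 stays in the first register (index 0, mask 0x80000000), while
channel 32 moves to the second one (index 1, mask 0x00000001) - exactly the
split the driver previously open-coded as "channel >> 5" and
"BIT(channel & 0x1f)".
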
From patchwork Tue May 21 09:36:46 2019
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 10953271
From: Peter Ujfalusi
To:
CC:
Subject: [PATCH v3 2/2] dmaengine: ti: edma: Enable support for polled (memcpy) completion
Date: Tue, 21 May 2019 12:36:46 +0300
Message-ID: <20190521093646.21836-3-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190521093646.21836-1-peter.ujfalusi@ti.com>
References: <20190521093646.21836-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

When a DMA client driver does not provide a completion callback for a
transfer (and/or does not set DMA_PREP_INTERRUPT) but instead polls the
status of the transfer (for a short memcpy, for example), no completion
interrupt is raised and the transaction is never marked as done.

Check the event registers (ER and EER) and, if the channel is inactive,
return DMA_COMPLETE to let the client know that the transfer has completed.

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/edma.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
index a39f817b3888..1ad539c38468 100644
--- a/drivers/dma/ti/edma.c
+++ b/drivers/dma/ti/edma.c
@@ -1226,8 +1226,9 @@ static struct dma_async_tx_descriptor *edma_prep_dma_memcpy(
 
 	edesc->pset[0].param.opt |= ITCCHEN;
 	if (nslots == 1) {
-		/* Enable transfer complete interrupt */
-		edesc->pset[0].param.opt |= TCINTEN;
+		/* Enable transfer complete interrupt if requested */
+		if (tx_flags & DMA_PREP_INTERRUPT)
+			edesc->pset[0].param.opt |= TCINTEN;
 	} else {
 		/* Enable transfer complete chaining for the first slot */
 		edesc->pset[0].param.opt |= TCCHEN;
@@ -1254,7 +1255,9 @@ static struct dma_async_tx_descriptor *edma_prep_dma_memcpy(
 		}
 
 		edesc->pset[1].param.opt |= ITCCHEN;
-		edesc->pset[1].param.opt |= TCINTEN;
+		/* Enable transfer complete interrupt if requested */
+		if (tx_flags & DMA_PREP_INTERRUPT)
+			edesc->pset[1].param.opt |= TCINTEN;
 	}
 
 	return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
@@ -1816,6 +1819,20 @@ static enum dma_status edma_tx_status(struct dma_chan *chan,
 	unsigned long flags;
 
 	ret = dma_cookie_status(chan, cookie, txstate);
+
+	if (ret != DMA_COMPLETE && echan->edesc && !echan->edesc->cyclic) {
+		struct edma_cc *ecc = echan->ecc;
+		int channel = EDMA_CHAN_SLOT(echan->ch_num);
+		int idx = EDMA_REG_ARRAY_INDEX(channel);
+		int ch_bit = EDMA_CHANNEL_BIT(channel);
+		unsigned int sh_er = edma_shadow0_read_array(ecc, SH_ER, idx);
+		unsigned int sh_eer = edma_shadow0_read_array(ecc, SH_EER, idx);
+
+		/* The channel is no longer active */
+		if (!(sh_er & ch_bit) && !(sh_eer & ch_bit))
+			ret = DMA_COMPLETE;
+	}
+
 	if (ret == DMA_COMPLETE || !txstate)
 		return ret;
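
To illustrate what the polled completion path enables on the client side,
here is a minimal sketch (not part of the patch set) of a memcpy submitted
without DMA_PREP_INTERRUPT and completed by polling the transfer status.
The helper name and error codes are illustrative only, and a previously
requested dma_chan plus DMA-mapped buffers are assumed:

#include <linux/dmaengine.h>
#include <linux/delay.h>
#include <linux/errno.h>

/* Illustrative helper: copy 'len' bytes and busy-wait for completion. */
static int example_polled_memcpy(struct dma_chan *chan, dma_addr_t dst,
				 dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *desc;
	enum dma_status status;
	dma_cookie_t cookie;

	/* flags == 0: no DMA_PREP_INTERRUPT, so edma will not set TCINTEN */
	desc = dmaengine_prep_dma_memcpy(chan, dst, src, len, 0);
	if (!desc)
		return -ENOMEM;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return -EINVAL;

	dma_async_issue_pending(chan);

	/* Poll; edma_tx_status() now reports DMA_COMPLETE once ER/EER clear */
	do {
		status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
		if (status == DMA_ERROR)
			return -EIO;
		udelay(1);
	} while (status != DMA_COMPLETE);

	return 0;
}

The dmaengine core also provides dma_sync_wait(), which wraps a similar
polling loop with a timeout, for clients that prefer not to open-code it.
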