From patchwork Wed Oct 14 13:12:18 2015
X-Patchwork-Submitter: Peter Ujfalusi <peter.ujfalusi@ti.com>
X-Patchwork-Id: 7395221
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 07/13] dmaengine: edma: Refactor the dma device and channel struct initialization
Date: Wed, 14 Oct 2015 16:12:18 +0300
Message-ID: <1444828344-21378-8-git-send-email-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.6.1
In-Reply-To: <1444828344-21378-1-git-send-email-peter.ujfalusi@ti.com>
References: <1444828344-21378-1-git-send-email-peter.ujfalusi@ti.com>
Cc: devicetree@vger.kernel.org, arnd@arndb.de, tony@atomide.com,
 r.schwebel@pengutronix.de, linux-kernel@vger.kernel.org,
 dmaengine@vger.kernel.org, olof@lixom.net, linux-omap@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

Move all of the dma device and eDMA channel setup code under one function
so it is not scattered around the driver.
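
For context, this keeps the usual dmaengine provider split: the consolidated
helper only prepares struct dma_device and its channels, while registration
still happens separately in probe(). A minimal, illustrative sketch of that
split (my_cc, my_dma_init() and my_register() are placeholder names, not code
from this driver):

    #include <linux/device.h>
    #include <linux/dmaengine.h>

    /* Placeholder controller context; the driver's real equivalent is struct edma_cc. */
    struct my_cc {
            struct device *dev;
            struct dma_device dma_slave;
    };

    /* Stand-in for a consolidated setup helper: capability mask, callbacks and
     * per-channel init all in one place, which is what edma_dma_init() does below. */
    static void my_dma_init(struct my_cc *cc)
    {
            dma_cap_zero(cc->dma_slave.cap_mask);
            dma_cap_set(DMA_SLAVE, cc->dma_slave.cap_mask);
            /* ... callbacks, bus widths, per-channel vchan setup ... */
            cc->dma_slave.dev = cc->dev;
            INIT_LIST_HEAD(&cc->dma_slave.channels);
    }

    static int my_register(struct my_cc *cc)
    {
            my_dma_init(cc);
            /* Registration itself stays in probe(); only the setup is consolidated. */
            return dma_async_device_register(&cc->dma_slave);
    }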
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 drivers/dma/edma.c | 79 +++++++++++++++++++++++++-----------------------------
 1 file changed, 37 insertions(+), 42 deletions(-)

diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index d064fbc47351..53188b9383a6 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -1750,18 +1750,49 @@ static enum dma_status edma_tx_status(struct dma_chan *chan,
 	return ret;
 }
 
-static void __init edma_chan_init(struct edma_cc *ecc, struct dma_device *dma,
-				  struct edma_chan *echans)
+#define EDMA_DMA_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
+
+static void edma_dma_init(struct edma_cc *ecc)
 {
+	struct dma_device *ddev = &ecc->dma_slave;
 	int i, j;
 
+	dma_cap_zero(ddev->cap_mask);
+	dma_cap_set(DMA_SLAVE, ddev->cap_mask);
+	dma_cap_set(DMA_CYCLIC, ddev->cap_mask);
+	dma_cap_set(DMA_MEMCPY, ddev->cap_mask);
+
+	ddev->device_prep_slave_sg = edma_prep_slave_sg;
+	ddev->device_prep_dma_cyclic = edma_prep_dma_cyclic;
+	ddev->device_prep_dma_memcpy = edma_prep_dma_memcpy;
+	ddev->device_alloc_chan_resources = edma_alloc_chan_resources;
+	ddev->device_free_chan_resources = edma_free_chan_resources;
+	ddev->device_issue_pending = edma_issue_pending;
+	ddev->device_tx_status = edma_tx_status;
+	ddev->device_config = edma_slave_config;
+	ddev->device_pause = edma_dma_pause;
+	ddev->device_resume = edma_dma_resume;
+	ddev->device_terminate_all = edma_terminate_all;
+
+	ddev->src_addr_widths = EDMA_DMA_BUSWIDTHS;
+	ddev->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
+	ddev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	ddev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
+	ddev->dev = ecc->dev;
+
+	INIT_LIST_HEAD(&ddev->channels);
+
 	for (i = 0; i < ecc->num_channels; i++) {
-		struct edma_chan *echan = &echans[i];
+		struct edma_chan *echan = &ecc->slave_chans[i];
 		echan->ch_num = EDMA_CTLR_CHAN(ecc->id, i);
 		echan->ecc = ecc;
 		echan->vchan.desc_free = edma_desc_free;
 
-		vchan_init(&echan->vchan, dma);
+		vchan_init(&echan->vchan, ddev);
 
 		INIT_LIST_HEAD(&echan->node);
 		for (j = 0; j < EDMA_MAX_SLOTS; j++)
@@ -1769,36 +1800,6 @@ static void __init edma_chan_init(struct edma_cc *ecc, struct dma_device *dma,
 	}
 }
 
-#define EDMA_DMA_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
-				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
-				 BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
-				 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
-
-static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
-			  struct device *dev)
-{
-	dma->device_prep_slave_sg = edma_prep_slave_sg;
-	dma->device_prep_dma_cyclic = edma_prep_dma_cyclic;
-	dma->device_prep_dma_memcpy = edma_prep_dma_memcpy;
-	dma->device_alloc_chan_resources = edma_alloc_chan_resources;
-	dma->device_free_chan_resources = edma_free_chan_resources;
-	dma->device_issue_pending = edma_issue_pending;
-	dma->device_tx_status = edma_tx_status;
-	dma->device_config = edma_slave_config;
-	dma->device_pause = edma_dma_pause;
-	dma->device_resume = edma_dma_resume;
-	dma->device_terminate_all = edma_terminate_all;
-
-	dma->src_addr_widths = EDMA_DMA_BUSWIDTHS;
-	dma->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
-	dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-
-	dma->dev = dev;
-
-	INIT_LIST_HEAD(&dma->channels);
-}
-
 static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
 			      struct edma_cc *ecc)
 {
@@ -2131,14 +2132,8 @@ static int edma_probe(struct platform_device *pdev)
 	}
 
 	ecc->info = info;
 
-	dma_cap_zero(ecc->dma_slave.cap_mask);
-	dma_cap_set(DMA_SLAVE, ecc->dma_slave.cap_mask);
-	dma_cap_set(DMA_CYCLIC, ecc->dma_slave.cap_mask);
-	dma_cap_set(DMA_MEMCPY, ecc->dma_slave.cap_mask);
-
-	edma_dma_init(ecc, &ecc->dma_slave, dev);
-
-	edma_chan_init(ecc, &ecc->dma_slave, ecc->slave_chans);
+	/* Init the dma device and channels */
+	edma_dma_init(ecc);
 
 	for (i = 0; i < ecc->num_channels; i++) {
 		/* Assign all channels to the default queue */
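
For reference only: the callbacks that edma_dma_init() now wires up are what
slave-DMA clients reach through the generic dmaengine API. A rough client-side
sketch follows (the device, the "rx" channel name and the transfer parameters
are hypothetical, chosen only to show which provider callback each call lands
in):

    #include <linux/errno.h>
    #include <linux/dmaengine.h>
    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int example_start_rx(struct device *dev, struct scatterlist *sgl,
                                unsigned int sg_len, dma_addr_t fifo_addr)
    {
            struct dma_slave_config cfg = {
                    .direction      = DMA_DEV_TO_MEM,
                    .src_addr       = fifo_addr,
                    .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                    .src_maxburst   = 16,
            };
            struct dma_async_tx_descriptor *desc;
            struct dma_chan *chan;
            int ret;

            chan = dma_request_slave_channel(dev, "rx");    /* channel name is made up */
            if (!chan)
                    return -ENODEV;

            ret = dmaengine_slave_config(chan, &cfg);       /* -> edma_slave_config() */
            if (ret)
                    goto err;

            desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
                                           DMA_PREP_INTERRUPT); /* -> edma_prep_slave_sg() */
            if (!desc) {
                    ret = -EIO;
                    goto err;
            }

            dmaengine_submit(desc);
            dma_async_issue_pending(chan);                  /* -> edma_issue_pending() */
            return 0;

    err:
            dma_release_channel(chan);                      /* -> edma_free_chan_resources() */
            return ret;
    }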