From patchwork Tue Sep 23 14:18:14 2014
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 4956731
From: Andy Shevchenko
To: dmaengine@vger.kernel.org, "Koul, Vinod", "Girdwood, Liam R",
	"Chew, Kean Ho", "pure . logic @ nexus-software . ie"
Cc: Andy Shevchenko
Subject: [PATCH v1 5/6] dmaengine: dw: enable and disable controller when needed
Date: Tue, 23 Sep 2014 17:18:14 +0300
Message-Id: <1411481895-30603-6-git-send-email-andriy.shevchenko@linux.intel.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1411481895-30603-1-git-send-email-andriy.shevchenko@linux.intel.com>
References: <1411481895-30603-1-git-send-email-andriy.shevchenko@linux.intel.com>

Enable the controller automatically when the first user requests a
channel, and disable it again when the last user is gone.
Signed-off-by: Andy Shevchenko
---
 drivers/dma/dw/core.c | 60 ++++++++++++++++++++++++++++++---------------------
 drivers/dma/dw/regs.h |  1 +
 2 files changed, 36 insertions(+), 25 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index 4812638..2447221 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -1094,6 +1094,31 @@ static void dwc_issue_pending(struct dma_chan *chan)
 	spin_unlock_irqrestore(&dwc->lock, flags);
 }
 
+/*----------------------------------------------------------------------*/
+
+static void dw_dma_off(struct dw_dma *dw)
+{
+	int i;
+
+	dma_writel(dw, CFG, 0);
+
+	channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
+	channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
+	channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
+	channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
+
+	while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
+		cpu_relax();
+
+	for (i = 0; i < dw->dma.chancnt; i++)
+		dw->chan[i].initialized = false;
+}
+
+static void dw_dma_on(struct dw_dma *dw)
+{
+	dma_writel(dw, CFG, DW_CFG_DMA_EN);
+}
+
 static int dwc_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
@@ -1118,6 +1143,11 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan)
 	 * doesn't mean what you think it means), and status writeback.
 	 */
 
+	/* Enable controller here if needed */
+	if (!dw->in_use)
+		dw_dma_on(dw);
+	dw->in_use |= dwc->mask;
+
 	spin_lock_irqsave(&dwc->lock, flags);
 	i = dwc->descs_allocated;
 	while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) {
@@ -1182,6 +1212,11 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
 
 	spin_unlock_irqrestore(&dwc->lock, flags);
 
+	/* Disable controller in case it was a last user */
+	dw->in_use &= ~dwc->mask;
+	if (!dw->in_use)
+		dw_dma_off(dw);
+
 	list_for_each_entry_safe(desc, _desc, &list, desc_node) {
 		dev_vdbg(chan2dev(chan), "  freeing descriptor %p\n", desc);
 		dma_pool_free(dw->desc_pool, desc, desc->txd.phys);
@@ -1452,29 +1487,6 @@ EXPORT_SYMBOL(dw_dma_cyclic_free);
 
 /*----------------------------------------------------------------------*/
 
-static void dw_dma_off(struct dw_dma *dw)
-{
-	int i;
-
-	dma_writel(dw, CFG, 0);
-
-	channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
-	channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
-	channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
-	channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
-
-	while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
-		cpu_relax();
-
-	for (i = 0; i < dw->dma.chancnt; i++)
-		dw->chan[i].initialized = false;
-}
-
-static void dw_dma_on(struct dw_dma *dw)
-{
-	dma_writel(dw, CFG, DW_CFG_DMA_EN);
-}
-
 int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
 {
 	struct dw_dma		*dw;
@@ -1648,8 +1660,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
 	dw->dma.device_tx_status = dwc_tx_status;
 	dw->dma.device_issue_pending = dwc_issue_pending;
 
-	dw_dma_on(dw);
-
 	err = dma_async_device_register(&dw->dma);
 	if (err)
 		goto err_dma_register;

diff --git a/drivers/dma/dw/regs.h b/drivers/dma/dw/regs.h
index e8f92b2..848e232 100644
--- a/drivers/dma/dw/regs.h
+++ b/drivers/dma/dw/regs.h
@@ -281,6 +281,7 @@ struct dw_dma {
 	/* channels */
 	struct dw_dma_chan	*chan;
 	u8			all_chan_mask;
+	u8			in_use;
 
 	/* hardware configuration */
 	unsigned char		nr_masters;
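
For illustration only (not part of the patch): a minimal standalone C
sketch of the pattern the diff applies, where each channel owns one bit
in an in_use mask, the controller is switched on at the zero to
non-zero transition, and switched off when the mask returns to zero.
All names here (toy_dma, toy_chan, hw_enable, hw_disable) are
hypothetical stand-ins, and the real register writes are replaced by
prints.

#include <stdint.h>
#include <stdio.h>

struct toy_chan {
	uint8_t mask;		/* one unique bit per channel */
};

struct toy_dma {
	uint8_t in_use;		/* OR of the masks of all allocated channels */
};

/* Stand-ins for the dma_writel(dw, CFG, ...) register accesses. */
static void hw_enable(struct toy_dma *dw)  { (void)dw; puts("controller ON");  }
static void hw_disable(struct toy_dma *dw) { (void)dw; puts("controller OFF"); }

/* Mirrors the hunk in dwc_alloc_chan_resources(): power up on first user. */
static void toy_alloc(struct toy_dma *dw, struct toy_chan *c)
{
	if (!dw->in_use)
		hw_enable(dw);
	dw->in_use |= c->mask;
}

/* Mirrors the hunk in dwc_free_chan_resources(): power down after last user. */
static void toy_free(struct toy_dma *dw, struct toy_chan *c)
{
	dw->in_use &= ~c->mask;
	if (!dw->in_use)
		hw_disable(dw);
}

int main(void)
{
	struct toy_dma dw = { 0 };
	struct toy_chan c0 = { .mask = 1 << 0 }, c1 = { .mask = 1 << 1 };

	toy_alloc(&dw, &c0);	/* first user: prints "controller ON" */
	toy_alloc(&dw, &c1);	/* already on: silent */
	toy_free(&dw, &c0);	/* one user still active: silent */
	toy_free(&dw, &c1);	/* last user: prints "controller OFF" */
	return 0;
}

Note that the sketch has no locking: the patch updates in_use outside
dwc->lock, presumably relying on the dmaengine core to serialize
channel allocation and freeing, whereas a standalone version would need
its own synchronization.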