From patchwork Wed Jun 1 07:20:34 2011
From: Guennadi Liakhovetski
X-Patchwork-Id: 834952
Date: Wed, 1 Jun 2011 09:20:34 +0200 (CEST)
To: linux-sh@vger.kernel.org
Cc: Dan Williams, Vinod Koul, "Rafael J. Wysocki"
Subject: [PATCH/RFC] dma: shdma: transfer based runtime PM

Currently the shdma dmaengine driver uses runtime PM to save power when no
channel on the specific controller is requested by a user. This patch
switches the driver to counting individual DMA transfers instead. That way
the controller can be powered down between transfers, even while some of
its channels are in use.

Signed-off-by: Guennadi Liakhovetski

---

I marked this an RFC, because it might make sense to first test it with
Rafael's upcoming power-domain code for sh-mobile before committing.
 drivers/dma/shdma.c |   28 ++++++++++++++++------------
 1 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
index 6eb8454..94d78f2 100644
--- a/drivers/dma/shdma.c
+++ b/drivers/dma/shdma.c
@@ -235,10 +235,22 @@ static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
 	struct sh_desc *desc = tx_to_sh_desc(tx), *chunk, *last = desc, *c;
 	struct sh_dmae_chan *sh_chan = to_sh_chan(tx->chan);
 	dma_async_tx_callback callback = tx->callback;
+	struct sh_dmae_slave *param = tx->chan->private;
 	dma_cookie_t cookie;
 
+	pm_runtime_get_sync(sh_chan->dev);
+
 	spin_lock_bh(&sh_chan->desc_lock);
 
+	if (param) {
+		const struct sh_dmae_slave_config *cfg = param->config;
+
+		dmae_set_dmars(sh_chan, cfg->mid_rid);
+		dmae_set_chcr(sh_chan, cfg->chcr);
+	} else {
+		dmae_init(sh_chan);
+	}
+
 	cookie = sh_chan->common.cookie;
 	cookie++;
 
 	if (cookie < 0)
@@ -319,8 +331,6 @@ static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
 	struct sh_dmae_slave *param = chan->private;
 	int ret;
 
-	pm_runtime_get_sync(sh_chan->dev);
-
 	/*
 	 * This relies on the guarantee from dmaengine that alloc_chan_resources
 	 * never runs concurrently with itself or free_chan_resources.
@@ -340,11 +350,6 @@ static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
 		}
 
 		param->config = cfg;
-
-		dmae_set_dmars(sh_chan, cfg->mid_rid);
-		dmae_set_chcr(sh_chan, cfg->chcr);
-	} else {
-		dmae_init(sh_chan);
 	}
 
 	spin_lock_bh(&sh_chan->desc_lock);
@@ -378,7 +383,6 @@ edescalloc:
 	clear_bit(param->slave_id, sh_dmae_slave_used);
 etestused:
 efindslave:
-	pm_runtime_put(sh_chan->dev);
 	return ret;
 }
 
@@ -390,7 +394,6 @@ static void sh_dmae_free_chan_resources(struct dma_chan *chan)
 	struct sh_dmae_chan *sh_chan = to_sh_chan(chan);
 	struct sh_desc *desc, *_desc;
 	LIST_HEAD(list);
-	int descs = sh_chan->descs_allocated;
 
 	/* Protect against ISR */
 	spin_lock_irq(&sh_chan->desc_lock);
@@ -417,9 +420,6 @@ static void sh_dmae_free_chan_resources(struct dma_chan *chan)
 
 	spin_unlock_bh(&sh_chan->desc_lock);
 
-	if (descs > 0)
-		pm_runtime_put(sh_chan->dev);
-
 	list_for_each_entry_safe(desc, _desc, &list, node)
 		kfree(desc);
 }
@@ -735,6 +735,9 @@ static dma_async_tx_callback __ld_cleanup(struct sh_dmae_chan *sh_chan, bool all
 		    async_tx_test_ack(&desc->async_tx)) || all) {
 			/* Remove from ld_queue list */
 			desc->mark = DESC_IDLE;
+
+			if (tx->cookie > 0)
+				pm_runtime_put(sh_chan->dev);
 			list_move(&desc->node, &sh_chan->ld_free);
 		}
 	}
@@ -894,6 +897,7 @@ static bool sh_dmae_reset(struct sh_dmae_device *shdev)
 			desc->mark = DESC_IDLE;
 			if (tx->callback)
 				tx->callback(tx->callback_param);
+			pm_runtime_put(sh_chan->dev);
 		}
 
 		spin_lock(&sh_chan->desc_lock);