From patchwork Thu Apr 16 13:39:14 2015
X-Patchwork-Submitter: Robert Baldyga
X-Patchwork-Id: 6225951
From: Robert Baldyga
To: linux@arm.linux.org.uk, dan.j.williams@intel.com, vinod.koul@intel.com
Cc: lars@metafoo.de, dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
	m.szyprowski@samsung.com, Robert Baldyga
Subject: [PATCH 3/3] dmaengine: pl330: get rid of pm_runtime_irq_safe()
Date: Thu, 16 Apr 2015 15:39:14 +0200
Message-id: <1429191554-24972-4-git-send-email-r.baldyga@samsung.com>
In-reply-to: <1429191554-24972-1-git-send-email-r.baldyga@samsung.com>
References: <1429191554-24972-1-git-send-email-r.baldyga@samsung.com>
X-Mailing-List: dmaengine@vger.kernel.org

Using pm_runtime_irq_safe() causes the power domain to stay enabled at
all times, so we want to get rid of it to achieve better power
efficiency. For this purpose we call pm_runtime_get()/pm_runtime_put()
in the pl330_pm_get()/pl330_pm_put() functions, which are called when a
channel client wants to use a DMA channel. With pm_runtime_irq_safe()
in place, the only action performed by pm_runtime_get()/pm_runtime_put()
was enabling and disabling the AHB clock, so now we do that manually to
avoid calling pm_runtime functions in atomic context. We also set the
no_pm_pclk_management flag in the amba_driver to prevent the bus driver
from touching pclk in its runtime PM callbacks. As a result we manage
the AHB clock as we did before, and in addition the power domain can be
disabled when no client is using a DMA channel.

Signed-off-by: Robert Baldyga
---
 drivers/dma/pl330.c | 107 ++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 82 insertions(+), 25 deletions(-)

diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index 0e1f567..5f9b867c 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -266,9 +266,6 @@ static unsigned cmd_line;
 
 #define NR_DEFAULT_DESC	16
 
-/* Delay for runtime PM autosuspend, ms */
-#define PL330_AUTOSUSPEND_DELAY 20
-
 /* Populated by the PL330 core driver for DMA API driver's info */
 struct pl330_config {
 	u32	periph_id;
@@ -484,6 +481,8 @@ struct pl330_dmac {
 	enum pl330_dmac_state	state;
 	/* Holds list of reqs with due callbacks */
 	struct list_head        req_done;
+	/* Refcount for AMBA clock management */
+	unsigned int pclk_refcnt;
 
 	/* Peripheral channels connected to this DMAC */
 	unsigned int num_peripherals;
@@ -548,6 +547,30 @@ static inline u32 get_revision(u32 periph_id)
 	return (periph_id >> PERIPH_REV_SHIFT) & PERIPH_REV_MASK;
 }
 
+static inline int pl330_pclk_enable(struct pl330_dmac *pl330)
+{
+	struct amba_device *pcdev = to_amba_device(pl330->ddma.dev);
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&pl330->lock, flags);
+	ret = pl330->pclk_refcnt++ ? 0 : amba_pclk_enable(pcdev);
+	spin_unlock_irqrestore(&pl330->lock, flags);
+
+	return ret;
+}
+
+static inline void pl330_pclk_disable(struct pl330_dmac *pl330)
+{
+	struct amba_device *pcdev = to_amba_device(pl330->ddma.dev);
+	unsigned long flags;
+
+	spin_lock_irqsave(&pl330->lock, flags);
+	if (--pl330->pclk_refcnt == 0)
+		amba_pclk_disable(pcdev);
+	spin_unlock_irqrestore(&pl330->lock, flags);
+}
+
 static inline u32 _emit_ADDH(unsigned dry_run, u8 buf[],
 		enum pl330_dst da, u16 val)
 {
@@ -2032,11 +2055,9 @@ static void pl330_tasklet(unsigned long data)
 	}
 	spin_unlock_irqrestore(&pch->lock, flags);
 
-	/* If work list empty, power down */
-	if (power_down) {
-		pm_runtime_mark_last_busy(pch->dmac->ddma.dev);
-		pm_runtime_put_autosuspend(pch->dmac->ddma.dev);
-	}
+	/* If work list empty, disable clock */
+	if (power_down)
+		pl330_pclk_disable(pch->dmac);
 }
 
 bool pl330_filter(struct dma_chan *chan, void *param)
@@ -2155,6 +2176,21 @@ static int pl330_terminate_all(struct dma_chan *chan)
 	return 0;
 }
 
+static int pl330_pm_get(struct dma_chan *chan)
+{
+	struct dma_pl330_chan *pch = to_pchan(chan);
+
+	return pm_runtime_get_sync(pch->dmac->ddma.dev);
+}
+
+static int pl330_pm_put(struct dma_chan *chan)
+{
+	struct dma_pl330_chan *pch = to_pchan(chan);
+
+	pm_runtime_mark_last_busy(pch->dmac->ddma.dev);
+	return pm_runtime_put_autosuspend(pch->dmac->ddma.dev);
+}
+
 /*
  * We don't support DMA_RESUME command because of hardware
  * limitations, so after pausing the channel we cannot restore
@@ -2167,8 +2203,12 @@ int pl330_pause(struct dma_chan *chan)
 	struct dma_pl330_chan *pch = to_pchan(chan);
 	struct pl330_dmac *pl330 = pch->dmac;
 	unsigned long flags;
+	int ret;
+
+	ret = pl330_pclk_enable(pl330);
+	if (ret < 0)
+		return ret;
 
-	pm_runtime_get_sync(pl330->ddma.dev);
 	spin_lock_irqsave(&pch->lock, flags);
 	spin_lock(&pl330->lock);
@@ -2176,8 +2216,7 @@ int pl330_pause(struct dma_chan *chan)
 	spin_unlock(&pl330->lock);
 	spin_unlock_irqrestore(&pch->lock, flags);
 
-	pm_runtime_mark_last_busy(pl330->ddma.dev);
-	pm_runtime_put_autosuspend(pl330->ddma.dev);
+	pl330_pclk_disable(pl330);
 
 	return 0;
 }
@@ -2186,10 +2225,15 @@ static void pl330_free_chan_resources(struct dma_chan *chan)
 {
 	struct dma_pl330_chan *pch = to_pchan(chan);
 	unsigned long flags;
+	int ret;
 
 	tasklet_kill(&pch->task);
 
 	pm_runtime_get_sync(pch->dmac->ddma.dev);
+	ret = pl330_pclk_enable(pch->dmac);
+	if (ret < 0)
+		return;
+
 	spin_lock_irqsave(&pch->lock, flags);
 
 	pl330_release_channel(pch->thread);
@@ -2199,6 +2243,7 @@ static void pl330_free_chan_resources(struct dma_chan *chan)
 	list_splice_tail_init(&pch->work_list, &pch->dmac->desc_pool);
 	spin_unlock_irqrestore(&pch->lock, flags);
 
+	pl330_pclk_disable(pch->dmac);
 	pm_runtime_mark_last_busy(pch->dmac->ddma.dev);
 	pm_runtime_put_autosuspend(pch->dmac->ddma.dev);
 }
@@ -2207,11 +2252,14 @@ int pl330_get_current_xferred_count(struct dma_pl330_chan *pch,
 		struct dma_pl330_desc *desc)
 {
 	struct pl330_thread *thrd = pch->thread;
-	struct pl330_dmac *pl330 = pch->dmac;
 	void __iomem *regs = thrd->dmac->base;
 	u32 val, addr;
+	int ret;
+
+	ret = pl330_pclk_enable(pch->dmac);
+	if (ret < 0)
+		return ret;
 
-	pm_runtime_get_sync(pl330->ddma.dev);
 	val = addr = 0;
 	if (desc->rqcfg.src_inc) {
 		val = readl(regs + SA(thrd->id));
@@ -2220,8 +2268,8 @@ int pl330_get_current_xferred_count(struct dma_pl330_chan *pch,
 		val = readl(regs + DA(thrd->id));
 		addr = desc->px.dst_addr;
 	}
-	pm_runtime_mark_last_busy(pch->dmac->ddma.dev);
-	pm_runtime_put_autosuspend(pl330->ddma.dev);
+
+	pl330_pclk_disable(pch->dmac);
 	return val - addr;
 }
 
@@ -2277,16 +2325,19 @@ static void pl330_issue_pending(struct dma_chan *chan)
 {
 	struct dma_pl330_chan *pch = to_pchan(chan);
 	unsigned long flags;
+	int ret;
 
 	spin_lock_irqsave(&pch->lock, flags);
 	if (list_empty(&pch->work_list)) {
 		/*
 		 * Warn on nothing pending. Empty submitted_list may
-		 * break our pm_runtime usage counter as it is
-		 * updated on work_list emptiness status.
+		 * break our clock usage counter as it is updated on
+		 * work_list emptiness status.
 		 */
 		WARN_ON(list_empty(&pch->submitted_list));
-		pm_runtime_get_sync(pch->dmac->ddma.dev);
+		ret = pl330_pclk_enable(pch->dmac);
+		if (ret < 0)
+			return;
 	}
 	list_splice_tail_init(&pch->submitted_list, &pch->work_list);
 	spin_unlock_irqrestore(&pch->lock, flags);
@@ -2720,13 +2771,13 @@ static irqreturn_t pl330_irq_handler(int irq, void *data)
 static int __maybe_unused pl330_suspend(struct device *dev)
 {
 	struct amba_device *pcdev = to_amba_device(dev);
+	struct pl330_dmac *pl330 = amba_get_drvdata(pcdev);
 
 	pm_runtime_disable(dev);
 
-	if (!pm_runtime_status_suspended(dev)) {
-		/* amba did not disable the clock */
+	if (pl330->pclk_refcnt)
 		amba_pclk_disable(pcdev);
-	}
+
 	amba_pclk_unprepare(pcdev);
 
 	return 0;
@@ -2735,14 +2786,18 @@ static int __maybe_unused pl330_suspend(struct device *dev)
 static int __maybe_unused pl330_resume(struct device *dev)
 {
 	struct amba_device *pcdev = to_amba_device(dev);
+	struct pl330_dmac *pl330 = amba_get_drvdata(pcdev);
 	int ret;
 
 	ret = amba_pclk_prepare(pcdev);
 	if (ret)
 		return ret;
 
-	if (!pm_runtime_status_suspended(dev))
+	if (pl330->pclk_refcnt) {
 		ret = amba_pclk_enable(pcdev);
+		if (ret)
+			return ret;
+	}
 
 	pm_runtime_enable(dev);
 
@@ -2815,6 +2870,8 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	if (!add_desc(pl330, GFP_KERNEL, NR_DEFAULT_DESC))
 		dev_warn(&adev->dev, "unable to allocate desc\n");
 
+	amba_pclk_disable(adev);
+
 	INIT_LIST_HEAD(&pd->channels);
 
 	/* Initialize channel parameters */
@@ -2869,6 +2926,8 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	pd->device_tx_status = pl330_tx_status;
 	pd->device_prep_slave_sg = pl330_prep_slave_sg;
 	pd->device_config = pl330_config;
+	pd->device_pm_get = pl330_pm_get;
+	pd->device_pm_put = pl330_pm_put;
 	pd->device_pause = pl330_pause;
 	pd->device_terminate_all = pl330_terminate_all;
 	pd->device_issue_pending = pl330_issue_pending;
@@ -2902,7 +2961,6 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	if (ret)
 		dev_err(&adev->dev, "unable to set the seg size\n");
 
-
 	dev_info(&adev->dev,
 		"Loaded driver for PL330 DMAC-%x\n", adev->periphid);
 	dev_info(&adev->dev,
@@ -2910,9 +2968,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 		pcfg->data_buf_dep, pcfg->data_bus_width / 8, pcfg->num_chan,
 		pcfg->num_peri, pcfg->num_events);
 
-	pm_runtime_irq_safe(&adev->dev);
 	pm_runtime_use_autosuspend(&adev->dev);
-	pm_runtime_set_autosuspend_delay(&adev->dev, PL330_AUTOSUSPEND_DELAY);
 	pm_runtime_mark_last_busy(&adev->dev);
 	pm_runtime_put_autosuspend(&adev->dev);
 
@@ -2987,6 +3043,7 @@ static struct amba_driver pl330_driver = {
 	.id_table = pl330_ids,
 	.probe = pl330_probe,
 	.remove = pl330_remove,
+	.no_pm_pclk_management = true,
 };
 
 module_amba_driver(pl330_driver);
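
For reference, below is a minimal sketch of how a channel client might
drive the new device_pm_get()/device_pm_put() hooks wired up above. It
is not part of the patch: the direct calls through chan->device and the
example_* helper names are illustrative assumptions, since the
client-facing entry points come from patch 1/3 of this series.

#include <linux/dmaengine.h>

/* Resume the DMAC before a client starts queueing work on a channel. */
static int example_dma_start(struct dma_chan *chan)
{
	struct dma_device *dmadev = chan->device;
	int ret;

	/* May trigger runtime resume and sleep, so only call from process context */
	if (dmadev->device_pm_get) {
		ret = dmadev->device_pm_get(chan);
		if (ret < 0)
			return ret;
	}

	/* ... prep descriptors, dmaengine_submit(), dma_async_issue_pending() ... */

	return 0;
}

/* Drop the reference once the client is done with the channel. */
static void example_dma_stop(struct dma_chan *chan)
{
	struct dma_device *dmadev = chan->device;

	/* ... wait for completion or dmaengine_terminate_all(chan) ... */

	/* Lets the DMAC (and its power domain) runtime-suspend again */
	if (dmadev->device_pm_put)
		dmadev->device_pm_put(chan);
}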