From patchwork Fri Jan 13 16:17:31 2017
X-Patchwork-Submitter: Tony Lindgren
X-Patchwork-Id: 9515975
Date: Fri, 13 Jan 2017 08:17:31 -0800
From: Tony Lindgren
To: Grygorii Strashko
Cc: Dan Williams, Vinod Koul, Bin Liu, Daniel Mack, Felipe Balbi,
 George Cherian, Johan Hovold, Peter Ujfalusi, Sekhar Nori,
 Sebastian Andrzej Siewior, dmaengine@vger.kernel.org,
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org,
 Andy Shevchenko, Kevin Hilman, Patrick Titiano, Sergei Shtylyov
Subject: Re: [PATCH] dmaengine: cppi41: Fix oops in cppi41_runtime_resume
Message-ID: <20170113161731.GV2630@atomide.com>
References: <20170112213016.19367-1-tony@atomide.com>
 <927792da-2e90-b2ae-1206-8fcb504d7551@ti.com>
 <20170112221933.GM2630@atomide.com>
 <1c8967b7-d59b-e53d-feeb-80c71464fb94@ti.com>
 <20170112230456.GS2630@atomide.com>
 <20170113000314.GU2630@atomide.com>
In-Reply-To: <20170113000314.GU2630@atomide.com>
X-Mailing-List: dmaengine@vger.kernel.org

* Tony Lindgren [170112 16:04]:
> * Grygorii Strashko [170112 15:43]:
> > @@ -457,38 +449,36 @@ static void push_desc_queue(struct cppi41_channel *c)
> >  	cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
> >  }
> >  
> > -static void pending_desc(struct cppi41_channel *c)
> > +static void cppi41_dma_issue_pending(struct dma_chan *chan)
> >  {
> > +	struct cppi41_channel *c = to_cpp41_chan(chan);
> >  	struct cppi41_dd *cdd = c->cdd;
> > +	int error;
> > +	struct cppi41_channel *_c;
> >  	unsigned long flags;
> >  
> >  	spin_lock_irqsave(&cdd->lock, flags);
> >  	list_add_tail(&c->node, &cdd->pending);
> > -	spin_unlock_irqrestore(&cdd->lock, flags);
> > -}
> > -
> > -static void cppi41_dma_issue_pending(struct dma_chan *chan)
> > -{
> > -	struct cppi41_channel *c = to_cpp41_chan(chan);
> > -	struct cppi41_dd *cdd = c->cdd;
> > -	int error;
> >  
> >  	error = pm_runtime_get(cdd->ddev.dev);
> > -	if ((error != -EINPROGRESS) && error < 0) {
> > +	if (error < 0) {
> >  		pm_runtime_put_noidle(cdd->ddev.dev);
> >  		dev_err(cdd->ddev.dev, "Failed to pm_runtime_get: %i\n",
> >  			error);
> > -
> > +		spin_unlock_irqrestore(&cdd->lock, flags);
> >  		return;
> >  	}
> >  
> > -	if (likely(pm_runtime_active(cdd->ddev.dev)))
> > -		push_desc_queue(c);
> > -	else
> > -		pending_desc(c);
> > +	if (!cdd->is_suspended) {
> > +		list_for_each_entry_safe(c, _c, &cdd->pending, node) {
> > +			push_desc_queue(c);
> > +			list_del(&c->node);
> > +		};
> > +	}
> >  
> >  	pm_runtime_mark_last_busy(cdd->ddev.dev);
> >  	pm_runtime_put_autosuspend(cdd->ddev.dev);
> > +	spin_lock_irqsave(&cdd->lock, flags);
> >  }
>
> So always add to the queue no matter what, then always flush the queue
> directly if active? Yeah that might work :)
>
> Don't we need spinlock in the list_for_each_entry_safe() to prevent
> it racing with runtime_resume() though?

I could not apply this here, as it looks like your mail server replaced
the tabs with spaces :( But anyway, here's your basic idea plugged into
my recent patch, and it seems to work. I may have missed something while
reading, though, so please check.

The pm_runtime_get/mark_last_busy/put_autosuspend and the WARN_ON() we
want to keep in cppi41_irq(), at least for now :) And I'm thinking we
must also call cppi41_run_queue() with the spinlock held, to prevent out
of order triggering of the DMA transfers.

Does this look OK to you?

Regards,

Tony

8< -----------------------
diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
index d5ba43a87a68..6ee593eb2acb 100644
--- a/drivers/dma/cppi41.c
+++ b/drivers/dma/cppi41.c
@@ -153,6 +153,8 @@ struct cppi41_dd {
 
 	/* context for suspend/resume */
 	unsigned int dma_tdfdq;
+
+	bool is_suspended;
 };
 
 #define FIST_COMPLETION_QUEUE	93
@@ -316,11 +318,12 @@ static irqreturn_t cppi41_irq(int irq, void *data)
 		__iormb();
 
 		while (val) {
+			unsigned long flags;
 			u32 desc, len;
 			int error;
 
 			error = pm_runtime_get(cdd->ddev.dev);
-			if (error < 0)
+			if ((error != -EINPROGRESS) && error < 0)
 				dev_err(cdd->ddev.dev, "%s pm runtime get: %i\n",
 					__func__, error);
 
@@ -340,6 +343,11 @@ static irqreturn_t cppi41_irq(int irq, void *data)
 			else
 				len = pd_trans_len(c->desc->pd0);
 
+			/* This warning should never trigger */
+			spin_lock_irqsave(&cdd->lock, flags);
+			WARN_ON(cdd->is_suspended);
+			spin_unlock_irqrestore(&cdd->lock, flags);
+
 			c->residue = pd_trans_len(c->desc->pd6) - len;
 			dma_cookie_complete(&c->txd);
 			dmaengine_desc_get_callback_invoke(&c->txd, NULL);
@@ -457,20 +465,26 @@ static void push_desc_queue(struct cppi41_channel *c)
 	cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
 }
 
-static void pending_desc(struct cppi41_channel *c)
+/*
+ * Caller must hold cdd->lock to prevent push_desc_queue()
+ * getting called out of order. We have both cppi41_dma_issue_pending()
+ * and cppi41_runtime_resume() call this function.
+ */
+static void cppi41_run_queue(struct cppi41_dd *cdd)
 {
-	struct cppi41_dd *cdd = c->cdd;
-	unsigned long flags;
+	struct cppi41_channel *c, *_c;
 
-	spin_lock_irqsave(&cdd->lock, flags);
-	list_add_tail(&c->node, &cdd->pending);
-	spin_unlock_irqrestore(&cdd->lock, flags);
+	list_for_each_entry_safe(c, _c, &cdd->pending, node) {
+		push_desc_queue(c);
+		list_del(&c->node);
+	}
 }
 
 static void cppi41_dma_issue_pending(struct dma_chan *chan)
 {
 	struct cppi41_channel *c = to_cpp41_chan(chan);
 	struct cppi41_dd *cdd = c->cdd;
+	unsigned long flags;
 	int error;
 
 	error = pm_runtime_get(cdd->ddev.dev);
@@ -482,10 +496,11 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
 		return;
 	}
 
-	if (likely(pm_runtime_active(cdd->ddev.dev)))
-		push_desc_queue(c);
-	else
-		pending_desc(c);
+	spin_lock_irqsave(&cdd->lock, flags);
+	list_add_tail(&c->node, &cdd->pending);
+	if (!cdd->is_suspended)
+		cppi41_run_queue(cdd);
+	spin_unlock_irqrestore(&cdd->lock, flags);
 
 	pm_runtime_mark_last_busy(cdd->ddev.dev);
 	pm_runtime_put_autosuspend(cdd->ddev.dev);
@@ -1150,6 +1165,11 @@ static int __maybe_unused cppi41_resume(struct device *dev)
 static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
 {
 	struct cppi41_dd *cdd = dev_get_drvdata(dev);
+	unsigned long flags;
+
+	spin_lock_irqsave(&cdd->lock, flags);
+	cdd->is_suspended = true;
+	spin_unlock_irqrestore(&cdd->lock, flags);
 
 	WARN_ON(!list_empty(&cdd->pending));
 
@@ -1159,14 +1179,11 @@ static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
 static int __maybe_unused cppi41_runtime_resume(struct device *dev)
 {
 	struct cppi41_dd *cdd = dev_get_drvdata(dev);
-	struct cppi41_channel *c, *_c;
 	unsigned long flags;
 
 	spin_lock_irqsave(&cdd->lock, flags);
-	list_for_each_entry_safe(c, _c, &cdd->pending, node) {
-		push_desc_queue(c);
-		list_del(&c->node);
-	}
+	cdd->is_suspended = false;
+	cppi41_run_queue(cdd);
 	spin_unlock_irqrestore(&cdd->lock, flags);
 
 	return 0;