From patchwork Mon Sep 18 11:08:01 2017
X-Patchwork-Submitter: Sylvain Lesne
X-Patchwork-Id: 9956451
From: Sylvain Lesne
To: dmaengine@vger.kernel.org
Cc: vinod.koul@intel.com, sr@denx.de
Subject: [RFC PATCH 2/2] dmaengine: altera: fix spinlock usage
Date: Mon, 18 Sep 2017 13:08:01 +0200
Message-Id: <1505732881-7484-3-git-send-email-lesne@alse-fr.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1505732881-7484-1-git-send-email-lesne@alse-fr.com>
References: <1505732881-7484-1-git-send-email-lesne@alse-fr.com>

Since mdev->lock is acquired in both process and IRQ context, failing
to disable IRQs when acquiring it in process context can lead to
deadlocks.
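To make the failure mode concrete, here is a minimal sketch of the
deadlock (not part of the patch; my_dev, my_irq_handler and
my_issue_pending are hypothetical stand-ins for the msgdma
equivalents):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct my_dev {
	spinlock_t lock;
};

/* Hard IRQ context: takes dev->lock, as the msgdma IRQ handler does. */
static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_dev *dev = data;

	spin_lock(&dev->lock);
	/* ... handle completions ... */
	spin_unlock(&dev->lock);
	return IRQ_HANDLED;
}

/* Process context. */
static void my_issue_pending(struct my_dev *dev)
{
	unsigned long flags;

	/*
	 * spin_lock_bh() only disables softirqs. If the hard IRQ fires
	 * on this CPU while the lock is held here, my_irq_handler()
	 * spins on a lock its own CPU already owns: deadlock.
	 *
	 * spin_lock_irqsave() also disables local IRQs, so the handler
	 * cannot preempt this critical section.
	 */
	spin_lock_irqsave(&dev->lock, flags);
	/* ... start transfer ... */
	spin_unlock_irqrestore(&dev->lock, flags);
}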
Signed-off-by: Sylvain Lesne
Reviewed-by: Stefan Roese
---
I'm not sure that this is the "smartest" fix, but I think something
should be done to fix the spinlock usage in this driver (lockdep
agrees!).
---
 drivers/dma/altera-msgdma.c | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/altera-msgdma.c b/drivers/dma/altera-msgdma.c
index 35cbf2365f68..339186f25a2a 100644
--- a/drivers/dma/altera-msgdma.c
+++ b/drivers/dma/altera-msgdma.c
@@ -212,11 +212,12 @@ struct msgdma_device {
 static struct msgdma_sw_desc *msgdma_get_descriptor(struct msgdma_device *mdev)
 {
 	struct msgdma_sw_desc *desc;
+	unsigned long flags;
 
-	spin_lock_bh(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, flags);
 	desc = list_first_entry(&mdev->free_list, struct msgdma_sw_desc, node);
 	list_del(&desc->node);
-	spin_unlock_bh(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, flags);
 
 	INIT_LIST_HEAD(&desc->tx_list);
 
@@ -306,13 +307,14 @@ static dma_cookie_t msgdma_tx_submit(struct dma_async_tx_descriptor *tx)
 	struct msgdma_device *mdev = to_mdev(tx->chan);
 	struct msgdma_sw_desc *new;
 	dma_cookie_t cookie;
+	unsigned long flags;
 
 	new = tx_to_desc(tx);
-	spin_lock_bh(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, flags);
 	cookie = dma_cookie_assign(tx);
 
 	list_add_tail(&new->node, &mdev->pending_list);
-	spin_unlock_bh(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, flags);
 
 	return cookie;
 }
@@ -336,17 +338,18 @@ msgdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
 	struct msgdma_extended_desc *desc;
 	size_t copy;
 	u32 desc_cnt;
+	unsigned long irqflags;
 
 	desc_cnt = DIV_ROUND_UP(len, MSGDMA_MAX_TRANS_LEN);
 
-	spin_lock_bh(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, irqflags);
 	if (desc_cnt > mdev->desc_free_cnt) {
-		spin_unlock_bh(&mdev->lock);
+		spin_unlock_irqrestore(&mdev->lock, irqflags);
 		dev_dbg(mdev->dev, "mdev %p descs are not available\n", mdev);
 		return NULL;
 	}
 	mdev->desc_free_cnt -= desc_cnt;
-	spin_unlock_bh(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, irqflags);
 
 	do {
 		/* Allocate and populate the descriptor */
@@ -397,18 +400,19 @@ msgdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 	u32 desc_cnt = 0, i;
 	struct scatterlist *sg;
 	u32 stride;
+	unsigned long irqflags;
 
 	for_each_sg(sgl, sg, sg_len, i)
 		desc_cnt += DIV_ROUND_UP(sg_dma_len(sg), MSGDMA_MAX_TRANS_LEN);
 
-	spin_lock_bh(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, irqflags);
 	if (desc_cnt > mdev->desc_free_cnt) {
-		spin_unlock_bh(&mdev->lock);
+		spin_unlock_irqrestore(&mdev->lock, irqflags);
 		dev_dbg(mdev->dev, "mdev %p descs are not available\n", mdev);
 		return NULL;
 	}
 	mdev->desc_free_cnt -= desc_cnt;
-	spin_unlock_bh(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, irqflags);
 
 	avail = sg_dma_len(sgl);
 
@@ -566,10 +570,11 @@ static void msgdma_start_transfer(struct msgdma_device *mdev)
 static void msgdma_issue_pending(struct dma_chan *chan)
 {
 	struct msgdma_device *mdev = to_mdev(chan);
+	unsigned long flags;
 
-	spin_lock_bh(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, flags);
 	msgdma_start_transfer(mdev);
-	spin_unlock_bh(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, flags);
 }
 
 /**
@@ -634,10 +639,11 @@ static void msgdma_free_descriptors(struct msgdma_device *mdev)
 static void msgdma_free_chan_resources(struct dma_chan *dchan)
 {
 	struct msgdma_device *mdev = to_mdev(dchan);
+	unsigned long flags;
 
-	spin_lock_bh(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, flags);
 	msgdma_free_descriptors(mdev);
-	spin_unlock_bh(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, flags);
 
 	kfree(mdev->sw_desq);
 }
@@ -682,8 +688,9 @@ static void msgdma_tasklet(unsigned long data)
 	u32 count;
 	u32 __maybe_unused size;
 	u32 __maybe_unused status;
+	unsigned long flags;
 
-	spin_lock(&mdev->lock);
+	spin_lock_irqsave(&mdev->lock, flags);
 
 	/* Read number of responses that are available */
 	count = ioread32(mdev->csr + MSGDMA_CSR_RESP_FILL_LEVEL);
@@ -704,7 +711,7 @@ static void msgdma_tasklet(unsigned long data)
 		msgdma_chan_desc_cleanup(mdev);
 	}
 
-	spin_unlock(&mdev->lock);
+	spin_unlock_irqrestore(&mdev->lock, flags);
 }
 
 /**
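A closing note on the prep functions above: once a critical section is
entered with spin_lock_irqsave(), every exit path, including early
error returns, must leave it via spin_unlock_irqrestore() with the same
flags value. A minimal sketch of that early-return pattern follows
(my_reserve_descs and its fields are hypothetical, modeled on
msgdma_prep_memcpy()):

#include <linux/errno.h>
#include <linux/spinlock.h>

struct my_dev {
	spinlock_t lock;
	unsigned int desc_free_cnt;
};

static int my_reserve_descs(struct my_dev *dev, unsigned int cnt)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	if (cnt > dev->desc_free_cnt) {
		/*
		 * The error path must use the matching unlock variant
		 * and the same flags: a spin_unlock_bh() here would
		 * leave local IRQs disabled after we return.
		 */
		spin_unlock_irqrestore(&dev->lock, flags);
		return -ENOMEM;
	}
	dev->desc_free_cnt -= cnt;
	spin_unlock_irqrestore(&dev->lock, flags);
	return 0;
}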