From patchwork Wed May 20 03:46:19 2015
X-Patchwork-Submitter: Kuninori Morimoto
X-Patchwork-Id: 6441941
Message-ID: <871tib52vy.wl%kuninori.morimoto.gx@renesas.com>
From: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Subject: [PATCH 1/4] dmaengine: rcar-dmac: fixup spinlock in rcar-dmac
To: Vinod Koul, Mark Brown
CC: Simon, Nguyen Viet Dung, Magnus, Linux-SH, Linux-ALSA, Liam Girdwood,
    Laurent, Geert Uytterhoeven, Cao Minh Hiep, sakato
In-Reply-To: <87382r52x2.wl%kuninori.morimoto.gx@renesas.com>
References: <87382r52x2.wl%kuninori.morimoto.gx@renesas.com>
Date: Wed, 20 May 2015 03:46:19 +0000
X-Mailing-List: dmaengine@vger.kernel.org
The rcar-dmac driver currently uses spin_lock_irq()/spin_unlock_irq() in
several functions. However, other drivers may call the DMAEngine API with
interrupts already disabled. In that case, the spin_unlock_irq() inside
rcar-dmac forcibly re-enables all interrupts, so the calling driver receives
unexpected interrupts and its exclusive access control is broken. This patch
replaces spin_lock_irq() with spin_lock_irqsave() and spin_unlock_irq() with
spin_unlock_irqrestore().

Reported-by: Cao Minh Hiep
Signed-off-by: Kuninori Morimoto
Tested-by: Keita Kobayashi
---
 drivers/dma/sh/rcar-dmac.c | 55 ++++++++++++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
index a18d16c..6a5d4b9 100644
--- a/drivers/dma/sh/rcar-dmac.c
+++ b/drivers/dma/sh/rcar-dmac.c
@@ -465,6 +465,7 @@ static dma_cookie_t rcar_dmac_tx_submit(struct dma_async_tx_descriptor *tx)
 static int rcar_dmac_desc_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 {
 	struct rcar_dmac_desc_page *page;
+	unsigned long flags;
 	LIST_HEAD(list);
 	unsigned int i;

@@ -482,10 +483,10 @@ static int rcar_dmac_desc_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 		list_add_tail(&desc->node, &list);
 	}

-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_tail(&list, &chan->desc.free);
 	list_add_tail(&page->node, &chan->desc.pages);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);

 	return 0;
 }
@@ -516,6 +517,7 @@ static void rcar_dmac_desc_put(struct rcar_dmac_chan *chan,
 static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 {
 	struct rcar_dmac_desc *desc, *_desc;
+	unsigned long flags;
 	LIST_HEAD(list);

 	/*
@@ -524,9 +526,9 @@ static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 	 * list_for_each_entry_safe, isn't safe if we release the channel lock
 	 * around the rcar_dmac_desc_put() call.
 	 */
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_init(&chan->desc.wait, &list);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);

 	list_for_each_entry_safe(desc, _desc, &list, node) {
 		if (async_tx_test_ack(&desc->async_tx)) {
@@ -539,9 +541,9 @@ static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 		return;

 	/* Put the remaining descriptors back in the wait list. */
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice(&list, &chan->desc.wait);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 }

 /*
@@ -556,12 +558,13 @@ static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 static struct rcar_dmac_desc *rcar_dmac_desc_get(struct rcar_dmac_chan *chan)
 {
 	struct rcar_dmac_desc *desc;
+	unsigned long flags;
 	int ret;

 	/* Recycle acked descriptors before attempting allocation. */
 	rcar_dmac_desc_recycle_acked(chan);

-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);

 	while (list_empty(&chan->desc.free)) {
 		/*
@@ -570,17 +573,17 @@ static struct rcar_dmac_desc *rcar_dmac_desc_get(struct rcar_dmac_chan *chan)
 		 * allocated descriptors. If the allocation fails return an
 		 * error.
 		 */
-		spin_unlock_irq(&chan->lock);
+		spin_unlock_irqrestore(&chan->lock, flags);
 		ret = rcar_dmac_desc_alloc(chan, GFP_NOWAIT);
 		if (ret < 0)
 			return NULL;
-		spin_lock_irq(&chan->lock);
+		spin_lock_irqsave(&chan->lock, flags);
 	}

 	desc = list_first_entry(&chan->desc.free, struct rcar_dmac_desc, node);
 	list_del(&desc->node);

-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);

 	return desc;
 }
@@ -593,6 +596,7 @@ static struct rcar_dmac_desc *rcar_dmac_desc_get(struct rcar_dmac_chan *chan)
 static int rcar_dmac_xfer_chunk_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 {
 	struct rcar_dmac_desc_page *page;
+	unsigned long flags;
 	LIST_HEAD(list);
 	unsigned int i;

@@ -606,10 +610,10 @@ static int rcar_dmac_xfer_chunk_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 		list_add_tail(&chunk->node, &list);
 	}

-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_tail(&list, &chan->desc.chunks_free);
 	list_add_tail(&page->node, &chan->desc.pages);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);

 	return 0;
 }
@@ -627,9 +631,10 @@ static struct rcar_dmac_xfer_chunk *
 rcar_dmac_xfer_chunk_get(struct rcar_dmac_chan *chan)
 {
 	struct rcar_dmac_xfer_chunk *chunk;
+	unsigned long flags;
 	int ret;

-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);

 	while (list_empty(&chan->desc.chunks_free)) {
 		/*
@@ -638,18 +643,18 @@ rcar_dmac_xfer_chunk_get(struct rcar_dmac_chan *chan)
 		 * allocated descriptors. If the allocation fails return an
 		 * error.
 		 */
-		spin_unlock_irq(&chan->lock);
+		spin_unlock_irqrestore(&chan->lock, flags);
 		ret = rcar_dmac_xfer_chunk_alloc(chan, GFP_NOWAIT);
 		if (ret < 0)
 			return NULL;
-		spin_lock_irq(&chan->lock);
+		spin_lock_irqsave(&chan->lock, flags);
 	}

 	chunk = list_first_entry(&chan->desc.chunks_free,
 				 struct rcar_dmac_xfer_chunk, node);
 	list_del(&chunk->node);

-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);

 	return chunk;
 }
@@ -964,12 +969,13 @@ static void rcar_dmac_free_chan_resources(struct dma_chan *chan)
 	struct rcar_dmac *dmac = to_rcar_dmac(chan->device);
 	struct rcar_dmac_desc_page *page, *_page;
 	struct rcar_dmac_desc *desc;
+	unsigned long flags;
 	LIST_HEAD(list);

 	/* Protect against ISR */
-	spin_lock_irq(&rchan->lock);
+	spin_lock_irqsave(&rchan->lock, flags);
 	rcar_dmac_chan_halt(rchan);
-	spin_unlock_irq(&rchan->lock);
+	spin_unlock_irqrestore(&rchan->lock, flags);

 	/* Now no new interrupts will occur */

@@ -1351,8 +1357,9 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 {
 	struct rcar_dmac_chan *chan = dev;
 	struct rcar_dmac_desc *desc;
+	unsigned long flags;

-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);

 	/* For cyclic transfers notify the user after every chunk. */
 	if (chan->desc.running && chan->desc.running->cyclic) {
@@ -1364,9 +1371,9 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 		callback_param = desc->async_tx.callback_param;

 		if (callback) {
-			spin_unlock_irq(&chan->lock);
+			spin_unlock_irqrestore(&chan->lock, flags);
 			callback(callback_param);
-			spin_lock_irq(&chan->lock);
+			spin_lock_irqsave(&chan->lock, flags);
 		}
 	}

@@ -1381,20 +1388,20 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 		list_del(&desc->node);

 		if (desc->async_tx.callback) {
-			spin_unlock_irq(&chan->lock);
+			spin_unlock_irqrestore(&chan->lock, flags);
 			/*
 			 * We own the only reference to this descriptor, we can
 			 * safely dereference it without holding the channel
 			 * lock.
 			 */
 			desc->async_tx.callback(desc->async_tx.callback_param);
-			spin_lock_irq(&chan->lock);
+			spin_lock_irqsave(&chan->lock, flags);
 		}

 		list_add_tail(&desc->node, &chan->desc.wait);
 	}

-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);

 	/* Recycle all acked descriptors. */
 	rcar_dmac_desc_recycle_acked(chan);
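
As background, the failure mode the patch addresses can be sketched as
follows. This is an illustration only, not code from the patch: struct
client_dev and client_trigger() are hypothetical names, standing in for any
driver that calls the DMAEngine API with interrupts already disabled.

#include <linux/spinlock.h>
#include <linux/dmaengine.h>

/* Hypothetical client driver state, for illustration only. */
struct client_dev {
	spinlock_t lock;
	struct dma_async_tx_descriptor *dma_desc;
};

static void client_trigger(struct client_dev *dev)
{
	unsigned long flags;

	/* The client disables interrupts for its own critical section. */
	spin_lock_irqsave(&dev->lock, flags);

	/*
	 * dmaengine_submit() calls into rcar-dmac, which takes and releases
	 * chan->lock internally. Before this patch, a spin_unlock_irq() on
	 * that path would unconditionally re-enable interrupts on this CPU,
	 * even though dev->lock still expects them to be off.
	 */
	dmaengine_submit(dev->dma_desc);

	/* With irqsave/irqrestore in rcar-dmac, interrupts stay off here. */
	spin_unlock_irqrestore(&dev->lock, flags);
}

spin_lock_irqsave() records the caller's interrupt state in 'flags', and
spin_unlock_irqrestore() restores exactly that state instead of
unconditionally enabling interrupts, which is why the converted locking is
safe regardless of the context the DMAEngine API is called from.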