From patchwork Thu Feb 5 02:27:37 2015
X-Patchwork-Submitter: Kuninori Morimoto
X-Patchwork-Id: 5780791
Message-ID: <87sielhz1i.wl%kuninori.morimoto.gx@renesas.com>
From: Kuninori Morimoto
Subject: [PATCH] dmaengine: rcar-dmac: fixup spinlock in rcar-dmac
To: Laurent Pinchart, Vinod Koul
CC: Lars-Peter Clausen, Simon, Magnus, Linux-SH
Date: Thu, 5 Feb 2015 02:27:37 +0000
X-Mailing-List: dmaengine@vger.kernel.org

From: Jun Watanabe

The rcar-dmac driver currently uses spin_lock_irq() / spin_unlock_irq()
in several functions. However, another driver may call the DMAEngine API
while it has interrupts disabled. In that case, spin_unlock_irq() on the
rcar-dmac side forcibly re-enables all interrupts, so the calling driver
receives unexpected interrupts and its exclusive access control is
broken.

This patch replaces spin_lock_irq() with spin_lock_irqsave(), and
spin_unlock_irq() with spin_unlock_irqrestore().

Signed-off-by: Jun Watanabe
Signed-off-by: Hiroyuki Yokoyama
Signed-off-by: Kuninori Morimoto
---
 drivers/dma/sh/rcar-dmac.c | 61 +++++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 26 deletions(-)

diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
index 29dd09a..6bc9941 100644
--- a/drivers/dma/sh/rcar-dmac.c
+++ b/drivers/dma/sh/rcar-dmac.c
@@ -453,6 +453,7 @@ static int rcar_dmac_desc_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 	struct rcar_dmac_desc_page *page;
 	LIST_HEAD(list);
 	unsigned int i;
+	unsigned long flags;
 
 	page = (void *)get_zeroed_page(gfp);
 	if (!page)
@@ -468,10 +469,10 @@ static int rcar_dmac_desc_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 		list_add_tail(&desc->node, &list);
 	}
 
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_tail(&list, &chan->desc.free);
 	list_add_tail(&page->node, &chan->desc.pages);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return 0;
 }
@@ -493,15 +494,18 @@ static int rcar_dmac_desc_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 static void rcar_dmac_desc_put(struct rcar_dmac_chan *chan,
 			       struct rcar_dmac_desc *desc)
 {
-	spin_lock_irq(&chan->lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_tail_init(&desc->chunks, &chan->desc.chunks_free);
 	list_add_tail(&desc->node, &chan->desc.free);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 }
 
 static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 {
 	struct rcar_dmac_desc *desc, *_desc;
+	unsigned long flags;
 	LIST_HEAD(list);
 
 	/*
@@ -510,9 +514,9 @@ static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 	 * list_for_each_entry_safe, isn't safe if we release the channel lock
 	 * around the rcar_dmac_desc_put() call.
 	 */
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_init(&chan->desc.wait, &list);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	list_for_each_entry_safe(desc, _desc, &list, node) {
 		if (async_tx_test_ack(&desc->async_tx)) {
@@ -525,9 +529,9 @@ static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 		return;
 
 	/* Put the remaining descriptors back in the wait list. */
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice(&list, &chan->desc.wait);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 }
 
 /*
@@ -542,12 +546,13 @@ static void rcar_dmac_desc_recycle_acked(struct rcar_dmac_chan *chan)
 static struct rcar_dmac_desc *rcar_dmac_desc_get(struct rcar_dmac_chan *chan)
 {
 	struct rcar_dmac_desc *desc;
+	unsigned long flags;
 	int ret;
 
 	/* Recycle acked descriptors before attempting allocation. */
 	rcar_dmac_desc_recycle_acked(chan);
 
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 
 	do {
 		if (list_empty(&chan->desc.free)) {
@@ -557,11 +562,11 @@ static struct rcar_dmac_desc *rcar_dmac_desc_get(struct rcar_dmac_chan *chan)
 			 * the newly allocated descriptors. If the allocation
 			 * fails return an error.
 			 */
-			spin_unlock_irq(&chan->lock);
+			spin_unlock_irqrestore(&chan->lock, flags);
 			ret = rcar_dmac_desc_alloc(chan, GFP_NOWAIT);
 			if (ret < 0)
 				return NULL;
-			spin_lock_irq(&chan->lock);
+			spin_lock_irqsave(&chan->lock, flags);
 			continue;
 		}
 
@@ -570,7 +575,7 @@ static struct rcar_dmac_desc *rcar_dmac_desc_get(struct rcar_dmac_chan *chan)
 		list_del(&desc->node);
 	} while (!desc);
 
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return desc;
 }
@@ -585,6 +590,7 @@ static int rcar_dmac_xfer_chunk_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 	struct rcar_dmac_desc_page *page;
 	LIST_HEAD(list);
 	unsigned int i;
+	unsigned long flags;
 
 	page = (void *)get_zeroed_page(gfp);
 	if (!page)
@@ -596,10 +602,10 @@ static int rcar_dmac_xfer_chunk_alloc(struct rcar_dmac_chan *chan, gfp_t gfp)
 		list_add_tail(&chunk->node, &list);
 	}
 
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 	list_splice_tail(&list, &chan->desc.chunks_free);
 	list_add_tail(&page->node, &chan->desc.pages);
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return 0;
 }
@@ -617,9 +623,10 @@ static struct rcar_dmac_xfer_chunk *
 rcar_dmac_xfer_chunk_get(struct rcar_dmac_chan *chan)
 {
 	struct rcar_dmac_xfer_chunk *chunk;
+	unsigned long flags;
 	int ret;
 
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 
 	do {
 		if (list_empty(&chan->desc.chunks_free)) {
@@ -629,11 +636,11 @@ rcar_dmac_xfer_chunk_get(struct rcar_dmac_chan *chan)
 			 * the newly allocated descriptors. If the allocation
 			 * fails return an error.
 			 */
-			spin_unlock_irq(&chan->lock);
+			spin_unlock_irqrestore(&chan->lock, flags);
 			ret = rcar_dmac_xfer_chunk_alloc(chan, GFP_NOWAIT);
 			if (ret < 0)
 				return NULL;
-			spin_lock_irq(&chan->lock);
+			spin_lock_irqsave(&chan->lock, flags);
 			continue;
 		}
 
@@ -642,7 +649,7 @@ rcar_dmac_xfer_chunk_get(struct rcar_dmac_chan *chan)
 		list_del(&chunk->node);
 	} while (!chunk);
 
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return chunk;
 }
@@ -962,12 +969,13 @@ static void rcar_dmac_free_chan_resources(struct dma_chan *chan)
 	struct rcar_dmac *dmac = to_rcar_dmac(chan->device);
 	struct rcar_dmac_desc_page *page, *_page;
 	struct rcar_dmac_desc *desc;
+	unsigned long flags;
 	LIST_HEAD(list);
 
 	/* Protect against ISR */
-	spin_lock_irq(&rchan->lock);
+	spin_lock_irqsave(&rchan->lock, flags);
 	rcar_dmac_chan_halt(rchan);
-	spin_unlock_irq(&rchan->lock);
+	spin_unlock_irqrestore(&rchan->lock, flags);
 
 	/* Now no new interrupts will occur */
 
@@ -1349,8 +1357,9 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 {
 	struct rcar_dmac_chan *chan = dev;
 	struct rcar_dmac_desc *desc;
+	unsigned long flags;
 
-	spin_lock_irq(&chan->lock);
+	spin_lock_irqsave(&chan->lock, flags);
 
 	/* For cyclic transfers notify the user after every chunk. */
 	if (chan->desc.running && chan->desc.running->cyclic) {
@@ -1362,9 +1371,9 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 		callback_param = desc->async_tx.callback_param;
 
 		if (callback) {
-			spin_unlock_irq(&chan->lock);
+			spin_unlock_irqrestore(&chan->lock, flags);
 			callback(callback_param);
-			spin_lock_irq(&chan->lock);
+			spin_lock_irqsave(&chan->lock, flags);
 		}
 	}
 
@@ -1379,20 +1388,20 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 		list_del(&desc->node);
 
 		if (desc->async_tx.callback) {
-			spin_unlock_irq(&chan->lock);
+			spin_unlock_irqrestore(&chan->lock, flags);
 			/*
 			 * We own the only reference to this descriptor, we can
 			 * safely dereference it without holding the channel
 			 * lock.
			 */
 			desc->async_tx.callback(desc->async_tx.callback_param);
-			spin_lock_irq(&chan->lock);
+			spin_lock_irqsave(&chan->lock, flags);
 		}
 
 		list_add_tail(&desc->node, &chan->desc.wait);
 	}
 
-	spin_unlock_irq(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	/* Recycle all acked descriptors. */
 	rcar_dmac_desc_recycle_acked(chan);
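
For background, the failure mode the patch fixes can be shown with a small
standalone sketch. This is not part of the patch, and the lock and function
names below (caller_lock, chan_lock, chan_op_irq(), chan_op_irqsave(),
other_driver()) are made up for illustration:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(caller_lock);
static DEFINE_SPINLOCK(chan_lock);

/* Old pattern: spin_unlock_irq() unconditionally re-enables interrupts. */
static void chan_op_irq(void)
{
	spin_lock_irq(&chan_lock);
	/* ... critical section ... */
	spin_unlock_irq(&chan_lock);	/* interrupts forced back on here */
}

/* New pattern: the unlock restores whatever state the caller had. */
static void chan_op_irqsave(void)
{
	unsigned long flags;

	spin_lock_irqsave(&chan_lock, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&chan_lock, flags);
}

static void other_driver(void)
{
	unsigned long flags;

	/* This caller relies on interrupts staying disabled throughout. */
	spin_lock_irqsave(&caller_lock, flags);

	chan_op_irqsave();	/* OK: interrupts remain disabled */
	chan_op_irq();		/*
				 * BUG: interrupts are enabled from here on
				 * while caller_lock is still held, breaking
				 * the caller's exclusion against its own
				 * interrupt handler.
				 */

	spin_unlock_irqrestore(&caller_lock, flags);
}

spin_lock_irqsave() costs an extra flags save/restore, but it is safe
regardless of the caller's interrupt state, which is why it is the right
choice for functions reachable from arbitrary contexts such as DMAEngine
API entry points.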