From patchwork Fri Aug 21 18:02:50 2015
X-Patchwork-Submitter: Geert Uytterhoeven
X-Patchwork-Id: 7053271
X-Patchwork-Delegate: geert@linux-m68k.org
From: Geert Uytterhoeven
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Magnus Damm, Yoshihiro Shimoda, Laurent Pinchart, Nobuhiro Iwamatsu,
    Yoshihiro Kaneko, Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
    Guennadi Liakhovetski, linux-serial@vger.kernel.org,
    linux-sh@vger.kernel.org, Geert Uytterhoeven
Subject: [PATCH v3 26/33] serial: sh-sci: Fix race condition between RX worker and cleanup
Date: Fri, 21 Aug 2015 20:02:50 +0200
Message-Id: <1440180177-6924-27-git-send-email-geert+renesas@glider.be>
In-Reply-To: <1440180177-6924-1-git-send-email-geert+renesas@glider.be>
References: <1440180177-6924-1-git-send-email-geert+renesas@glider.be>

During serial port shutdown, the DMA receive worker function may still be
called after the receive DMA cleanup function has been called.

Fix this race condition between work_fn_rx() and sci_rx_dma_release() by
acquiring the port's spinlock in sci_rx_dma_release(). This requires
releasing the spinlock in work_fn_rx() before calling (any function that
may call) sci_rx_dma_release().

Terminate all active receive DMA descriptors to release them, and to make
sure no more completions come in.

Do the same in sci_tx_dma_release() for symmetry, although the serial
upper layer will no longer submit more data at this point in time.

Signed-off-by: Geert Uytterhoeven
---
v3:
  - Move invalidation of cookies inside the lock,

v2:
  - New.
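[Illustrative sketch, not part of the patch: the locking pattern described
above can be seen in isolation in the following self-contained userspace
model. A pthread mutex stands in for the port spinlock, and the names
fake_port, release_rx(), rx_worker() and resubmit_rx() are hypothetical
stand-ins for sci_port/port->lock, sci_rx_dma_release(), work_fn_rx() and
sci_submit_rx()'s failure path. The cleanup path invalidates the shared
channel/cookie state under the lock; the worker rechecks that state under
the lock and drops the lock before calling anything that may itself end up
in the cleanup function. The actual change is the diff below.]

/*
 * Minimal userspace model of the work_fn_rx() vs. sci_rx_dma_release()
 * race. Illustration only; not driver code.  Build with: cc -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_port {
	pthread_mutex_t lock;		/* stands in for port->lock */
	int *chan_rx;			/* stands in for s->chan_rx */
	int cookie_rx[2];		/* stands in for s->cookie_rx[] */
};

/* Cleanup: invalidate the shared state under the lock, then tear down. */
static void release_rx(struct fake_port *p)
{
	int *chan;

	pthread_mutex_lock(&p->lock);
	chan = p->chan_rx;
	p->chan_rx = NULL;			/* worker checks this */
	p->cookie_rx[0] = p->cookie_rx[1] = -1;	/* -EINVAL in the driver */
	pthread_mutex_unlock(&p->lock);

	if (!chan)		/* someone else already released it */
		return;

	/* "dmaengine_terminate_all()" + "dma_release_channel()" */
	free(chan);
	printf("release: channel torn down\n");
}

/*
 * Stand-in for sci_submit_rx(): its failure path calls release_rx(),
 * which takes the lock, so the caller must not hold the lock here.
 */
static void resubmit_rx(struct fake_port *p)
{
	release_rx(p);		/* pretend descriptor preparation failed */
}

/*
 * Worker: recheck the state under the lock; drop the lock before calling
 * anything that may end up in release_rx().
 */
static void *rx_worker(void *arg)
{
	struct fake_port *p = arg;

	pthread_mutex_lock(&p->lock);
	if (!p->chan_rx || p->cookie_rx[0] < 0) {
		pthread_mutex_unlock(&p->lock);
		return NULL;	/* cleanup already ran, nothing to do */
	}
	printf("worker: handling cookie %d\n", p->cookie_rx[0]);
	pthread_mutex_unlock(&p->lock);

	resubmit_rx(p);		/* lock dropped: no deadlock, no use-after-free */
	return NULL;
}

int main(void)
{
	struct fake_port p = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.chan_rx = malloc(sizeof(int)),
		.cookie_rx = { 1, 2 },
	};
	pthread_t t;

	pthread_create(&t, NULL, rx_worker, &p);
	release_rx(&p);		/* races with the worker, safely */
	pthread_join(t, NULL);
	return 0;
}

[Invalidating the cookies inside the lock (the v3 change noted in the
changelog) is what lets the worker's locked recheck either see consistent
state or bail out early.]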
---
 drivers/tty/serial/sh-sci.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index f6ed203dde41bf83..35e24b726fe605d1 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1362,9 +1362,13 @@ static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
 {
 	struct dma_chan *chan = s->chan_rx;
 	struct uart_port *port = &s->port;
+	unsigned long flags;
 
+	spin_lock_irqsave(&port->lock, flags);
 	s->chan_rx = NULL;
 	s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL;
+	spin_unlock_irqrestore(&port->lock, flags);
+	dmaengine_terminate_all(chan);
 	dma_free_coherent(chan->device->dev, s->buf_len_rx * 2,
 			  sg_virt(&s->sg_rx[0]), sg_dma_address(&s->sg_rx[0]));
 	dma_release_channel(chan);
@@ -1376,9 +1380,13 @@ static void sci_tx_dma_release(struct sci_port *s, bool enable_pio)
 {
 	struct dma_chan *chan = s->chan_tx;
 	struct uart_port *port = &s->port;
+	unsigned long flags;
 
+	spin_lock_irqsave(&port->lock, flags);
 	s->chan_tx = NULL;
 	s->cookie_tx = -EINVAL;
+	spin_unlock_irqrestore(&port->lock, flags);
+	dmaengine_terminate_all(chan);
 	dma_unmap_single(chan->device->dev, s->tx_dma_addr, UART_XMIT_SIZE,
 			 DMA_TO_DEVICE);
 	dma_release_channel(chan);
@@ -1444,7 +1452,8 @@ static void work_fn_rx(struct work_struct *work)
 	} else {
 		dev_err(port->dev, "%s: Rx cookie %d not found!\n", __func__,
 			s->active_rx);
-		goto out;
+		spin_unlock_irqrestore(&port->lock, flags);
+		return;
 	}
 
 	status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
@@ -1464,9 +1473,10 @@ static void work_fn_rx(struct work_struct *work)
 		if (count)
 			tty_flip_buffer_push(&port->state->port);
 
-		sci_submit_rx(s);
+		spin_unlock_irqrestore(&port->lock, flags);
 
-		goto out;
+		sci_submit_rx(s);
+		return;
 	}
 
 	desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[new], 1,
@@ -1485,14 +1495,13 @@ static void work_fn_rx(struct work_struct *work)
 
 	dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
 		__func__, s->cookie_rx[new], new, s->active_rx);
-out:
 	spin_unlock_irqrestore(&port->lock, flags);
 	return;
 
 fail:
+	spin_unlock_irqrestore(&port->lock, flags);
 	dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
 	sci_rx_dma_release(s, true);
-	spin_unlock_irqrestore(&port->lock, flags);
 }
 
 static void work_fn_tx(struct work_struct *work)