From patchwork Thu Jul 16 18:21:54 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Geert Uytterhoeven
X-Patchwork-Id: 6810341
X-Patchwork-Delegate: geert@linux-m68k.org
From: Geert Uytterhoeven <geert+renesas@glider.be>
To: Magnus Damm, Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
	Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang, Guennadi Liakhovetski
Cc: linux-sh@vger.kernel.org, Geert Uytterhoeven
Subject: [PATCH/RFC v2 23/29] serial: sh-sci: Fix race condition between RX work_struct and cleanup
Date: Thu, 16 Jul 2015 20:21:54 +0200
Message-Id: <1437070920-28069-24-git-send-email-geert+renesas@glider.be>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1437070920-28069-1-git-send-email-geert+renesas@glider.be>
References: <1437070920-28069-1-git-send-email-geert+renesas@glider.be>

During serial port shutdown, the DMA receive worker function may still
be called after the receive DMA cleanup function has been called.

Fix this race condition between work_fn_rx() and sci_rx_dma_release()
by acquiring the port's spinlock in sci_rx_dma_release(). This requires
releasing the spinlock in work_fn_rx() before calling (any function
that may call) sci_rx_dma_release().

Terminate all active receive DMA descriptors to release them and to
make sure no more completions come in.

Do the same in sci_tx_dma_release() for symmetry, although the serial
upper layer will no longer submit more data at that point.
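The locking pattern behind the fix is small enough to show in isolation.
Below is a minimal sketch of that pattern; the names (my_port,
my_rx_release(), my_rx_worker()) and the reduced struct are illustrative
assumptions, not the actual sh-sci code. The cleanup path clears the
channel pointer while holding the port spinlock, the worker tests that
pointer under the same lock, and dmaengine_terminate_all() runs outside
the lock so in-flight descriptors are aborted before the buffers go away.

	/*
	 * Simplified sketch of the locking pattern; illustrative
	 * names only, not the actual sh-sci code.
	 */
	#include <linux/dmaengine.h>
	#include <linux/serial_core.h>
	#include <linux/spinlock.h>

	struct my_port {
		struct uart_port port;
		struct dma_chan *chan_rx;  /* NULL once RX DMA is torn down */
	};

	/* Cleanup path: make the channel invisible to the worker first. */
	static void my_rx_release(struct my_port *s)
	{
		struct dma_chan *chan = s->chan_rx;
		unsigned long flags;

		spin_lock_irqsave(&s->port.lock, flags);
		s->chan_rx = NULL;  /* worker now sees "no RX DMA" */
		spin_unlock_irqrestore(&s->port.lock, flags);

		/*
		 * Outside the lock: abort in-flight descriptors so no
		 * further completions arrive while buffers are freed.
		 */
		dmaengine_terminate_all(chan);
	}

	/* Worker: only touch the channel while holding the same lock. */
	static void my_rx_worker(struct my_port *s)
	{
		unsigned long flags;

		spin_lock_irqsave(&s->port.lock, flags);
		if (!s->chan_rx) {
			/* Lost the race with my_rx_release(); bail out. */
			spin_unlock_irqrestore(&s->port.lock, flags);
			return;
		}
		/* ... process completed RX descriptors under the lock ... */
		spin_unlock_irqrestore(&s->port.lock, flags);
	}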
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
 drivers/tty/serial/sh-sci.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 5707cd0c8e432be4..9714fc1b72b3cf8c 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1362,8 +1362,12 @@ static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
 {
 	struct dma_chan *chan = s->chan_rx;
 	struct uart_port *port = &s->port;
+	unsigned long flags;
 
+	spin_lock_irqsave(&port->lock, flags);
 	s->chan_rx = NULL;
+	spin_unlock_irqrestore(&port->lock, flags);
+	dmaengine_terminate_all(chan);
 	s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL;
 	if (sg_dma_address(&s->sg_rx[0])) {
 		dma_free_coherent(chan->device->dev, s->buf_len_rx * 2,
@@ -1380,8 +1384,12 @@ static void sci_tx_dma_release(struct sci_port *s, bool enable_pio)
 {
 	struct dma_chan *chan = s->chan_tx;
 	struct uart_port *port = &s->port;
+	unsigned long flags;
 
+	spin_lock_irqsave(&port->lock, flags);
 	s->chan_tx = NULL;
+	spin_unlock_irqrestore(&port->lock, flags);
+	dmaengine_terminate_all(chan);
 	s->cookie_tx = -EINVAL;
 	if (s->sg_len_tx) {
 		/* Restore sg_dma_len() and sg_dma_address() */
@@ -1456,7 +1464,8 @@ static void work_fn_rx(struct work_struct *work)
 	} else {
 		dev_err(port->dev, "%s: Rx cookie %d not found!\n",
 			__func__, s->active_rx);
-		goto out;
+		spin_unlock_irqrestore(&port->lock, flags);
+		return;
 	}
 
 	desc = s->desc_rx[new];
@@ -1477,23 +1486,23 @@ static void work_fn_rx(struct work_struct *work)
 		if (count)
 			tty_flip_buffer_push(&port->state->port);
 
+		spin_unlock_irqrestore(&port->lock, flags);
 		sci_submit_rx(s);
-
-		goto out;
+		return;
 	}
 
 	s->cookie_rx[new] = dmaengine_submit(desc);
 	if (dma_submit_error(s->cookie_rx[new])) {
 		dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
+		spin_unlock_irqrestore(&port->lock, flags);
 		sci_rx_dma_release(s, true);
-		goto out;
+		return;
 	}
 
 	s->active_rx = s->cookie_rx[!new];
 
 	dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
 		__func__, s->cookie_rx[new], new, s->active_rx);
-out:
 	spin_unlock_irqrestore(&port->lock, flags);
 }
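One design consequence is visible throughout the work_fn_rx() hunks
above: kernel spinlocks are not recursive, so once sci_rx_dma_release()
takes port->lock itself, the worker must drop that lock before calling
it, or before calling sci_submit_rx(), which may end up calling it on
its error path. That is why the shared "out:" unlock label is replaced
by an explicit unlock-then-return sequence at each exit point, with the
unlock always ordered before the call that may re-acquire the lock.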