From patchwork Fri Aug 21 18:25:44 2015
X-Patchwork-Submitter: Geert Uytterhoeven
X-Patchwork-Id: 7053531
X-Patchwork-Delegate: geert@linux-m68k.org
From: Geert Uytterhoeven
To: linux-sh@vger.kernel.org
Cc: Magnus Damm, Yoshihiro Shimoda, Laurent Pinchart, Nobuhiro Iwamatsu,
    Yoshihiro Kaneko, Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
    Guennadi Liakhovetski, linux-serial@vger.kernel.org,
    Geert Uytterhoeven
Subject: [PATCH/RFC v3 2/4] serial: sh-sci: Get rid of the workqueue to handle receive DMA requests
Date: Fri, 21 Aug 2015 20:25:44 +0200
Message-Id: <1440181546-7334-3-git-send-email-geert+renesas@glider.be>
In-Reply-To: <1440181546-7334-1-git-send-email-geert+renesas@glider.be>
References: <1440181546-7334-1-git-send-email-geert+renesas@glider.be>

The receive DMA workqueue function work_fn_rx() handles two things:
  1. Reception of a full buffer on completion of a receive DMA request,
  2. Reception of a partial buffer on a receive DMA time-out.

The workqueue is kicked by both the receive DMA completion handler and
by a timer that handles DMA time-outs.

As there are always two receive DMA requests active, it's possible that
the receive DMA completion handler is called a second time before the
workqueue function runs. As the time-out handler re-enables the receive
interrupt, an interrupt may come in before the time-out has been fully
handled.

Fix these race conditions by moving part 1 into the receive DMA
completion handler, and part 2 into the receive DMA time-out handler.

Signed-off-by: Geert Uytterhoeven
---
v3:
  - New.
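
Note for reviewers (not part of the patch): the stand-alone userspace
sketch below models why a single work item can coalesce two receive DMA
completions. All names here (schedule_work_model, dma_complete, the
counters) are hypothetical; only the pending-bit semantics mirror the
kernel's schedule_work(), which is a no-op when the work is already
pending. Build with: cc -pthread race-sketch.c

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int work_pending;	/* models WORK_STRUCT_PENDING */
static atomic_int completions;	/* DMA completions signalled  */
static atomic_int work_runs;	/* times the "workqueue" ran  */

static void schedule_work_model(void)
{
	int expected = 0;

	/* Like schedule_work(): queueing already-pending work is a no-op. */
	atomic_compare_exchange_strong(&work_pending, &expected, 1);
}

/* Models the receive DMA completion callback for one buffer. */
static void *dma_complete(void *arg)
{
	(void)arg;
	atomic_fetch_add(&completions, 1);
	schedule_work_model();
	return NULL;
}

int main(void)
{
	pthread_t irq[2];
	int i;

	/* Both receive buffers complete before the work gets to run. */
	for (i = 0; i < 2; i++)
		pthread_create(&irq[i], NULL, dma_complete, NULL);
	for (i = 0; i < 2; i++)
		pthread_join(irq[i], NULL);

	/* The work runs once per pending bit: one run for two buffers. */
	while (atomic_exchange(&work_pending, 0))
		atomic_fetch_add(&work_runs, 1);

	printf("completions=%d, work runs=%d\n",
	       atomic_load(&completions), atomic_load(&work_runs));
	return 0;
}

This prints "completions=2, work runs=1": the second completed buffer
sits unprocessed until something else kicks the work again, which is
exactly what handling completion directly in the callback avoids.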
---
 drivers/tty/serial/sh-sci.c | 165 ++++++++++++++++++++------------------------
 1 file changed, 76 insertions(+), 89 deletions(-)

diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 0622cafaf1c71cab..0b6a367ac343221c 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -115,7 +115,6 @@ struct sci_port {
 	struct sh_dmae_slave		param_tx;
 	struct sh_dmae_slave		param_rx;
 	struct work_struct		work_tx;
-	struct work_struct		work_rx;
 	struct timer_list		rx_timer;
 	unsigned int			rx_timeout;
 #endif
@@ -1336,10 +1335,29 @@ static int sci_dma_rx_find_active(struct sci_port *s)
 	return -1;
 }
 
+static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
+{
+	struct dma_chan *chan = s->chan_rx;
+	struct uart_port *port = &s->port;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port->lock, flags);
+	s->chan_rx = NULL;
+	s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL;
+	spin_unlock_irqrestore(&port->lock, flags);
+	dmaengine_terminate_all(chan);
+	dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0],
+			  sg_dma_address(&s->sg_rx[0]));
+	dma_release_channel(chan);
+	if (enable_pio)
+		sci_start_rx(port);
+}
+
 static void sci_dma_rx_complete(void *arg)
 {
 	struct sci_port *s = arg;
 	struct uart_port *port = &s->port;
+	struct dma_async_tx_descriptor *desc;
 	unsigned long flags;
 	int active, count = 0;
 
@@ -1354,30 +1372,32 @@ static void sci_dma_rx_complete(void *arg)
 
 	mod_timer(&s->rx_timer, jiffies + s->rx_timeout);
 
-	spin_unlock_irqrestore(&port->lock, flags);
-
 	if (count)
 		tty_flip_buffer_push(&port->state->port);
 
-	schedule_work(&s->work_rx);
-}
+	desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[active], 1,
+				       DMA_DEV_TO_MEM,
+				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!desc)
+		goto fail;
 
-static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
-{
-	struct dma_chan *chan = s->chan_rx;
-	struct uart_port *port = &s->port;
-	unsigned long flags;
+	desc->callback = sci_dma_rx_complete;
+	desc->callback_param = s;
+	s->cookie_rx[active] = dmaengine_submit(desc);
+	if (dma_submit_error(s->cookie_rx[active]))
+		goto fail;
 
-	spin_lock_irqsave(&port->lock, flags);
-	s->chan_rx = NULL;
-	s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL;
+	s->active_rx = s->cookie_rx[!active];
+
+	dev_dbg(port->dev, " cookie %d #%d, new active cookie %d\n",
+		s->cookie_rx[active], active, s->active_rx);
 	spin_unlock_irqrestore(&port->lock, flags);
-	dmaengine_terminate_all(chan);
-	dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0],
-			  sg_dma_address(&s->sg_rx[0]));
-	dma_release_channel(chan);
-	if (enable_pio)
-		sci_start_rx(port);
+	return;
+
+fail:
+	spin_unlock_irqrestore(&port->lock, flags);
+	dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
+	sci_rx_dma_release(s, true);
 }
 
 static void sci_tx_dma_release(struct sci_port *s, bool enable_pio)
@@ -1438,72 +1458,6 @@ fail:
 	sci_rx_dma_release(s, true);
 }
 
-static void work_fn_rx(struct work_struct *work)
-{
-	struct sci_port *s = container_of(work, struct sci_port, work_rx);
-	struct uart_port *port = &s->port;
-	struct dma_async_tx_descriptor *desc;
-	struct dma_tx_state state;
-	enum dma_status status;
-	unsigned long flags;
-	int new;
-
-	spin_lock_irqsave(&port->lock, flags);
-	new = sci_dma_rx_find_active(s);
-	if (new < 0) {
-		spin_unlock_irqrestore(&port->lock, flags);
-		return;
-	}
-
-	status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
-	if (status != DMA_COMPLETE) {
-		/* Handle incomplete DMA receive */
-		struct dma_chan *chan = s->chan_rx;
-		unsigned int read;
-		int count;
-
-		dmaengine_terminate_all(chan);
-		read = sg_dma_len(&s->sg_rx[new]) - state.residue;
-		dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read,
-			s->active_rx);
-
-		if (read) {
-			count = sci_dma_rx_push(s, s->rx_buf[new], read);
-			if (count)
-				tty_flip_buffer_push(&port->state->port);
-		}
-
-		spin_unlock_irqrestore(&port->lock, flags);
-
-		sci_submit_rx(s);
-		return;
-	}
-
-	desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[new], 1,
-				       DMA_DEV_TO_MEM,
-				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-	if (!desc)
-		goto fail;
-
-	desc->callback = sci_dma_rx_complete;
-	desc->callback_param = s;
-	s->cookie_rx[new] = dmaengine_submit(desc);
-	if (dma_submit_error(s->cookie_rx[new]))
-		goto fail;
-
-	s->active_rx = s->cookie_rx[!new];
-
-	dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
-		__func__, s->cookie_rx[new], new, s->active_rx);
-	spin_unlock_irqrestore(&port->lock, flags);
-	return;
-
-fail:
-	spin_unlock_irqrestore(&port->lock, flags);
-	dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
-	sci_rx_dma_release(s, true);
-}
-
 static void work_fn_tx(struct work_struct *work)
 {
 	struct sci_port *s = container_of(work, struct sci_port, work_tx);
@@ -1676,15 +1630,49 @@ static void rx_timer_fn(unsigned long arg)
 {
 	struct sci_port *s = (struct sci_port *)arg;
 	struct uart_port *port = &s->port;
-	u16 scr = serial_port_in(port, SCSCR);
+	struct dma_tx_state state;
+	enum dma_status status;
+	unsigned long flags;
+	unsigned int read;
+	int active, count;
+	u16 scr;
 
+	spin_lock_irqsave(&port->lock, flags);
+
+	dev_dbg(port->dev, "DMA Rx timed out\n");
+	scr = serial_port_in(port, SCSCR);
 	if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
 		scr &= ~SCSCR_RDRQE;
 		enable_irq(s->irqs[SCIx_RXI_IRQ]);
 	}
 	serial_port_out(port, SCSCR, scr | SCSCR_RIE);
-	dev_dbg(port->dev, "DMA Rx timed out\n");
-	schedule_work(&s->work_rx);
+
+	active = sci_dma_rx_find_active(s);
+	if (active < 0) {
+		spin_unlock_irqrestore(&port->lock, flags);
+		return;
+	}
+
+	status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
+	if (status == DMA_COMPLETE)
+		dev_dbg(port->dev, "Cookie %d #%d has already completed\n",
+			s->active_rx, active);
+
+	/* Handle incomplete DMA receive */
+	dmaengine_terminate_all(s->chan_rx);
+	read = sg_dma_len(&s->sg_rx[active]) - state.residue;
+	dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read,
+		s->active_rx);
+
+	if (read) {
+		count = sci_dma_rx_push(s, s->rx_buf[active], read);
+		if (count)
+			tty_flip_buffer_push(&port->state->port);
	}
+
+	spin_unlock_irqrestore(&port->lock, flags);
+
+	sci_submit_rx(s);
 }
 
 static void sci_request_dma(struct uart_port *port)
@@ -1768,7 +1756,6 @@ static void sci_request_dma(struct uart_port *port)
 			dma += s->buf_len_rx;
 		}
 
-		INIT_WORK(&s->work_rx, work_fn_rx);
 		setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s);
 
 		sci_submit_rx(s);
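
As an aside for reviewers: the partial-read arithmetic in rx_timer_fn()
above follows the standard dmaengine residue pattern. A minimal sketch,
with a hypothetical helper name that is not part of this patch:

/* Hypothetical helper (illustration only, not in this patch): the
 * number of bytes received so far on a cookie is the mapped buffer
 * length minus the residue the DMA engine reports as outstanding.
 */
static unsigned int sci_rx_bytes_received(struct dma_chan *chan,
					  dma_cookie_t cookie,
					  struct scatterlist *sg)
{
	struct dma_tx_state state;

	dmaengine_tx_status(chan, cookie, &state);
	return sg_dma_len(sg) - state.residue;
}

Because dmaengine_terminate_all() is called right after the status
query, the residue snapshot is what determines how much of the buffer
gets pushed to the tty layer before the transfer is resubmitted.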