From patchwork Sun May 17 15:50:30 2015
X-Patchwork-Submitter: Yoshihiro Kaneko
X-Patchwork-Id: 6424001
X-Patchwork-Delegate: geert@linux-m68k.org
From: Yoshihiro Kaneko
To: linux-serial@vger.kernel.org
Cc: Greg Kroah-Hartman, Simon Horman, Magnus Damm, linux-sh@vger.kernel.org,
 Geert Uytterhoeven
Subject: [PATCH v2] serial: sh-sci: Remove schedule_work in sci_dma_rx_complete
Date: Mon, 18 May 2015 00:50:30 +0900
Message-Id: <1431877830-4877-1-git-send-email-ykaneko0929@gmail.com>

From: Kazuya Mizuguchi

This patch calls work_fn_rx() directly, so that the driver avoids
executing the next sci_dma_rx_complete() before work_fn_rx() completes.

Signed-off-by: Kazuya Mizuguchi
Signed-off-by: Yoshihiro Kaneko
---
This patch is based on the tty-next branch of Greg Kroah-Hartman's tty tree.
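
As background for reviewers, here is a minimal sketch of the before/after
control flow in the Rx completion path. This is an editor's illustration
only, not part of the patch: the _old/_new names are hypothetical, and it
assumes (as in the dmaengine framework) that the completion callback runs
in atomic tasklet context, so the directly called work_fn_rx() must not
sleep; per the diff it only takes spinlocks.

/* Sketch only; simplified from the driver, not compilable standalone. */

/* Before: re-submission deferred to a workqueue. The next DMA
 * completion can fire before work_fn_rx() has run. */
static void sci_dma_rx_complete_old(void *arg)	/* hypothetical name */
{
	struct sci_port *s = arg;

	/* ... push the received data to the tty layer ... */
	schedule_work(&s->work_rx);	/* race window opens here */
}

/* After: re-submission finishes before the callback returns, so the
 * next completion cannot overtake it. */
static void sci_dma_rx_complete_new(void *arg)	/* hypothetical name */
{
	struct sci_port *s = arg;

	/* ... push the received data to the tty layer ... */
	work_fn_rx(&s->work_rx);	/* synchronous; must not sleep */
}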
v2 [Yoshihiro Kaneko]
* Move work_fn_rx() up and remove its function prototype accordingly

 drivers/tty/serial/sh-sci.c | 225 ++++++++++++++++++++++----------------------
 1 file changed, 113 insertions(+), 112 deletions(-)

diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index e7d6566..5b216ea 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1251,6 +1251,118 @@ static unsigned int sci_get_mctrl(struct uart_port *port)
 }
 
 #ifdef CONFIG_SERIAL_SH_SCI_DMA
+static void work_fn_rx(struct work_struct *work)
+{
+	struct sci_port *s = container_of(work, struct sci_port, work_rx);
+	struct uart_port *port = &s->port;
+	struct dma_async_tx_descriptor *desc;
+	int new;
+
+	if (s->active_rx == s->cookie_rx[0]) {
+		new = 0;
+	} else if (s->active_rx == s->cookie_rx[1]) {
+		new = 1;
+	} else {
+		dev_err(port->dev, "cookie %d not found!\n", s->active_rx);
+		return;
+	}
+	desc = s->desc_rx[new];
+
+	if (dma_async_is_tx_complete(s->chan_rx, s->active_rx, NULL, NULL) !=
+	    DMA_COMPLETE) {
+		/* Handle incomplete DMA receive */
+		struct dma_chan *chan = s->chan_rx;
+		struct shdma_desc *sh_desc = container_of(desc,
+					struct shdma_desc, async_tx);
+		unsigned long flags;
+		int count;
+
+		dmaengine_terminate_all(chan);
+		dev_dbg(port->dev, "Read %zu bytes with cookie %d\n",
+			sh_desc->partial, sh_desc->cookie);
+
+		spin_lock_irqsave(&port->lock, flags);
+		count = sci_dma_rx_push(s, sh_desc->partial);
+		spin_unlock_irqrestore(&port->lock, flags);
+
+		if (count)
+			tty_flip_buffer_push(&port->state->port);
+
+		sci_submit_rx(s);
+
+		return;
+	}
+
+	s->cookie_rx[new] = desc->tx_submit(desc);
+	if (s->cookie_rx[new] < 0) {
+		dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
+		sci_rx_dma_release(s, true);
+		return;
+	}
+
+	s->active_rx = s->cookie_rx[!new];
+
+	dev_dbg(port->dev, "%s: cookie %d #%d, new active #%d\n",
+		__func__, s->cookie_rx[new], new, s->active_rx);
+}
+
+static void work_fn_tx(struct work_struct *work)
+{
+	struct sci_port *s = container_of(work, struct sci_port, work_tx);
+	struct dma_async_tx_descriptor *desc;
+	struct dma_chan *chan = s->chan_tx;
+	struct uart_port *port = &s->port;
+	struct circ_buf *xmit = &port->state->xmit;
+	struct scatterlist *sg = &s->sg_tx;
+
+	/*
+	 * DMA is idle now.
+	 * Port xmit buffer is already mapped, and it is one page... Just adjust
+	 * offsets and lengths. Since it is a circular buffer, we have to
+	 * transmit till the end, and then the rest. Take the port lock to get a
+	 * consistent xmit buffer state.
+	 */
+	spin_lock_irq(&port->lock);
+	sg->offset = xmit->tail & (UART_XMIT_SIZE - 1);
+	sg_dma_address(sg) = (sg_dma_address(sg) & ~(UART_XMIT_SIZE - 1)) +
+		sg->offset;
+	sg_dma_len(sg) = min_t(int, CIRC_CNT(xmit->head, xmit->tail,
+			UART_XMIT_SIZE), CIRC_CNT_TO_END(xmit->head, xmit->tail,
+			UART_XMIT_SIZE));
+	spin_unlock_irq(&port->lock);
+
+	BUG_ON(!sg_dma_len(sg));
+
+	desc = dmaengine_prep_slave_sg(chan,
+			sg, s->sg_len_tx, DMA_MEM_TO_DEV,
+			DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!desc) {
+		/* switch to PIO */
+		sci_tx_dma_release(s, true);
+		return;
+	}
+
+	dma_sync_sg_for_device(port->dev, sg, 1, DMA_TO_DEVICE);
+
+	spin_lock_irq(&port->lock);
+	s->desc_tx = desc;
+	desc->callback = sci_dma_tx_complete;
+	desc->callback_param = s;
+	spin_unlock_irq(&port->lock);
+	s->cookie_tx = desc->tx_submit(desc);
+	if (s->cookie_tx < 0) {
+		dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n");
+		/* switch to PIO */
+		sci_tx_dma_release(s, true);
+		return;
+	}
+
+	dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n",
+		__func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx);
+
+	dma_async_issue_pending(chan);
+}
+
 static void sci_dma_tx_complete(void *arg)
 {
 	struct sci_port *s = arg;
@@ -1341,7 +1453,7 @@ static void sci_dma_rx_complete(void *arg)
 	if (count)
 		tty_flip_buffer_push(&port->state->port);
 
-	schedule_work(&s->work_rx);
+	work_fn_rx(&s->work_rx);
 }
 
 static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
@@ -1412,117 +1524,6 @@ static void sci_submit_rx(struct sci_port *s)
 
 	dma_async_issue_pending(chan);
 }
-
-static void work_fn_rx(struct work_struct *work)
-{
-	struct sci_port *s = container_of(work, struct sci_port, work_rx);
-	struct uart_port *port = &s->port;
-	struct dma_async_tx_descriptor *desc;
-	int new;
-
-	if (s->active_rx == s->cookie_rx[0]) {
-		new = 0;
-	} else if (s->active_rx == s->cookie_rx[1]) {
-		new = 1;
-	} else {
-		dev_err(port->dev, "cookie %d not found!\n", s->active_rx);
-		return;
-	}
-	desc = s->desc_rx[new];
-
-	if (dma_async_is_tx_complete(s->chan_rx, s->active_rx, NULL, NULL) !=
-	    DMA_COMPLETE) {
-		/* Handle incomplete DMA receive */
-		struct dma_chan *chan = s->chan_rx;
-		struct shdma_desc *sh_desc = container_of(desc,
-					struct shdma_desc, async_tx);
-		unsigned long flags;
-		int count;
-
-		dmaengine_terminate_all(chan);
-		dev_dbg(port->dev, "Read %zu bytes with cookie %d\n",
-			sh_desc->partial, sh_desc->cookie);
-
-		spin_lock_irqsave(&port->lock, flags);
-		count = sci_dma_rx_push(s, sh_desc->partial);
-		spin_unlock_irqrestore(&port->lock, flags);
-
-		if (count)
-			tty_flip_buffer_push(&port->state->port);
-
-		sci_submit_rx(s);
-
-		return;
-	}
-
-	s->cookie_rx[new] = desc->tx_submit(desc);
-	if (s->cookie_rx[new] < 0) {
-		dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
-		sci_rx_dma_release(s, true);
-		return;
-	}
-
-	s->active_rx = s->cookie_rx[!new];
-
-	dev_dbg(port->dev, "%s: cookie %d #%d, new active #%d\n",
-		__func__, s->cookie_rx[new], new, s->active_rx);
-}
-
-static void work_fn_tx(struct work_struct *work)
-{
-	struct sci_port *s = container_of(work, struct sci_port, work_tx);
-	struct dma_async_tx_descriptor *desc;
-	struct dma_chan *chan = s->chan_tx;
-	struct uart_port *port = &s->port;
-	struct circ_buf *xmit = &port->state->xmit;
-	struct scatterlist *sg = &s->sg_tx;
-
-	/*
-	 * DMA is idle now.
-	 * Port xmit buffer is already mapped, and it is one page... Just adjust
-	 * offsets and lengths. Since it is a circular buffer, we have to
-	 * transmit till the end, and then the rest. Take the port lock to get a
-	 * consistent xmit buffer state.
-	 */
-	spin_lock_irq(&port->lock);
-	sg->offset = xmit->tail & (UART_XMIT_SIZE - 1);
-	sg_dma_address(sg) = (sg_dma_address(sg) & ~(UART_XMIT_SIZE - 1)) +
-		sg->offset;
-	sg_dma_len(sg) = min((int)CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE),
-		CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE));
-	spin_unlock_irq(&port->lock);
-
-	BUG_ON(!sg_dma_len(sg));
-
-	desc = dmaengine_prep_slave_sg(chan,
-			sg, s->sg_len_tx, DMA_MEM_TO_DEV,
-			DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-	if (!desc) {
-		/* switch to PIO */
-		sci_tx_dma_release(s, true);
-		return;
-	}
-
-	dma_sync_sg_for_device(port->dev, sg, 1, DMA_TO_DEVICE);
-
-	spin_lock_irq(&port->lock);
-	s->desc_tx = desc;
-	desc->callback = sci_dma_tx_complete;
-	desc->callback_param = s;
-	spin_unlock_irq(&port->lock);
-	s->cookie_tx = desc->tx_submit(desc);
-	if (s->cookie_tx < 0) {
-		dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n");
-		/* switch to PIO */
-		sci_tx_dma_release(s, true);
-		return;
-	}
-
-	dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n",
-		__func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx);
-
-	dma_async_issue_pending(chan);
-}
 #endif
 
 static void sci_start_tx(struct uart_port *port)