From patchwork Fri Sep 18 11:08:33 2015
X-Patchwork-Submitter: Geert Uytterhoeven
X-Patchwork-Id: 7215161
X-Patchwork-Delegate: geert@linux-m68k.org
From: Geert Uytterhoeven
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
    Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
    Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
    Guennadi Liakhovetski, linux-serial@vger.kernel.org,
    linux-sh@vger.kernel.org, Geert Uytterhoeven
Subject: [PATCH v4 10/10] serial: sh-sci: Add DT support to DMA setup
Date: Fri, 18 Sep 2015 13:08:33 +0200
Message-Id: <1442574513-20648-11-git-send-email-geert+renesas@glider.be>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1442574513-20648-1-git-send-email-geert+renesas@glider.be>
References: <1442574513-20648-1-git-send-email-geert+renesas@glider.be>

Add support for obtaining DMA channel information from the device tree.

This requires switching from the legacy sh_dmae_slave structures with
hardcoded channel numbers and the corresponding filter function to:
  1. dma_request_slave_channel_compat():
       - on legacy platforms, it uses the passed DMA channel numbers,
         which originate from platform device data,
       - on DT-based platforms, it retrieves the channel information
         from DT;
  2. the generic dmaengine_slave_config() configuration method, which
     requires filling in the DMA register ports and slave bus widths.

Signed-off-by: Geert Uytterhoeven
Acked-by: Laurent Pinchart
---
v4:
  - Dropped RFC status,
  - Moved to end of new series,

v3:
  - Moved to end of series, to avoid enabling broken DMA,

v2:
  - Add Acked-by.
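For reference (not part of the patch): on DT-based platforms, the channels
requested by name ("tx"/"rx") in sci_request_dma_chan() are expected to be
described on the SCIF node with the generic DMA client binding
("dmas"/"dma-names"). A minimal sketch follows; the unit address, DMAC
phandle, and MID/RID specifier values are made-up placeholders, not taken
from an actual .dtsi:

	serial@e6e60000 {
		/* Trimmed to the DMA-related properties; &dmac0 and the
		 * 0x29/0x2a specifiers are examples only. */
		dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
		dma-names = "tx", "rx";
	};

On legacy (non-DT) platforms, dma_request_slave_channel_compat() falls back
to the shdma_chan_filter/slave-ID pair, so the platform-data path keeps
working unchanged.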
---
 drivers/tty/serial/sh-sci.c | 78 +++++++++++++++++++++++++++------------------
 1 file changed, 47 insertions(+), 31 deletions(-)

diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index b1d1ce1986e6c064..960e50a97558cff5 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -112,8 +112,6 @@ struct sci_port {
 	struct scatterlist		sg_rx[2];
 	void				*rx_buf[2];
 	size_t				buf_len_rx;
-	struct sh_dmae_slave		param_tx;
-	struct sh_dmae_slave		param_rx;
 	struct work_struct		work_tx;
 	struct timer_list		rx_timer;
 	unsigned int			rx_timeout;
@@ -1263,17 +1261,6 @@ static void work_fn_tx(struct work_struct *work)
 	dma_async_issue_pending(chan);
 }
 
-static bool filter(struct dma_chan *chan, void *slave)
-{
-	struct sh_dmae_slave *param = slave;
-
-	dev_dbg(chan->device->dev, "%s: slave ID %d\n",
-		__func__, param->shdma_slave.slave_id);
-
-	chan->private = &param->shdma_slave;
-	return true;
-}
-
 static void rx_timer_fn(unsigned long arg)
 {
 	struct sci_port *s = (struct sci_port *)arg;
@@ -1347,28 +1334,62 @@ static void rx_timer_fn(unsigned long arg)
 	spin_unlock_irqrestore(&port->lock, flags);
 }
 
+static struct dma_chan *sci_request_dma_chan(struct uart_port *port,
+					     enum dma_transfer_direction dir,
+					     unsigned int id)
+{
+	dma_cap_mask_t mask;
+	struct dma_chan *chan;
+	struct dma_slave_config cfg;
+	int ret;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_SLAVE, mask);
+
+	chan = dma_request_slave_channel_compat(mask, shdma_chan_filter,
+					(void *)(unsigned long)id, port->dev,
+					dir == DMA_MEM_TO_DEV ? "tx" : "rx");
+	if (!chan) {
+		dev_warn(port->dev,
+			 "dma_request_slave_channel_compat failed\n");
+		return NULL;
+	}
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.direction = dir;
+	if (dir == DMA_MEM_TO_DEV) {
+		cfg.dst_addr = port->mapbase +
+			(sci_getreg(port, SCxTDR)->offset << port->regshift);
+		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	} else {
+		cfg.src_addr = port->mapbase +
+			(sci_getreg(port, SCxRDR)->offset << port->regshift);
+		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	}
+
+	ret = dmaengine_slave_config(chan, &cfg);
+	if (ret) {
+		dev_warn(port->dev, "dmaengine_slave_config failed %d\n", ret);
+		dma_release_channel(chan);
+		return NULL;
+	}
+
+	return chan;
+}
+
 static void sci_request_dma(struct uart_port *port)
 {
 	struct sci_port *s = to_sci_port(port);
-	struct sh_dmae_slave *param;
 	struct dma_chan *chan;
-	dma_cap_mask_t mask;
 
 	dev_dbg(port->dev, "%s: port %d\n", __func__, port->line);
 
-	if (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0)
+	if (!port->dev->of_node &&
+	    (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0))
 		return;
 
-	dma_cap_zero(mask);
-	dma_cap_set(DMA_SLAVE, mask);
-
-	param = &s->param_tx;
-
-	/* Slave ID, e.g., SHDMA_SLAVE_SCIF0_TX */
-	param->shdma_slave.slave_id = s->cfg->dma_slave_tx;
-
 	s->cookie_tx = -EINVAL;
-	chan = dma_request_channel(mask, filter, param);
+	chan = sci_request_dma_chan(port, DMA_MEM_TO_DEV, s->cfg->dma_slave_tx);
 	dev_dbg(port->dev, "%s: TX: got channel %p\n", __func__, chan);
 	if (chan) {
 		s->chan_tx = chan;
@@ -1390,12 +1411,7 @@ static void sci_request_dma(struct uart_port *port)
 		INIT_WORK(&s->work_tx, work_fn_tx);
 	}
 
-	param = &s->param_rx;
-
-	/* Slave ID, e.g., SHDMA_SLAVE_SCIF0_RX */
-	param->shdma_slave.slave_id = s->cfg->dma_slave_rx;
-
-	chan = dma_request_channel(mask, filter, param);
+	chan = sci_request_dma_chan(port, DMA_DEV_TO_MEM, s->cfg->dma_slave_rx);
 	dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan);
 	if (chan) {
 		unsigned int i;