From patchwork Thu Jul 2 13:18:13 2015
X-Patchwork-Submitter: Cyrille Pitchen
X-Patchwork-Id: 6709561
From: Cyrille Pitchen
Subject: [PATCH v4 5/5] tty/serial: at91: use 32bit writes into TX FIFO when DMA is enabled
Date: Thu, 2 Jul 2015 15:18:13 +0200
Message-ID: <744588591e107efd4eb55c768384e13e49f1f032.1435842688.git.cyrille.pitchen@atmel.com>
X-Mailer: git-send-email 1.8.2.2
Cc: mark.rutland@arm.com, devicetree@vger.kernel.org, pawel.moll@arm.com,
 ijc+devicetree@hellion.org.uk, linux-kernel@vger.kernel.org,
 robh+dt@kernel.org, galak@codeaurora.org, Cyrille Pitchen,
 linux-arm-kernel@lists.infradead.org

For now this improvement is only used with TX DMA transfers. The data width
must be set properly when configuring the DMA controller. The FIFO
configuration must also match the DMA transfer data width: TXRDYM
(Transmitter Ready Mode) and RXRDYM (Receiver Ready Mode) must be set in the
FIFO Mode Register. These values are used by the USART to trigger the DMA
controller. In single data mode they are not used and should be reset to 0.

So the TXRDYM bits are changed to FOUR_DATA: the USART then triggers the DMA
controller when at least four data can be written into the TX FIFO through
the THR. On the other hand, the RXRDYM bits are left unchanged at ONE_DATA.
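For illustration only (not part of the patch), the FIFO Mode Register value
described above would be composed roughly as follows, assuming the ATMEL_US_*
macros introduced by the FIFO support patch of this series, with
ATMEL_US_ONE_DATA being the single-data reset value:

	unsigned int txrdym = ATMEL_US_ONE_DATA;	/* single data mode (reset value) */
	unsigned int rxrdym = ATMEL_US_ONE_DATA;	/* RX trigger left unchanged */
	unsigned int fmr;

	/* Let the USART trigger TX DMA once at least 4 data fit into the TX FIFO. */
	if (atmel_use_dma_tx(port))
		txrdym = ATMEL_US_FOUR_DATA;

	fmr = ATMEL_US_TXRDYM(txrdym) | ATMEL_US_RXRDYM(rxrdym);
	/* fmr is then written to the USART FIFO Mode Register at startup. */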
The Atmel eXtended DMA controller allows us to set a different data width for
each part of a scatter-gather transfer. So when calling
dmaengine_slave_config() to configure the TX path, we just need to set
dst_addr_width to the maximum data width. DMA writes into the THR are then
split into up to two parts: the first part carries the leading data and has a
length equal to the greatest multiple of 4 bytes lower than or equal to the
total length of the TX DMA transfer; the second part carries the trailing
data (up to 3 bytes). The first part is written by the DMA controller into
the THR using 32-bit accesses, whereas 8-bit accesses are used for the second
part (a short sketch of this split follows the diff below).

Signed-off-by: Cyrille Pitchen
Acked-by: Nicolas Ferre
---
 drivers/tty/serial/atmel_serial.c | 66 ++++++++++++++++++++++++++++++---------
 1 file changed, 51 insertions(+), 15 deletions(-)

diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
index 87de21f0c7a3..e91b3b2f0590 100644
--- a/drivers/tty/serial/atmel_serial.c
+++ b/drivers/tty/serial/atmel_serial.c
@@ -144,6 +144,7 @@ struct atmel_uart_port {
 	unsigned int		irq_status;
 	unsigned int		irq_status_prev;
 	unsigned int		status_change;
+	unsigned int		tx_len;
 
 	struct circ_buf		rx_ring;
 
@@ -745,10 +746,10 @@ static void atmel_complete_tx_dma(void *arg)
 	if (chan)
 		dmaengine_terminate_all(chan);
-	xmit->tail += sg_dma_len(&atmel_port->sg_tx);
+	xmit->tail += atmel_port->tx_len;
 	xmit->tail &= UART_XMIT_SIZE - 1;
 
-	port->icount.tx += sg_dma_len(&atmel_port->sg_tx);
+	port->icount.tx += atmel_port->tx_len;
 
 	spin_lock_irq(&atmel_port->lock_tx);
 	async_tx_ack(atmel_port->desc_tx);
@@ -796,7 +797,9 @@ static void atmel_tx_dma(struct uart_port *port)
 	struct circ_buf *xmit = &port->state->xmit;
 	struct dma_chan *chan = atmel_port->chan_tx;
 	struct dma_async_tx_descriptor *desc;
-	struct scatterlist *sg = &atmel_port->sg_tx;
+	struct scatterlist sgl[2], *sg, *sg_tx = &atmel_port->sg_tx;
+	unsigned int tx_len, part1_len, part2_len, sg_len;
+	dma_addr_t phys_addr;
 
 	/* Make sure we have an idle channel */
 	if (atmel_port->desc_tx != NULL)
@@ -812,18 +815,46 @@ static void atmel_tx_dma(struct uart_port *port)
 		 * Take the port lock to get a
 		 * consistent xmit buffer state.
 		 */
-		sg->offset = xmit->tail & (UART_XMIT_SIZE - 1);
-		sg_dma_address(sg) = (sg_dma_address(sg) &
-				      ~(UART_XMIT_SIZE - 1))
-				      + sg->offset;
-		sg_dma_len(sg) = CIRC_CNT_TO_END(xmit->head,
-						 xmit->tail,
-						 UART_XMIT_SIZE);
-		BUG_ON(!sg_dma_len(sg));
+		tx_len = CIRC_CNT_TO_END(xmit->head,
+					 xmit->tail,
+					 UART_XMIT_SIZE);
+
+		if (atmel_port->fifo_size) {
+			/* multi data mode */
+			part1_len = (tx_len & ~0x3); /* DWORD access */
+			part2_len = (tx_len & 0x3); /* BYTE access */
+		} else {
+			/* single data (legacy) mode */
+			part1_len = 0;
+			part2_len = tx_len; /* BYTE access only */
+		}
+
+		sg_init_table(sgl, 2);
+		sg_len = 0;
+		phys_addr = sg_dma_address(sg_tx) + xmit->tail;
+		if (part1_len) {
+			sg = &sgl[sg_len++];
+			sg_dma_address(sg) = phys_addr;
+			sg_dma_len(sg) = part1_len;
+
+			phys_addr += part1_len;
+		}
+
+		if (part2_len) {
+			sg = &sgl[sg_len++];
+			sg_dma_address(sg) = phys_addr;
+			sg_dma_len(sg) = part2_len;
+		}
+
+		/*
+		 * save tx_len so atmel_complete_tx_dma() will increase
+		 * xmit->tail correctly
+		 */
+		atmel_port->tx_len = tx_len;
 
 		desc = dmaengine_prep_slave_sg(chan,
-					       sg,
-					       1,
+					       sgl,
+					       sg_len,
 					       DMA_MEM_TO_DEV,
 					       DMA_PREP_INTERRUPT |
 					       DMA_CTRL_ACK);
@@ -832,7 +863,7 @@ static void atmel_tx_dma(struct uart_port *port)
 			return;
 		}
 
-		dma_sync_sg_for_device(port->dev, sg, 1, DMA_TO_DEVICE);
+		dma_sync_sg_for_device(port->dev, sg_tx, 1, DMA_TO_DEVICE);
 
 		atmel_port->desc_tx = desc;
 		desc->callback = atmel_complete_tx_dma;
@@ -892,7 +923,9 @@ static int atmel_prepare_tx_dma(struct uart_port *port)
 	/* Configure the slave DMA */
 	memset(&config, 0, sizeof(config));
 	config.direction = DMA_MEM_TO_DEV;
-	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	config.dst_addr_width = (atmel_port->fifo_size) ?
+				DMA_SLAVE_BUSWIDTH_4_BYTES :
+				DMA_SLAVE_BUSWIDTH_1_BYTE;
 	config.dst_addr = port->mapbase + ATMEL_US_THR;
 	config.dst_maxburst = 1;
 
@@ -1831,6 +1864,9 @@ static int atmel_startup(struct uart_port *port)
 				     ATMEL_US_RXFCLR |
 				     ATMEL_US_TXFLCLR);
 
+		if (atmel_use_dma_tx(port))
+			txrdym = ATMEL_US_FOUR_DATA;
+
 		fmr = ATMEL_US_TXRDYM(txrdym) | ATMEL_US_RXRDYM(rxrdym);
 
 		if (atmel_port->rts_high &&
 		    atmel_port->rts_low)
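
For readers who want to see the length split in isolation, here is a minimal,
self-contained sketch (illustration only, not part of the patch; the helper
name split_tx_len() is made up and simply mirrors the part1_len/part2_len
computation in atmel_tx_dma() above):

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * part1 is the largest multiple of 4 bytes, written with 32-bit
	 * accesses; part2 is the 0..3 trailing bytes, written with 8-bit
	 * accesses. Without a FIFO (legacy single data mode), everything
	 * goes through byte accesses.
	 */
	static void split_tx_len(unsigned int tx_len, bool multi_data,
				 unsigned int *part1_len, unsigned int *part2_len)
	{
		if (multi_data) {
			*part1_len = tx_len & ~0x3u;	/* DWORD part */
			*part2_len = tx_len & 0x3u;	/* BYTE part */
		} else {
			*part1_len = 0;			/* legacy: byte accesses only */
			*part2_len = tx_len;
		}
	}

	int main(void)
	{
		unsigned int p1, p2;

		split_tx_len(7, true, &p1, &p2);
		/* prints: 7 bytes -> 4 via 32-bit accesses, 3 via 8-bit accesses */
		printf("7 bytes -> %u via 32-bit accesses, %u via 8-bit accesses\n",
		       p1, p2);
		return 0;
	}

Each non-empty part becomes one scatter-gather entry, which is why the patch
builds at most two sgl[] entries before calling dmaengine_prep_slave_sg().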