From patchwork Tue Jun 28 05:32:32 2016
X-Patchwork-Submitter: Bhuvanchandra DV
X-Patchwork-Id: 9201857
From: Bhuvanchandra DV
Subject: [PATCH v2 6/9] tty: serial: fsl-lpuart: Use cyclic DMA for Rx
Date: Tue, 28 Jun 2016 11:02:32 +0530
Message-ID: <20160628053235.5114-7-bhuvanchandra.dv@toradex.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20160628053235.5114-1-bhuvanchandra.dv@toradex.com>
References: <20160628053235.5114-1-bhuvanchandra.dv@toradex.com>
X-Mailing-List: linux-clk@vger.kernel.org

The initial DMA implementation for Rx is inefficient because it keeps
switching between PIO and DMA; this leads to overruns, especially on
instances with a smaller FIFO. To address these issues, this patch uses
cyclic DMA for the receiver path. Some of the code is borrowed from the
atmel serial driver.
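For readers less familiar with the cyclic dmaengine API this patch switches to,
the new Rx setup boils down to the sketch below. This is an illustrative,
simplified sketch only and is not part of the patch: start_cyclic_rx(),
struct cyclic_rx, fifo_addr and rx_done are made-up names; the dmaengine calls
themselves (sg_init_one, dma_map_sg, dmaengine_slave_config,
dmaengine_prep_dma_cyclic, dmaengine_submit, dma_async_issue_pending) are the
ones the driver actually uses.

    /* Sketch: set up a never-ending cyclic Rx DMA into one ring buffer. */
    #include <linux/dmaengine.h>
    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    struct cyclic_rx {
    	struct scatterlist sgl;	/* one contiguous ring buffer */
    	void *buf;
    	size_t len;		/* ring size, period is len / 2 */
    	dma_cookie_t cookie;
    };

    static int start_cyclic_rx(struct device *dev, struct dma_chan *chan,
    			   dma_addr_t fifo_addr, struct cyclic_rx *rx,
    			   dma_async_tx_callback rx_done, void *cb_arg)
    {
    	struct dma_slave_config cfg = {
    		.src_addr	= fifo_addr,
    		.src_addr_width	= DMA_SLAVE_BUSWIDTH_1_BYTE,
    		.src_maxburst	= 1,
    		.direction	= DMA_DEV_TO_MEM,
    	};
    	struct dma_async_tx_descriptor *desc;
    	int ret;

    	rx->buf = kmalloc(rx->len, GFP_KERNEL);
    	if (!rx->buf)
    		return -ENOMEM;

    	/* Map the ring buffer once; the cyclic transfer reuses it forever. */
    	sg_init_one(&rx->sgl, rx->buf, rx->len);
    	if (!dma_map_sg(dev, &rx->sgl, 1, DMA_FROM_DEVICE)) {
    		kfree(rx->buf);
    		return -EINVAL;
    	}

    	ret = dmaengine_slave_config(chan, &cfg);
    	if (ret < 0)
    		goto err_unmap;

    	/* One descriptor that interrupts every half buffer, never resubmitted. */
    	desc = dmaengine_prep_dma_cyclic(chan, sg_dma_address(&rx->sgl),
    					 rx->len, rx->len / 2,
    					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
    	if (!desc) {
    		ret = -EFAULT;
    		goto err_unmap;
    	}

    	desc->callback = rx_done;	/* consumer drains new bytes */
    	desc->callback_param = cb_arg;
    	rx->cookie = dmaengine_submit(desc);
    	dma_async_issue_pending(chan);

    	return 0;

    err_unmap:
    	dma_unmap_sg(dev, &rx->sgl, 1, DMA_FROM_DEVICE);
    	kfree(rx->buf);
    	return ret;
    }

On each half-buffer interrupt (and from a timer), the driver then asks
dmaengine_tx_status() for the residue to find out how far the DMA has written
into the ring, and pushes the newly written region to the tty layer; that is
what the new lpuart_copy_rx_to_tty() below does.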
Signed-off-by: Bhuvanchandra DV
---
 drivers/tty/serial/fsl_lpuart.c | 483 +++++++++++++++++++++-------------------
 1 file changed, 258 insertions(+), 225 deletions(-)

diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
index 615f191..51d2b5a 100644
--- a/drivers/tty/serial/fsl_lpuart.c
+++ b/drivers/tty/serial/fsl_lpuart.c
@@ -224,7 +224,8 @@
 #define UARTWATER_TXWATER_OFF	0
 #define UARTWATER_RXWATER_OFF	16
 
-#define FSL_UART_RX_DMA_BUFFER_SIZE	64
+/* Rx DMA timeout in ms, which is used to calculate Rx ring buffer size */
+#define DMA_RX_TIMEOUT		(10)
 
 #define DRIVER_NAME	"fsl-lpuart"
 #define DEV_NAME	"ttyLP"
@@ -244,17 +245,17 @@ struct lpuart_port {
 	struct dma_async_tx_descriptor  *dma_tx_desc;
 	struct dma_async_tx_descriptor  *dma_rx_desc;
 	dma_addr_t		dma_tx_buf_bus;
-	dma_addr_t		dma_rx_buf_bus;
 	dma_cookie_t		dma_tx_cookie;
 	dma_cookie_t		dma_rx_cookie;
 	unsigned char		*dma_tx_buf_virt;
-	unsigned char		*dma_rx_buf_virt;
 	unsigned int		dma_tx_bytes;
 	unsigned int		dma_rx_bytes;
 	int			dma_tx_in_progress;
-	int			dma_rx_in_progress;
 	unsigned int		dma_rx_timeout;
 	struct timer_list	lpuart_timer;
+	struct scatterlist	rx_sgl;
+	struct circ_buf		rx_ring;
+	int			rx_dma_rng_buf_len;
 };
 
 static const struct of_device_id lpuart_dt_ids[] = {
@@ -270,7 +271,6 @@ MODULE_DEVICE_TABLE(of, lpuart_dt_ids);
 
 /* Forward declare this for the dma callbacks*/
 static void lpuart_dma_tx_complete(void *arg);
-static void lpuart_dma_rx_complete(void *arg);
 
 static u32 lpuart32_read(void __iomem *addr)
 {
@@ -316,32 +316,6 @@ static void lpuart32_stop_rx(struct uart_port *port)
 	lpuart32_write(temp & ~UARTCTRL_RE, port->membase + UARTCTRL);
 }
 
-static void lpuart_copy_rx_to_tty(struct lpuart_port *sport,
-		struct tty_port *tty, int count)
-{
-	int copied;
-
-	sport->port.icount.rx += count;
-
-	if (!tty) {
-		dev_err(sport->port.dev, "No tty port\n");
-		return;
-	}
-
-	dma_sync_single_for_cpu(sport->port.dev, sport->dma_rx_buf_bus,
-			FSL_UART_RX_DMA_BUFFER_SIZE, DMA_FROM_DEVICE);
-	copied = tty_insert_flip_string(tty,
-			((unsigned char *)(sport->dma_rx_buf_virt)), count);
-
-	if (copied != count) {
-		WARN_ON(1);
-		dev_err(sport->port.dev, "RxData copy to tty layer failed\n");
-	}
-
-	dma_sync_single_for_device(sport->port.dev, sport->dma_rx_buf_bus,
-			FSL_UART_RX_DMA_BUFFER_SIZE, DMA_TO_DEVICE);
-}
-
 static void lpuart_pio_tx(struct lpuart_port *sport)
 {
 	struct circ_buf *xmit = &sport->port.state->xmit;
@@ -433,28 +407,6 @@ static void lpuart_dma_tx_complete(void *arg)
 	spin_unlock_irqrestore(&sport->port.lock, flags);
 }
 
-static int lpuart_dma_rx(struct lpuart_port *sport)
-{
-	dma_sync_single_for_device(sport->port.dev, sport->dma_rx_buf_bus,
-			FSL_UART_RX_DMA_BUFFER_SIZE, DMA_TO_DEVICE);
-	sport->dma_rx_desc = dmaengine_prep_slave_single(sport->dma_rx_chan,
-			sport->dma_rx_buf_bus, FSL_UART_RX_DMA_BUFFER_SIZE,
-			DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
-
-	if (!sport->dma_rx_desc) {
-		dev_err(sport->port.dev, "Not able to get desc for rx\n");
-		return -EIO;
-	}
-
-	sport->dma_rx_desc->callback = lpuart_dma_rx_complete;
-	sport->dma_rx_desc->callback_param = sport;
-	sport->dma_rx_in_progress = 1;
-	sport->dma_rx_cookie = dmaengine_submit(sport->dma_rx_desc);
-	dma_async_issue_pending(sport->dma_rx_chan);
-
-	return 0;
-}
-
 static void lpuart_flush_buffer(struct uart_port *port)
 {
 	struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
@@ -464,73 +416,6 @@ static void lpuart_flush_buffer(struct uart_port *port)
 	}
 }
 
-static void lpuart_dma_rx_complete(void *arg)
-{
-	struct lpuart_port *sport = arg;
-	struct tty_port *port = &sport->port.state->port;
-	unsigned long flags;
-
-	async_tx_ack(sport->dma_rx_desc);
-	mod_timer(&sport->lpuart_timer, jiffies + sport->dma_rx_timeout);
-
-	spin_lock_irqsave(&sport->port.lock, flags);
-
-	sport->dma_rx_in_progress = 0;
-	lpuart_copy_rx_to_tty(sport, port, FSL_UART_RX_DMA_BUFFER_SIZE);
-	tty_flip_buffer_push(port);
-	lpuart_dma_rx(sport);
-
-	spin_unlock_irqrestore(&sport->port.lock, flags);
-}
-
-static void lpuart_dma_rx_terminate(struct lpuart_port *sport)
-{
-	struct tty_port *port = &sport->port.state->port;
-	struct dma_tx_state state;
-	unsigned long flags;
-	unsigned char temp;
-	int count;
-
-	del_timer(&sport->lpuart_timer);
-	dmaengine_pause(sport->dma_rx_chan);
-	dmaengine_tx_status(sport->dma_rx_chan, sport->dma_rx_cookie, &state);
-	dmaengine_terminate_all(sport->dma_rx_chan);
-	count = FSL_UART_RX_DMA_BUFFER_SIZE - state.residue;
-	async_tx_ack(sport->dma_rx_desc);
-
-	spin_lock_irqsave(&sport->port.lock, flags);
-
-	sport->dma_rx_in_progress = 0;
-	lpuart_copy_rx_to_tty(sport, port, count);
-	tty_flip_buffer_push(port);
-	temp = readb(sport->port.membase + UARTCR5);
-	writeb(temp & ~UARTCR5_RDMAS, sport->port.membase + UARTCR5);
-
-	spin_unlock_irqrestore(&sport->port.lock, flags);
-}
-
-static void lpuart_timer_func(unsigned long data)
-{
-	lpuart_dma_rx_terminate((struct lpuart_port *)data);
-}
-
-static inline void lpuart_prepare_rx(struct lpuart_port *sport)
-{
-	unsigned long flags;
-	unsigned char temp;
-
-	spin_lock_irqsave(&sport->port.lock, flags);
-
-	sport->lpuart_timer.expires = jiffies + sport->dma_rx_timeout;
-	add_timer(&sport->lpuart_timer);
-
-	lpuart_dma_rx(sport);
-	temp = readb(sport->port.membase + UARTCR5);
-	writeb(temp | UARTCR5_RDMAS, sport->port.membase + UARTCR5);
-
-	spin_unlock_irqrestore(&sport->port.lock, flags);
-}
-
 static inline void lpuart_transmit_buffer(struct lpuart_port *sport)
 {
 	struct circ_buf *xmit = &sport->port.state->xmit;
@@ -770,18 +655,14 @@ out:
 static irqreturn_t lpuart_int(int irq, void *dev_id)
 {
 	struct lpuart_port *sport = dev_id;
-	unsigned char sts, crdma;
+	unsigned char sts;
 
 	sts = readb(sport->port.membase + UARTSR1);
-	crdma = readb(sport->port.membase + UARTCR5);
 
-	if (sts & UARTSR1_RDRF && !(crdma & UARTCR5_RDMAS)) {
-		if (sport->lpuart_dma_rx_use)
-			lpuart_prepare_rx(sport);
-		else
-			lpuart_rxint(irq, dev_id);
-	}
-	if (sts & UARTSR1_TDRE && !(crdma & UARTCR5_TDMAS)) {
+	if (sts & UARTSR1_RDRF)
+		lpuart_rxint(irq, dev_id);
+
+	if (sts & UARTSR1_TDRE) {
 		if (sport->lpuart_dma_tx_use)
 			lpuart_pio_tx(sport);
 		else
@@ -834,6 +715,209 @@ static unsigned int lpuart32_tx_empty(struct uart_port *port)
 		TIOCSER_TEMT : 0;
 }
 
+static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
+{
+	struct tty_port *port = &sport->port.state->port;
+	struct dma_tx_state state;
+	enum dma_status dmastat;
+	struct circ_buf *ring = &sport->rx_ring;
+	unsigned long flags;
+	int count = 0;
+	unsigned char sr;
+
+	sr = readb(sport->port.membase + UARTSR1);
+
+	if (sr & (UARTSR1_PE | UARTSR1_FE)) {
+		/* Read DR to clear the error flags */
+		readb(sport->port.membase + UARTDR);
+
+		if (sr & UARTSR1_PE)
+			sport->port.icount.parity++;
+		else if (sr & UARTSR1_FE)
+			sport->port.icount.frame++;
+	}
+
+	async_tx_ack(sport->dma_rx_desc);
+
+	spin_lock_irqsave(&sport->port.lock, flags);
+
+	dmastat = dmaengine_tx_status(sport->dma_rx_chan,
+				sport->dma_rx_cookie,
+				&state);
+
+	if (dmastat == DMA_ERROR) {
+		dev_err(sport->port.dev, "Rx DMA transfer failed!\n");
+		spin_unlock_irqrestore(&sport->port.lock, flags);
+		return;
+	}
+
+	/* CPU claims ownership of RX DMA buffer */
+	dma_sync_sg_for_cpu(sport->port.dev, &sport->rx_sgl, 1, DMA_FROM_DEVICE);
+
+	/*
+	 * ring->head points to the end of data already written by the DMA.
+	 * ring->tail points to the beginning of data to be read by the
+	 * framework.
+	 * The current transfer size should not be larger than the dma buffer
+	 * length.
+	 */
+	ring->head = sport->rx_sgl.length - state.residue;
+	BUG_ON(ring->head > sport->rx_sgl.length);
+	/*
+	 * At this point ring->head may point to the first byte right after the
+	 * last byte of the dma buffer:
+	 * 0 <= ring->head <= sport->rx_sgl.length
+	 *
+	 * However ring->tail must always points inside the dma buffer:
+	 * 0 <= ring->tail <= sport->rx_sgl.length - 1
+	 *
+	 * Since we use a ring buffer, we have to handle the case
+	 * where head is lower than tail. In such a case, we first read from
+	 * tail to the end of the buffer then reset tail.
+	 */
+	if (ring->head < ring->tail) {
+		count = sport->rx_sgl.length - ring->tail;
+
+		tty_insert_flip_string(port, ring->buf + ring->tail, count);
+		ring->tail = 0;
+		sport->port.icount.rx += count;
+	}
+
+	/* Finally we read data from tail to head */
+	if (ring->tail < ring->head) {
+		count = ring->head - ring->tail;
+		tty_insert_flip_string(port, ring->buf + ring->tail, count);
+		/* Wrap ring->head if needed */
+		if (ring->head >= sport->rx_sgl.length)
+			ring->head = 0;
+		ring->tail = ring->head;
+		sport->port.icount.rx += count;
+	}
+
+	dma_sync_sg_for_device(sport->port.dev, &sport->rx_sgl, 1,
+			       DMA_FROM_DEVICE);
+
+	spin_unlock_irqrestore(&sport->port.lock, flags);
+
+	tty_flip_buffer_push(port);
+	mod_timer(&sport->lpuart_timer, jiffies + sport->dma_rx_timeout);
+}
+
+static void lpuart_dma_rx_complete(void *arg)
+{
+	struct lpuart_port *sport = arg;
+
+	lpuart_copy_rx_to_tty(sport);
+}
+
+static void lpuart_timer_func(unsigned long data)
+{
+	struct lpuart_port *sport = (struct lpuart_port *)data;
+
+	lpuart_copy_rx_to_tty(sport);
+}
+
+static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+{
+	struct dma_slave_config dma_rx_sconfig = {};
+	struct circ_buf *ring = &sport->rx_ring;
+	int ret, nent;
+	int bits, baud;
+	struct tty_struct *tty = tty_port_tty_get(&sport->port.state->port);
+	struct ktermios *termios = &tty->termios;
+
+	baud = tty_get_baud_rate(tty);
+
+	bits = (termios->c_cflag & CSIZE) == CS7 ? 9 : 10;
+	if (termios->c_cflag & PARENB)
+		bits++;
+
+	/*
+	 * Calculate length of one DMA buffer size to keep latency below
+	 * 10ms at any baud rate.
+	 */
+	sport->rx_dma_rng_buf_len = (DMA_RX_TIMEOUT * baud /  bits / 1000) * 2;
+	sport->rx_dma_rng_buf_len = (1 << (fls(sport->rx_dma_rng_buf_len) - 1));
+	if (sport->rx_dma_rng_buf_len < 16)
+		sport->rx_dma_rng_buf_len = 16;
+
+	ring->buf = kmalloc(sport->rx_dma_rng_buf_len, GFP_KERNEL);
+	if (!ring->buf) {
+		dev_err(sport->port.dev, "Ring buf alloc failed\n");
+		return -ENOMEM;
+	}
+
+	sg_init_one(&sport->rx_sgl, ring->buf, sport->rx_dma_rng_buf_len);
+	sg_set_buf(&sport->rx_sgl, ring->buf, sport->rx_dma_rng_buf_len);
+	nent = dma_map_sg(sport->port.dev, &sport->rx_sgl, 1, DMA_FROM_DEVICE);
+
+	if (!nent) {
+		dev_err(sport->port.dev, "DMA Rx mapping error\n");
+		return -EINVAL;
+	}
+
+	dma_rx_sconfig.src_addr = sport->port.mapbase + UARTDR;
+	dma_rx_sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	dma_rx_sconfig.src_maxburst = 1;
+	dma_rx_sconfig.direction = DMA_DEV_TO_MEM;
+	ret = dmaengine_slave_config(sport->dma_rx_chan, &dma_rx_sconfig);
+
+	if (ret < 0) {
+		dev_err(sport->port.dev,
+			"DMA Rx slave config failed, err = %d\n", ret);
+		return ret;
+	}
+
+	sport->dma_rx_desc = dmaengine_prep_dma_cyclic(sport->dma_rx_chan,
+				 sg_dma_address(&sport->rx_sgl),
+				 sport->rx_sgl.length,
+				 sport->rx_sgl.length / 2,
+				 DMA_DEV_TO_MEM,
+				 DMA_PREP_INTERRUPT);
+	if (!sport->dma_rx_desc) {
+		dev_err(sport->port.dev, "Cannot prepare cyclic DMA\n");
+		return -EFAULT;
+	}
+
+	sport->dma_rx_desc->callback = lpuart_dma_rx_complete;
+	sport->dma_rx_desc->callback_param = sport;
+	sport->dma_rx_cookie = dmaengine_submit(sport->dma_rx_desc);
+	dma_async_issue_pending(sport->dma_rx_chan);
+
+	writeb(readb(sport->port.membase + UARTCR5) | UARTCR5_RDMAS,
+	       sport->port.membase + UARTCR5);
+
+	return 0;
+}
+
+static void lpuart_dma_tx_free(struct uart_port *port)
+{
+	struct lpuart_port *sport = container_of(port,
+					struct lpuart_port, port);
+
+	dma_unmap_single(sport->port.dev, sport->dma_tx_buf_bus,
+			UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	sport->dma_tx_buf_bus = 0;
+	sport->dma_tx_buf_virt = NULL;
+}
+
+static void lpuart_dma_rx_free(struct uart_port *port)
+{
+	struct lpuart_port *sport = container_of(port,
+					struct lpuart_port, port);
+
+	if (sport->dma_rx_chan)
+		dmaengine_terminate_all(sport->dma_rx_chan);
+
+	dma_unmap_sg(sport->port.dev, &sport->rx_sgl, 1, DMA_FROM_DEVICE);
+	kfree(sport->rx_ring.buf);
+	sport->rx_ring.tail = 0;
+	sport->rx_ring.head = 0;
+	sport->dma_rx_desc = NULL;
+	sport->dma_rx_cookie = -EINVAL;
+}
+
 static unsigned int lpuart_get_mctrl(struct uart_port *port)
 {
 	unsigned int temp = 0;
@@ -1015,72 +1099,12 @@ static int lpuart_dma_tx_request(struct uart_port *port)
 	return 0;
 }
 
-static int lpuart_dma_rx_request(struct uart_port *port)
+static void rx_dma_timer_init(struct lpuart_port *sport)
 {
-	struct lpuart_port *sport = container_of(port,
-					struct lpuart_port, port);
-	struct dma_slave_config dma_rx_sconfig;
-	dma_addr_t dma_bus;
-	unsigned char *dma_buf;
-	int ret;
-
-	dma_buf = devm_kzalloc(sport->port.dev,
-				FSL_UART_RX_DMA_BUFFER_SIZE, GFP_KERNEL);
-
-	if (!dma_buf) {
-		dev_err(sport->port.dev, "Dma rx alloc failed\n");
-		return -ENOMEM;
-	}
-
-	dma_bus = dma_map_single(sport->dma_rx_chan->device->dev, dma_buf,
-				FSL_UART_RX_DMA_BUFFER_SIZE, DMA_FROM_DEVICE);
-
-	if (dma_mapping_error(sport->dma_rx_chan->device->dev, dma_bus)) {
-		dev_err(sport->port.dev, "dma_map_single rx failed\n");
-		return -ENOMEM;
-	}
-
-	dma_rx_sconfig.src_addr = sport->port.mapbase + UARTDR;
-	dma_rx_sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
-	dma_rx_sconfig.src_maxburst = 1;
-	dma_rx_sconfig.direction = DMA_DEV_TO_MEM;
-	ret = dmaengine_slave_config(sport->dma_rx_chan, &dma_rx_sconfig);
-
-	if (ret < 0) {
-		dev_err(sport->port.dev,
-				"Dma slave config failed, err = %d\n", ret);
-		return ret;
-	}
-
-	sport->dma_rx_buf_virt = dma_buf;
-	sport->dma_rx_buf_bus = dma_bus;
-	sport->dma_rx_in_progress = 0;
-
-	return 0;
-}
-
-static void lpuart_dma_tx_free(struct uart_port *port)
-{
-	struct lpuart_port *sport = container_of(port,
-					struct lpuart_port, port);
-
-	dma_unmap_single(sport->port.dev, sport->dma_tx_buf_bus,
-			UART_XMIT_SIZE, DMA_TO_DEVICE);
-
-	sport->dma_tx_buf_bus = 0;
-	sport->dma_tx_buf_virt = NULL;
-}
-
-static void lpuart_dma_rx_free(struct uart_port *port)
-{
-	struct lpuart_port *sport = container_of(port,
-					struct lpuart_port, port);
-
-	dma_unmap_single(sport->port.dev, sport->dma_rx_buf_bus,
-			FSL_UART_RX_DMA_BUFFER_SIZE, DMA_FROM_DEVICE);
-
-	sport->dma_rx_buf_bus = 0;
-	sport->dma_rx_buf_virt = NULL;
+	setup_timer(&sport->lpuart_timer, lpuart_timer_func,
+				(unsigned long)sport);
+	sport->lpuart_timer.expires = jiffies + sport->dma_rx_timeout;
+	add_timer(&sport->lpuart_timer);
 }
 
 static int lpuart_startup(struct uart_port *port)
@@ -1101,22 +1125,6 @@ static int lpuart_startup(struct uart_port *port)
 	sport->rxfifo_size = 0x1 << (((temp >> UARTPFIFO_RXSIZE_OFF) &
 		UARTPFIFO_FIFOSIZE_MASK) + 1);
 
-	if (sport->dma_rx_chan && !lpuart_dma_rx_request(port)) {
-		sport->lpuart_dma_rx_use = true;
-		setup_timer(&sport->lpuart_timer, lpuart_timer_func,
-			    (unsigned long)sport);
-	} else
-		sport->lpuart_dma_rx_use = false;
-
-
-	if (sport->dma_tx_chan && !lpuart_dma_tx_request(port)) {
-		sport->lpuart_dma_tx_use = true;
-		temp = readb(port->membase + UARTCR5);
-		temp &= ~UARTCR5_RDMAS;
-		writeb(temp | UARTCR5_TDMAS, port->membase + UARTCR5);
-	} else
-		sport->lpuart_dma_tx_use = false;
-
 	ret = devm_request_irq(port->dev, port->irq, lpuart_int, 0,
 				DRIVER_NAME, sport);
 	if (ret)
@@ -1130,7 +1138,28 @@ static int lpuart_startup(struct uart_port *port)
 	temp |= (UARTCR2_RIE | UARTCR2_TIE | UARTCR2_RE | UARTCR2_TE);
 	writeb(temp, sport->port.membase + UARTCR2);
 
+	if (sport->dma_rx_chan && !lpuart_start_rx_dma(sport)) {
+		/* set Rx DMA timeout */
+		sport->dma_rx_timeout = msecs_to_jiffies(DMA_RX_TIMEOUT);
+		if (!sport->dma_rx_timeout)
+			sport->dma_rx_timeout = 1;
+
+		sport->lpuart_dma_rx_use = true;
+		rx_dma_timer_init(sport);
+	} else {
+		sport->lpuart_dma_rx_use = false;
+	}
+
+	if (sport->dma_tx_chan && !lpuart_dma_tx_request(port)) {
+		sport->lpuart_dma_tx_use = true;
+		temp = readb(port->membase + UARTCR5);
+		writeb(temp | UARTCR5_TDMAS, port->membase + UARTCR5);
+	} else {
+		sport->lpuart_dma_tx_use = false;
+	}
+
 	spin_unlock_irqrestore(&sport->port.lock, flags);
+
 	return 0;
 }
 
@@ -1187,8 +1216,8 @@ static void lpuart_shutdown(struct uart_port *port)
 	devm_free_irq(port->dev, port->irq, sport);
 
 	if (sport->lpuart_dma_rx_use) {
-		lpuart_dma_rx_free(&sport->port);
 		del_timer_sync(&sport->lpuart_timer);
+		lpuart_dma_rx_free(&sport->port);
 	}
 
 	if (sport->lpuart_dma_tx_use)
@@ -1318,17 +1347,6 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
 	/* update the per-port timeout */
 	uart_update_timeout(port, termios->c_cflag, baud);
 
-	if (sport->lpuart_dma_rx_use) {
-		/* Calculate delay for 1.5 DMA buffers */
-		sport->dma_rx_timeout = (sport->port.timeout - HZ / 50) *
-					FSL_UART_RX_DMA_BUFFER_SIZE * 3 /
-					sport->rxfifo_size / 2;
-		dev_dbg(port->dev, "DMA Rx t-out %ums, tty t-out %u jiffies\n",
-			sport->dma_rx_timeout * 1000 / HZ, sport->port.timeout);
-		if (sport->dma_rx_timeout < msecs_to_jiffies(20))
-			sport->dma_rx_timeout = msecs_to_jiffies(20);
-	}
-
 	/* wait transmit engin complete */
 	while (!(readb(sport->port.membase + UARTSR1) & UARTSR1_TC))
 		barrier();
@@ -1353,6 +1371,24 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
 	/* restore control register */
 	writeb(old_cr2, sport->port.membase + UARTCR2);
 
+	/*
+	 * If new baud rate is set, we will also need to update the Ring buffer
+	 * length according to the selected baud rate and restart Rx DMA path.
+	 */
+	if (old) {
+		if (sport->lpuart_dma_rx_use) {
+			del_timer_sync(&sport->lpuart_timer);
+			lpuart_dma_rx_free(&sport->port);
+		}
+
+		if (sport->dma_rx_chan && !lpuart_start_rx_dma(sport)) {
+			sport->lpuart_dma_rx_use = true;
+			rx_dma_timer_init(sport);
+		} else {
+			sport->lpuart_dma_rx_use = false;
+		}
+	}
+
 	spin_unlock_irqrestore(&sport->port.lock, flags);
 }
 
@@ -1937,9 +1973,6 @@ static int lpuart_suspend(struct device *dev)
 		writeb(temp, sport->port.membase + UARTCR2);
 	}
 
-	if (sport->dma_rx_in_progress)
-		lpuart_dma_rx_terminate(sport);
-
 	uart_suspend_port(&lpuart_reg, &sport->port);
 	if (sport->port.suspended && !sport->port.irq_wake)
 		clk_disable_unprepare(sport->clk);