From patchwork Sun Sep 20 11:23:12 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787555
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 01/11] spi: dw-dma: Set DMA Level registers on init
Date: Sun, 20 Sep 2020 14:23:12 +0300
Message-ID: <20200920112322.24585-2-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

The DMA Level register contents don't get cleared when the SPI controller
is disabled and re-enabled. The maximum burst lengths don't change either,
since the Rx and Tx DMA channels are requested at the init stage and stay
acquired until the device is removed, and the SPI controller FIFO depth
obviously can't change at run time. Given all of that, we can safely move
the DMA Transmit and Receive data level register initialization to the SPI
controller DMA init stage (i.e. to the time the SPI controller is being
probed) instead of redoing it for each SPI transfer when dma_setup is
called. This speeds up the initialization of a DMA-based SPI transfer a
bit, particularly when the APB bus is relatively slow.
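For context, a condensed sketch of what the init-time programming amounts
to after this patch. The names follow the driver code, the FIFO-halving
default mirrors dw_spi_dma_maxburst_init(), and the helper itself is
illustrative rather than part of the patch:

	/*
	 * Sketch: negotiate the Rx burst length against the DMA engine
	 * capabilities, then program the DMA data-level registers once at
	 * probe time instead of on every dma_setup() invocation. The Tx
	 * side is negotiated the same way (omitted for brevity).
	 */
	static void dw_spi_dma_levels_init_sketch(struct dw_spi *dws)
	{
		struct dma_slave_caps caps;
		u32 max_burst, def_burst = dws->fifo_len / 2;

		if (!dma_get_slave_caps(dws->rxchan, &caps) && caps.max_burst)
			max_burst = caps.max_burst;
		else
			max_burst = RX_BURST_LEVEL;

		dws->rxburst = min(max_burst, def_burst);

		/* Raise an Rx DMA request once rxburst words sit in the Rx FIFO */
		dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
		/* Raise a Tx DMA request once the Tx FIFO level drops to txburst */
		dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
	}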
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index bb390ff67d1d..a7ff1e357f8b 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -49,6 +49,7 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 		max_burst = RX_BURST_LEVEL;
 
 	dws->rxburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
 
 	ret = dma_get_slave_caps(dws->txchan, &caps);
 	if (!ret && caps.max_burst)
@@ -56,7 +57,19 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	else
 		max_burst = TX_BURST_LEVEL;
 
+	/*
+	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
+	 * channel might not be enough to provide a well balanced DMA-based
+	 * SPI transfer interface. There might still be moments when the Tx DMA
+	 * channel is occasionally handled faster than the Rx DMA channel.
+	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
+	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
+	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
+	 * DMA activity is intentionally slowed down by limiting the SPI Tx
+	 * FIFO depth with a value twice bigger than the Tx burst length.
+	 */
 	dws->txburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
@@ -372,21 +385,6 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr = 0, dma_ctrl = 0;
 
-	/*
-	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
-	 * channel might not be enough to provide a well balanced DMA-based
-	 * SPI transfer interface. There might still be moments when the Tx DMA
-	 * channel is occasionally handled faster than the Rx DMA channel.
-	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
-	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
-	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
-	 * DMA activity is intentionally slowed down by limiting the SPI Tx
-	 * FIFO depth with a value twice bigger than the Tx burst length
-	 * calculated earlier by the dw_spi_dma_maxburst_init() method.
-	 */
-	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
-	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
-
 	if (xfer->tx_buf)
 		dma_ctrl |= SPI_DMA_TDMAE;
 	if (xfer->rx_buf)

From patchwork Sun Sep 20 11:23:13 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787545
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 02/11] spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
Date: Sun, 20 Sep 2020 14:23:13 +0300
Message-ID: <20200920112322.24585-3-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

Since commit 46164fde6b78 ("spi: dw: Fix Rx-only DMA transfers"), if the
DMA interface is enabled, a Tx-buffer must be available in every SPI
transfer. It's required because, in order to activate the incoming data
reception, either DMA or the CPU must be pushing data out to the SPI bus.
But the DW APB SSI DMA driver code is still written as if the Tx-buffer
were optional, which is no longer true. Let's fix it so an error is
returned if no Tx-buffer is detected, and DMA Tx is always enabled.
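As an aside, a controller driver that wants the SPI core to satisfy this
requirement automatically for Rx-only transfers can ask the core to
synthesize dummy Tx data. A one-line sketch; the flag is part of the SPI
core API, but its use here is hypothetical and not what this patch does:

	/* Have the SPI core allocate a dummy Tx buffer for Rx-only
	 * transfers, so tx_buf is always non-NULL by dma_setup() time. */
	master->flags |= SPI_CONTROLLER_MUST_TX;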
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index a7ff1e357f8b..1b013ac94a3f 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -262,9 +262,6 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	struct dma_slave_config txconf;
 	struct dma_async_tx_descriptor *txdesc;
 
-	if (!xfer->tx_buf)
-		return NULL;
-
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
 	txconf.dst_addr = dws->dma_addr;
@@ -383,17 +380,19 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	u16 imr = 0, dma_ctrl = 0;
+	u16 imr, dma_ctrl;
 
-	if (xfer->tx_buf)
-		dma_ctrl |= SPI_DMA_TDMAE;
+	if (!xfer->tx_buf)
+		return -EINVAL;
+
+	/* Set the DMA handshaking interface */
+	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
 		dma_ctrl |= SPI_DMA_RDMAE;
 	dw_writel(dws, DW_SPI_DMACR, dma_ctrl);
 
 	/* Set the interrupt mask */
-	if (xfer->tx_buf)
-		imr |= SPI_INT_TXOI;
+	imr = SPI_INT_TXOI;
 	if (xfer->rx_buf)
 		imr |= SPI_INT_RXUI | SPI_INT_RXOI;
 	spi_umask_intr(dws, imr);
@@ -412,6 +411,8 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	/* Prepare the TX dma transfer */
 	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
+	if (!txdesc)
+		return -EINVAL;
 
 	/* Prepare the RX dma transfer */
 	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
@@ -423,17 +424,15 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	if (txdesc) {
-		set_bit(TX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(txdesc);
-		dma_async_issue_pending(dws->txchan);
-	}
+	set_bit(TX_BUSY, &dws->dma_chan_busy);
+	dmaengine_submit(txdesc);
+	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
 	if (ret)
 		return ret;
 
-	if (txdesc && dws->master->cur_msg->status == -EINPROGRESS) {
+	if (dws->master->cur_msg->status == -EINPROGRESS) {
 		ret = dw_spi_dma_wait_tx_done(dws, xfer);
 		if (ret)
 			return ret;

From patchwork Sun Sep 20 11:23:14 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787549
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 03/11] spi: dw-dma: Configure the DMA channels in dma_setup
Date: Sun, 20 Sep 2020 14:23:14 +0300
Message-ID: <20200920112322.24585-4-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org
"EHLO mail.baikalelectronics.ru" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726290AbgITLcX (ORCPT ); Sun, 20 Sep 2020 07:32:23 -0400 Received: from localhost (unknown [127.0.0.1]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 9697B8030718; Sun, 20 Sep 2020 11:23:27 +0000 (UTC) X-Virus-Scanned: amavisd-new at baikalelectronics.ru Received: from mail.baikalelectronics.ru ([127.0.0.1]) by localhost (mail.baikalelectronics.ru [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id C8I5CXQTa1vV; Sun, 20 Sep 2020 14:23:27 +0300 (MSK) From: Serge Semin To: Mark Brown CC: Serge Semin , Serge Semin , Alexey Malahov , Georgy Vlasov , Ramil Zaripov , Pavel Parkhomenko , Peter Ujfalusi , Andy Shevchenko , Andy Shevchenko , Feng Tang , Vinod Koul , , Subject: [PATCH v2 03/11] spi: dw-dma: Configure the DMA channels in dma_setup Date: Sun, 20 Sep 2020 14:23:14 +0300 Message-ID: <20200920112322.24585-4-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru> References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) Precedence: bulk List-ID: X-Mailing-List: linux-spi@vger.kernel.org Mainly this is a preparation patch before adding one-by-one DMA SG entries transmission. But logically the Tx and Rx DMA channels setup should be performed in the dma_setup() callback anyway. So we'll move the DMA slave channels src/dst burst lengths, address and address width configuration from the Tx/Rx channels preparation methods to the dedicated functions and then make sure it's called at the DMA setup stage. Note we now make sure the return value of the dmaengine_slave_config() method doesn't indicate an error. It has been unnecessary in case if Dw DMAC is utilized as a DMA engine, since its device_config() callback always returns zero (though it might change in future). But since DW APB SSI driver now supports any DMA back-end we must make sure the DMA device configuration has been successful before proceeding with further setups. 
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 42 +++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 1b013ac94a3f..da17897b8acb 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -256,11 +256,9 @@ static void dw_spi_dma_tx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_config_tx(struct dw_spi *dws)
 {
 	struct dma_slave_config txconf;
-	struct dma_async_tx_descriptor *txdesc;
 
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
@@ -270,7 +268,13 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	txconf.device_fc = false;
 
-	dmaengine_slave_config(dws->txchan, &txconf);
+	return dmaengine_slave_config(dws->txchan, &txconf);
+}
+
+static struct dma_async_tx_descriptor *
+dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *txdesc;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -345,14 +349,9 @@ static void dw_spi_dma_rx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_config_rx(struct dw_spi *dws)
 {
 	struct dma_slave_config rxconf;
-	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;
 
 	memset(&rxconf, 0, sizeof(rxconf));
 	rxconf.direction = DMA_DEV_TO_MEM;
@@ -362,7 +361,16 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxconf.src_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	rxconf.device_fc = false;
 
-	dmaengine_slave_config(dws->rxchan, &rxconf);
+	return dmaengine_slave_config(dws->rxchan, &rxconf);
+}
+
+static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+		struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *rxdesc;
+
+	if (!xfer->rx_buf)
+		return NULL;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -381,10 +389,22 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr, dma_ctrl;
+	int ret;
 
 	if (!xfer->tx_buf)
 		return -EINVAL;
 
+	/* Setup DMA channels */
+	ret = dw_spi_dma_config_tx(dws);
+	if (ret)
+		return ret;
+
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_config_rx(dws);
+		if (ret)
+			return ret;
+	}
+
 	/* Set the DMA handshaking interface */
 	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)

From patchwork Sun Sep 20 11:23:15 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787547
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 04/11] spi: dw-dma: Check rx_buf availability in the xfer method
Date: Sun, 20 Sep 2020 14:23:15 +0300
Message-ID: <20200920112322.24585-5-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org
Checking rx_buf for being NULL and returning NULL from the Rx-channel
preparation method doesn't let us distinguish that situation from errors
happening during the Rx SG-list preparation. So it's better to make sure
that rx_buf is not NULL, i.e. that full-duplex communication is really
requested, before calling the Rx preparation method.

Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index da17897b8acb..d2a67dee1a66 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -369,9 +369,6 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 {
 	struct dma_async_tx_descriptor *rxdesc;
 
-	if (!xfer->rx_buf)
-		return NULL;
-
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
 				xfer->rx_sg.nents,
@@ -435,10 +432,12 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		return -EINVAL;
 
 	/* Prepare the RX dma transfer */
-	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+	if (xfer->rx_buf) {
+		rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+		if (!rxdesc)
+			return -EINVAL;
 
-	/* rx must be started before tx due to spi instinct */
-	if (rxdesc) {
+		/* rx must be started before tx due to spi instinct */
 		set_bit(RX_BUSY, &dws->dma_chan_busy);
 		dmaengine_submit(rxdesc);
 		dma_async_issue_pending(dws->rxchan);
@@ -458,7 +457,7 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		return ret;
 	}
 
-	if (rxdesc && dws->master->cur_msg->status == -EINPROGRESS)
+	if (xfer->rx_buf && dws->master->cur_msg->status == -EINPROGRESS)
 		ret = dw_spi_dma_wait_rx_done(dws);
 
 	return ret;

From patchwork Sun Sep 20 11:23:16 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787551
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 05/11] spi: dw-dma: Move DMA transfers submission to the channels prep methods
Date: Sun, 20 Sep 2020 14:23:16 +0300
Message-ID: <20200920112322.24585-6-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org
Indeed we can freely move the dmaengine_submit() invocation and the Tx and
Rx busy flag setting into the DMA Tx/Rx prepare methods. Since the Tx/Rx
preparation methods are now mainly used for DMA transfer submission, let's
rename them to carry the _submit_{t,r}x suffix instead. With this
alteration applied: first, we complete another piece of preparation before
adding the one-by-one DMA SG entries transmission; second, the
dma_async_tx_descriptor descriptor is now used locally only in the new DMA
transfer submission methods (this will be cleaned up a bit later); third,
the generic transfer method becomes more readable, with the submission,
execution and wait procedures now transparently split up instead of having
preparation, intermixed submission/execution, and wait stages.
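To recap the canonical dmaengine client flow this refactoring aligns with
(a generic sketch, not driver code): a descriptor is prepared, then
queued, and only dma_async_issue_pending() actually starts the hardware,
so moving the submission into the prepare methods doesn't change when the
transfer begins:

	txdesc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!txdesc)
		return -ENOMEM;

	cookie = dmaengine_submit(txdesc);	/* queue the descriptor */
	dma_async_issue_pending(chan);		/* kick off execution */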
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index d2a67dee1a66..769d10ca74b4 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -272,7 +272,7 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 }
 
 static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
 
@@ -287,6 +287,9 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
 
+	dmaengine_submit(txdesc);
+	set_bit(TX_BUSY, &dws->dma_chan_busy);
+
 	return txdesc;
 }
 
@@ -364,7 +367,7 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 		struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
@@ -380,6 +383,9 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
 
+	dmaengine_submit(rxdesc);
+	set_bit(RX_BUSY, &dws->dma_chan_busy);
+
 	return rxdesc;
 }
 
@@ -426,25 +432,21 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;
 
-	/* Prepare the TX dma transfer */
-	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
+	/* Submit the DMA Tx transfer */
+	txdesc = dw_spi_dma_submit_tx(dws, xfer);
 	if (!txdesc)
 		return -EINVAL;
 
-	/* Prepare the RX dma transfer */
+	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+		rxdesc = dw_spi_dma_submit_rx(dws, xfer);
 		if (!rxdesc)
 			return -EINVAL;
 
 		/* rx must be started before tx due to spi instinct */
-		set_bit(RX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(rxdesc);
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	set_bit(TX_BUSY, &dws->dma_chan_busy);
-	dmaengine_submit(txdesc);
 	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);

From patchwork Sun Sep 20 11:23:17 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787557
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 06/11] spi: dw-dma: Check DMA Tx-desc submission status
Date: Sun, 20 Sep 2020 14:23:17 +0300
Message-ID: <20200920112322.24585-7-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

Let's add a check of the dmaengine_submit() return value for errors. It
used to be unnecessary while the driver was only expected to be utilized
together with the DW DMAC. But since the driver can now be used with any
DMA engine, it's worth tracking errors on DMA submission so as not to miss
them and run into unpredictable driver behaviour.
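A note on the added check, sketched below in the pattern this patch
introduces: dma_submit_error() only catches queueing errors encoded in the
cookie at submission time; transfer-time failures are still reported
through the completion/callback path. On a submission error the already
prepared descriptor has to be reclaimed, hence dmaengine_terminate_sync():

	cookie = dmaengine_submit(txdesc);
	ret = dma_submit_error(cookie);
	if (ret) {
		/* Reclaim the descriptor that was prepared but never ran */
		dmaengine_terminate_sync(dws->txchan);
		return NULL;
	}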
Signed-off-by: Serge Semin

---
Changelog v2:
- Replace the negative conditional statements with positive ones.
- Terminate the prepared descriptors on submission errors.
---
 drivers/spi/spi-dw-dma.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 769d10ca74b4..aa3900809126 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -275,6 +275,8 @@ static struct dma_async_tx_descriptor *
 dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
+	dma_cookie_t cookie;
+	int ret;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -287,7 +289,13 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
 
-	dmaengine_submit(txdesc);
+	cookie = dmaengine_submit(txdesc);
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dmaengine_terminate_sync(dws->txchan);
+		return NULL;
+	}
+
 	set_bit(TX_BUSY, &dws->dma_chan_busy);
 
 	return txdesc;
@@ -371,6 +379,8 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 		struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
+	dma_cookie_t cookie;
+	int ret;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -383,7 +393,13 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
 
-	dmaengine_submit(rxdesc);
+	cookie = dmaengine_submit(rxdesc);
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dmaengine_terminate_sync(dws->rxchan);
+		return NULL;
+	}
+
 	set_bit(RX_BUSY, &dws->dma_chan_busy);
 
 	return rxdesc;

From patchwork Sun Sep 20 11:23:18 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787543
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 07/11] spi: dw-dma: Remove DMA Tx-desc passing around
Date: Sun, 20 Sep 2020 14:23:18 +0300
Message-ID: <20200920112322.24585-8-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org
It's pointless to pass the Rx and Tx transfers' DMA Tx-descriptors around,
since they are only used in the Tx/Rx submit methods. Instead, just return
the submission status from those methods. This alteration makes the code
less complex.

Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index aa3900809126..9f70818acce6 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -271,8 +271,7 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
@@ -284,7 +283,7 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 				DMA_MEM_TO_DEV,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
-		return NULL;
+		return -ENOMEM;
 
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
@@ -293,12 +292,12 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	ret = dma_submit_error(cookie);
 	if (ret) {
 		dmaengine_terminate_sync(dws->txchan);
-		return NULL;
+		return ret;
 	}
 
 	set_bit(TX_BUSY, &dws->dma_chan_busy);
 
-	return txdesc;
+	return 0;
 }
 
 static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
@@ -375,8 +374,7 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
@@ -388,7 +386,7 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 				DMA_DEV_TO_MEM,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
-		return NULL;
+		return -ENOMEM;
 
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
@@ -397,12 +395,12 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	ret = dma_submit_error(cookie);
 	if (ret) {
 		dmaengine_terminate_sync(dws->rxchan);
-		return NULL;
+		return ret;
 	}
 
 	set_bit(RX_BUSY, &dws->dma_chan_busy);
 
-	return rxdesc;
+	return 0;
 }
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -445,19 +443,18 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;
 
 	/* Submit the DMA Tx transfer */
-	txdesc = dw_spi_dma_submit_tx(dws, xfer);
-	if (!txdesc)
-		return -EINVAL;
+	ret = dw_spi_dma_submit_tx(dws, xfer);
+	if (ret)
+		return ret;
 
 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		rxdesc = dw_spi_dma_submit_rx(dws, xfer);
-		if (!rxdesc)
-			return -EINVAL;
+		ret = dw_spi_dma_submit_rx(dws, xfer);
+		if (ret)
+			return ret;
 
 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
From patchwork Sun Sep 20 11:23:19 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787553
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 08/11] spi: dw-dma: Detach DMA transfer into a dedicated method
Date: Sun, 20 Sep 2020 14:23:19 +0300
Message-ID: <20200920112322.24585-9-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

In order to add an alternative method of DMA-based SPI transfer, we first
need to detach the currently available one from the common code. Here we
move the normal DMA-based SPI transfer execution functionality into a
dedicated method. It will later be utilized if either the DMA engine
supports an unlimited number of SG entries or a Tx-only SPI transfer is
requested. For now, just use it for any SPI transfer.
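For orientation, this is where the split ends up by the end of the series
(the dispatch condition below is taken from patch 11):

	nents = max(xfer->tx_sg.nents, xfer->rx_sg.nents);

	if (!dws->dma_sg_burst || !xfer->rx_buf || nents <= dws->dma_sg_burst)
		ret = dw_spi_dma_transfer_all(dws, xfer);
	else
		ret = dw_spi_dma_transfer_one(dws, xfer);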
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 9f70818acce6..f2baefcae9ae 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -441,7 +441,8 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 	return 0;
 }
 
-static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_transfer_all(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
 {
 	int ret;
 
@@ -462,7 +463,14 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	return dw_spi_dma_wait(dws, xfer);
+}
+
+static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	int ret;
+
+	ret = dw_spi_dma_transfer_all(dws, xfer);
 	if (ret)
 		return ret;

From patchwork Sun Sep 20 11:23:20 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787541
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 09/11] spi: dw-dma: Move DMAC register cleanup to DMA transfer method
Date: Sun, 20 Sep 2020 14:23:20 +0300
Message-ID: <20200920112322.24585-10-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

The DW APB SSI DMA driver hasn't used the native SPI core wait API since
commit bdbdf0f06337 ("spi: dw: Locally wait for the DMA transfers
completion"). Because of that, the driver can now clear the DMAC register
in a single place, synchronously with the DMA transaction completion or
failure. All the possible code paths remain covered:

1) The DMA completion callbacks are executed when the corresponding DMA
transactions are finished. When they are, one of them will eventually wake
the SPI message pump kernel thread, and the dw_spi_dma_transfer_all()
method will clear the DMAC register as implied by this patch.

2) dma_stop is called when the SPI core detects an error either returned
from the transfer_one() callback or set in the SPI message status field.
Both types of errors will be noticed by the dw_spi_dma_transfer_all()
method.

3) dma_exit is called when either the SPI controller driver or the
corresponding device is removed. In any case the SPI core will first flush
the SPI message pump kernel thread, so any pending or in-flight SPI
transfers will have finished before that.

Given all of that, let's simplify the DW APB SSI DMA driver a bit and move
the DMAC register cleanup to a single place in the
dw_spi_dma_transfer_all() method.

Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index f2baefcae9ae..935f073a3523 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -152,8 +152,6 @@ static void dw_spi_dma_exit(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		dma_release_channel(dws->rxchan);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static irqreturn_t dw_spi_dma_transfer_handler(struct dw_spi *dws)
@@ -252,7 +250,6 @@ static void dw_spi_dma_tx_done(void *arg)
 	if (test_bit(RX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -355,7 +352,6 @@ static void dw_spi_dma_rx_done(void *arg)
 	if (test_bit(TX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -449,13 +445,13 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	/* Submit the DMA Tx transfer */
 	ret = dw_spi_dma_submit_tx(dws, xfer);
 	if (ret)
-		return ret;
+		goto err_clear_dmac;
 
 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
 		ret = dw_spi_dma_submit_rx(dws, xfer);
 		if (ret)
-			return ret;
+			goto err_clear_dmac;
 
 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
@@ -463,7 +459,12 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	return dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer);
+
+err_clear_dmac:
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
 }
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -496,8 +497,6 @@ static void dw_spi_dma_stop(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		clear_bit(RX_BUSY, &dws->dma_chan_busy);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static const struct dw_spi_dma_ops dw_spi_dma_mfld_ops = {
From patchwork Sun Sep 20 11:23:21 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787561
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 10/11] spi: dw-dma: Pass exact data to the DMA submit and wait methods
Date: Sun, 20 Sep 2020 14:23:21 +0300
Message-ID: <20200920112322.24585-11-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

In order to use the DMA submission and waiting methods in both the generic
DMA-based SPI transfer and the one-by-one DMA SG entries transmission
functions, we need to modify the dw_spi_dma_wait() and
dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. So instead of
getting the SPI transfer object as the second argument, they must accept
the exact data they are implied to use: the current transfer length and
the SPI bus frequency in the case of dw_spi_dma_wait(), and the SG list
together with its number of entries in the case of the DMA Tx/Rx
submission methods.
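A sketch of the resulting call sites, with the timeout math spelled out
(the worked numbers are illustrative): dw_spi_dma_wait() computes
ms = len * 8 * 1000 / speed and then doubles it plus a 200 ms margin, so
e.g. a 4096-byte transfer at 10 MHz yields 3 ms and thus a 206 ms timeout:

	/* Full-transfer path: length and speed come from the transfer */
	ret = dw_spi_dma_wait(dws, xfer->len, xfer->effective_speed_hz);

	/* Per-SG-entry path added later in the series: 'len' is the
	 * current chunk length, the speed stays the same */
	ret = dw_spi_dma_wait(dws, len, xfer->effective_speed_hz);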
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 935f073a3523..f333c2e23bf6 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -188,12 +188,12 @@ static enum dma_slave_buswidth dw_spi_dma_convert_width(u8 n_bytes)
 	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
 }
 
-static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_wait(struct dw_spi *dws, unsigned int len, u32 speed)
 {
 	unsigned long long ms;
 
-	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
-	do_div(ms, xfer->effective_speed_hz);
+	ms = len * MSEC_PER_SEC * BITS_PER_BYTE;
+	do_div(ms, speed);
 	ms += ms + 200;
 
 	if (ms > UINT_MAX)
@@ -268,17 +268,16 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	txdesc = dmaengine_prep_slave_sg(dws->txchan,
-				xfer->tx_sg.sgl,
-				xfer->tx_sg.nents,
-				DMA_MEM_TO_DEV,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	txdesc = dmaengine_prep_slave_sg(dws->txchan, sgl, nents,
+					 DMA_MEM_TO_DEV,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
 		return -ENOMEM;
 
@@ -370,17 +369,16 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
-				xfer->rx_sg.sgl,
-				xfer->rx_sg.nents,
-				DMA_DEV_TO_MEM,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	rxdesc = dmaengine_prep_slave_sg(dws->rxchan, sgl, nents,
+					 DMA_DEV_TO_MEM,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
 		return -ENOMEM;
 
@@ -443,13 +441,14 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	int ret;
 
 	/* Submit the DMA Tx transfer */
-	ret = dw_spi_dma_submit_tx(dws, xfer);
+	ret = dw_spi_dma_submit_tx(dws, xfer->tx_sg.sgl, xfer->tx_sg.nents);
 	if (ret)
 		goto err_clear_dmac;
 
 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		ret = dw_spi_dma_submit_rx(dws, xfer);
+		ret = dw_spi_dma_submit_rx(dws, xfer->rx_sg.sgl,
+					   xfer->rx_sg.nents);
 		if (ret)
 			goto err_clear_dmac;
 
@@ -459,7 +458,7 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer->len, xfer->effective_speed_hz);
 
 err_clear_dmac:
 	dw_writel(dws, DW_SPI_DMACR, 0);

From patchwork Sun Sep 20 11:23:22 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 11787563
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 11/11] spi: dw-dma: Add one-by-one SG list entries transfer
Date: Sun, 20 Sep 2020 14:23:22 +0300
Message-ID: <20200920112322.24585-12-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
List-ID: linux-spi@vger.kernel.org

If at least one of the requested DMA engine channels doesn't support
hardware-accelerated traversal of the SG list entries, the DMA driver will
most likely work that around by performing IRQ-based SG list entries
resubmission. That might and will cause a problem if the DMA Tx channel is
recharged and re-executed before the Rx DMA channel. Due to the
non-deterministic IRQ-handler execution latency, the DMA Tx channel may
start pushing data to the SPI bus before the Rx DMA channel is even
reinitialized with the next inbound SG list entry. By doing so, the DMA Tx
channel implicitly starts filling the DW APB SSI Rx FIFO up, which, while
the DMA Rx channel is being recharged and re-executed, will eventually
overflow.

In order to solve the problem we have to feed the DMA engine with the SG
list entries one-by-one. That keeps the DW APB SSI Tx and Rx FIFOs
synchronized and prevents the Rx FIFO overflow. Since in general the SPI
tx_sg and rx_sg lists may have different numbers of entries of different
lengths (though the total lengths must match), we virtually split the SG
lists into a set of DMA transfers, each of which is as long as the minimum
of the overlapping SG-entry lengths.

The solution described above is only executed if a full-duplex SPI
transfer is requested and the DMA engine hasn't provided channels with a
hardware-accelerated SG list traversal capability sufficient to handle
both SG lists at once.
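A worked instance of the virtual split described above, with hypothetical
SG-entry lengths (both lists totalling 9 bytes), in the shape of a small
self-contained loop:

	/*
	 * tx entries: {3, 4, 2}, rx entries: {1, 4, 4}. Always taking the
	 * minimum of the remaining overlap yields DMA transfers of
	 * 1, 2, 2, 2 and 2 bytes, which keeps both FIFOs in lockstep.
	 */
	unsigned int tx[] = {3, 4, 2}, rx[] = {1, 4, 4};
	unsigned int i = 0, j = 0, tx_len = tx[0], rx_len = rx[0];

	while (i < 3 && j < 3) {
		unsigned int len = min(tx_len, rx_len);

		/* submit one Tx and one Rx DMA transfer of 'len' bytes */

		tx_len -= len;
		rx_len -= len;
		if (!tx_len && ++i < 3)
			tx_len = tx[i];
		if (!rx_len && ++j < 3)
			rx_len = rx[j];
	}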
Signed-off-by: Serge Semin
Suggested-by: Andy Shevchenko
---
 drivers/spi/spi-dw-dma.c | 137 ++++++++++++++++++++++++++++++++++++++-
 drivers/spi/spi-dw.h     |   1 +
 2 files changed, 137 insertions(+), 1 deletion(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index f333c2e23bf6..1cbb5a9efbba 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -72,6 +72,23 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
+static void dw_spi_dma_sg_burst_init(struct dw_spi *dws)
+{
+	struct dma_slave_caps tx = {0}, rx = {0};
+
+	dma_get_slave_caps(dws->txchan, &tx);
+	dma_get_slave_caps(dws->rxchan, &rx);
+
+	if (tx.max_sg_burst > 0 && rx.max_sg_burst > 0)
+		dws->dma_sg_burst = min(tx.max_sg_burst, rx.max_sg_burst);
+	else if (tx.max_sg_burst > 0)
+		dws->dma_sg_burst = tx.max_sg_burst;
+	else if (rx.max_sg_burst > 0)
+		dws->dma_sg_burst = rx.max_sg_burst;
+	else
+		dws->dma_sg_burst = 0;
+}
+
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 {
 	struct dw_dma_slave dma_tx = { .dst_id = 1 }, *tx = &dma_tx;
@@ -109,6 +126,8 @@ static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 
 free_rxchan:
@@ -138,6 +157,8 @@ static int dw_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 }
 
@@ -466,11 +487,125 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	return ret;
 }
 
+/*
+ * In case if at least one of the requested DMA channels doesn't support the
+ * hardware accelerated SG list entries traverse, the DMA driver will most
+ * likely work that around by performing the IRQ-based SG list entries
+ * resubmission. That might and will cause a problem if the DMA Tx channel is
+ * recharged and re-executed before the Rx DMA channel. Due to
+ * non-deterministic IRQ-handler execution latency the DMA Tx channel will
+ * start pushing data to the SPI bus before the Rx DMA channel is even
+ * reinitialized with the next inbound SG list entry. By doing so the DMA Tx
+ * channel will implicitly start filling the DW APB SSI Rx FIFO up, which while
+ * the DMA Rx channel being recharged and re-executed will eventually be
+ * overflown.
+ *
+ * In order to solve the problem we have to feed the DMA engine with SG list
+ * entries one-by-one. It shall keep the DW APB SSI Tx and Rx FIFOs
+ * synchronized and prevent the Rx FIFO overflow. Since in general the tx_sg
+ * and rx_sg lists may have different number of entries of different lengths
+ * (though total length should match) let's virtually split the SG-lists to the
+ * set of DMA transfers, which length is a minimum of the ordered SG-entries
+ * lengths. An ASCII-sketch of the implemented algo is following:
+ *                    xfer->len
+ *                |___________|
+ * tx_sg list:    |___|____|__|
+ * rx_sg list:    |_|____|____|
+ * DMA transfers: |_|_|__|_|__|
+ *
+ * Note in order to have this workaround solving the denoted problem the DMA
+ * engine driver should properly initialize the max_sg_burst capability and set
+ * the DMA device max segment size parameter with maximum data block size the
+ * DMA engine supports.
+ */
+static int dw_spi_dma_transfer_one(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
+{
+	struct scatterlist *tx_sg = NULL, *rx_sg = NULL, tx_tmp, rx_tmp;
+	unsigned int tx_len = 0, rx_len = 0;
+	unsigned int base, len;
+	int ret;
+
+	sg_init_table(&tx_tmp, 1);
+	sg_init_table(&rx_tmp, 1);
+
+	for (base = 0, len = 0; base < xfer->len; base += len) {
+		/* Fetch next Tx DMA data chunk */
+		if (!tx_len) {
+			tx_sg = !tx_sg ? &xfer->tx_sg.sgl[0] : sg_next(tx_sg);
+			sg_dma_address(&tx_tmp) = sg_dma_address(tx_sg);
+			tx_len = sg_dma_len(tx_sg);
+		}
+
+		/* Fetch next Rx DMA data chunk */
+		if (!rx_len) {
+			rx_sg = !rx_sg ? &xfer->rx_sg.sgl[0] : sg_next(rx_sg);
+			sg_dma_address(&rx_tmp) = sg_dma_address(rx_sg);
+			rx_len = sg_dma_len(rx_sg);
+		}
+
+		len = min(tx_len, rx_len);
+
+		sg_dma_len(&tx_tmp) = len;
+		sg_dma_len(&rx_tmp) = len;
+
+		/* Submit DMA Tx transfer */
+		ret = dw_spi_dma_submit_tx(dws, &tx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Submit DMA Rx transfer */
+		ret = dw_spi_dma_submit_rx(dws, &rx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Rx must be started before Tx due to SPI instinct */
+		dma_async_issue_pending(dws->rxchan);
+
+		dma_async_issue_pending(dws->txchan);
+
+		/*
+		 * Here we only need to wait for the DMA transfer to be
+		 * finished since SPI controller is kept enabled during the
+		 * procedure this loop implements and there is no risk to lose
+		 * data left in the Tx/Rx FIFOs.
+		 */
+		ret = dw_spi_dma_wait(dws, len, xfer->effective_speed_hz);
+		if (ret)
+			break;
+
+		reinit_completion(&dws->dma_completion);
+
+		sg_dma_address(&tx_tmp) += len;
+		sg_dma_address(&rx_tmp) += len;
+		tx_len -= len;
+		rx_len -= len;
+	}
+
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
+}
+
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
+	unsigned int nents;
 	int ret;
 
-	ret = dw_spi_dma_transfer_all(dws, xfer);
+	nents = max(xfer->tx_sg.nents, xfer->rx_sg.nents);
+
+	/*
+	 * Execute normal DMA-based transfer (which submits the Rx and Tx SG
+	 * lists directly to the DMA engine at once) if either full hardware
+	 * accelerated SG list traverse is supported by both channels, or the
+	 * Tx-only SPI transfer is requested, or the DMA engine is capable to
+	 * handle both SG lists on hardware accelerated basis.
+	 */
+	if (!dws->dma_sg_burst || !xfer->rx_buf || nents <= dws->dma_sg_burst)
+		ret = dw_spi_dma_transfer_all(dws, xfer);
+	else
+		ret = dw_spi_dma_transfer_one(dws, xfer);
 	if (ret)
 		return ret;
 
diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
index 151ba316619e..1d201c62d292 100644
--- a/drivers/spi/spi-dw.h
+++ b/drivers/spi/spi-dw.h
@@ -146,6 +146,7 @@ struct dw_spi {
 	u32			txburst;
 	struct dma_chan		*rxchan;
 	u32			rxburst;
+	u32			dma_sg_burst;
 	unsigned long		dma_chan_busy;
 	dma_addr_t		dma_addr; /* phy address of the Data register */
 	const struct dw_spi_dma_ops *dma_ops;