From patchwork Fri Jul 31 07:59:46 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 1/8] spi: dw-dma: Set DMA Level registers on init
Date: Fri, 31 Jul 2020 10:59:46 +0300
Message-ID: <20200731075953.14416-2-Sergey.Semin@baikalelectronics.ru>

The DMA Level registers content doesn't get cleared when the SPI
controller is disabled and re-enabled. The max burst lengths don't
change either, since the Rx and Tx DMA channels are requested at the
init stage and stay acquired until the device is removed, and the SPI
controller FIFO depth obviously can't change. Given all of that, we can
safely move the DMA Transmit and Receive data level registers
initialization to the SPI controller DMA init stage (when the SPI
controller is being probed) instead of redoing it for each SPI transfer
in dma_setup(). This speeds up DMA-based SPI transfer initialization a
bit, particularly when the APB bus is relatively slow.
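For reference, the post-patch init-time flow reduces to roughly the
following (a sketch reconstructed from the diff below, with the
capability queries and burst-length math elided):

	static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
	{
		/* ... query slave caps, pick max_burst/def_burst ... */

		dws->rxburst = min(max_burst, def_burst);
		/* Rx DMA request fires once >= rxburst words sit in the Rx FIFO */
		dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);

		/* ... same capability dance for the Tx channel ... */

		dws->txburst = min(max_burst, def_burst);
		/* Tx DMA request fires once the Tx FIFO level drops to txburst or below */
		dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
	}

Since dma_setup() no longer touches the two level registers, the
per-transfer path only programs DMACR and the interrupt mask.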
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index bb390ff67d1d..440679fa0764 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -49,6 +49,7 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 		max_burst = RX_BURST_LEVEL;
 
 	dws->rxburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
 
 	ret = dma_get_slave_caps(dws->txchan, &caps);
 	if (!ret && caps.max_burst)
@@ -56,7 +57,20 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	else
 		max_burst = TX_BURST_LEVEL;
 
+	/*
+	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
+	 * channel might not be enough to provide a well balanced DMA-based
+	 * SPI transfer interface. There might still be moments when the Tx DMA
+	 * channel is occasionally handled faster than the Rx DMA channel.
+	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
+	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
+	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
+	 * DMA activity is intentionally slowed down by limiting the SPI Tx
+	 * FIFO depth with a value twice bigger than the Tx burst length
+	 * calculated earlier by the dw_spi_dma_maxburst_init() method.
+	 */
 	dws->txburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
@@ -372,21 +386,6 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr = 0, dma_ctrl = 0;
 
-	/*
-	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
-	 * channel might not be enough to provide a well balanced DMA-based
-	 * SPI transfer interface. There might still be moments when the Tx DMA
-	 * channel is occasionally handled faster than the Rx DMA channel.
-	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
-	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
-	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
-	 * DMA activity is intentionally slowed down by limiting the SPI Tx
-	 * FIFO depth with a value twice bigger than the Tx burst length
-	 * calculated earlier by the dw_spi_dma_maxburst_init() method.
-	 */
-	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
-	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
-
 	if (xfer->tx_buf)
 		dma_ctrl |= SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
From patchwork Fri Jul 31 07:59:47 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 2/8] spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
Date: Fri, 31 Jul 2020 10:59:47 +0300
Message-ID: <20200731075953.14416-3-Sergey.Semin@baikalelectronics.ru>

Since commit 46164fde6b78 ("spi: dw: Fix Rx-only DMA transfers"), if the
DMA interface is enabled, a Tx buffer must be available in each SPI
transfer. It's required because in order to activate the incoming data
reception either DMA or the CPU must be pushing data out to the SPI
bus. But the DW APB SSI DMA driver code is still written as if the Tx
buffer were optional, which is no longer true. Let's fix that so an
error is returned if no Tx buffer is detected, and DMA Tx is always
enabled.
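One caller-facing consequence is worth spelling out: an Rx-only
exchange over DMA now has to be fed with dummy Tx data to clock the
bus. A hypothetical client-side illustration (dummy_buf and the
peripheral-driver context are assumptions, not part of this patch;
whether the SPI core supplies such dummy bytes itself via the
SPI_MASTER_MUST_TX flag depends on the controller setup and is likewise
assumed here):

	struct spi_transfer xfer = {
		.tx_buf = dummy_buf,	/* e.g. a zero-filled scratch buffer */
		.rx_buf = rx_buf,
		.len    = len,
	};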
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 440679fa0764..ec721af61663 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -263,9 +263,6 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	struct dma_slave_config txconf;
 	struct dma_async_tx_descriptor *txdesc;
 
-	if (!xfer->tx_buf)
-		return NULL;
-
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
 	txconf.dst_addr = dws->dma_addr;
@@ -384,17 +381,19 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	u16 imr = 0, dma_ctrl = 0;
+	u16 imr, dma_ctrl;
 
-	if (xfer->tx_buf)
-		dma_ctrl |= SPI_DMA_TDMAE;
+	if (!xfer->tx_buf)
+		return -EINVAL;
+
+	/* Set the DMA handshaking interface */
+	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
 		dma_ctrl |= SPI_DMA_RDMAE;
 	dw_writel(dws, DW_SPI_DMACR, dma_ctrl);
 
 	/* Set the interrupt mask */
-	if (xfer->tx_buf)
-		imr |= SPI_INT_TXOI;
+	imr = SPI_INT_TXOI;
 	if (xfer->rx_buf)
 		imr |= SPI_INT_RXUI | SPI_INT_RXOI;
 	spi_umask_intr(dws, imr);
@@ -413,6 +412,8 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	/* Prepare the TX dma transfer */
 	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
+	if (!txdesc)
+		return -EINVAL;
 
 	/* Prepare the RX dma transfer */
 	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
@@ -424,17 +425,15 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	if (txdesc) {
-		set_bit(TX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(txdesc);
-		dma_async_issue_pending(dws->txchan);
-	}
+	set_bit(TX_BUSY, &dws->dma_chan_busy);
+	dmaengine_submit(txdesc);
+	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
 	if (ret)
 		return ret;
 
-	if (txdesc && dws->master->cur_msg->status == -EINPROGRESS) {
+	if (dws->master->cur_msg->status == -EINPROGRESS) {
 		ret = dw_spi_dma_wait_tx_done(dws, xfer);
 		if (ret)
 			return ret;

From patchwork Fri Jul 31 07:59:48 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 3/8] spi: dw-dma: Configure the DMA channels in dma_setup
Date: Fri, 31 Jul 2020 10:59:48 +0300
Message-ID: <20200731075953.14416-4-Sergey.Semin@baikalelectronics.ru>

Mainly this is a preparatory patch before adding one-by-one DMA SG
entries transmission. But logically the Tx and Rx DMA channel setup
should be performed in the dma_setup() callback anyway, so let's move
the DMA slave channels' src/dst burst lengths, address and address
width configuration to the DMA setup stage.

While at it, make sure the return value of the dmaengine_slave_config()
method is checked. That has been unnecessary when the DW DMAC is
utilized as the DMA engine, since its device_config() callback always
returns zero (though that might change in the future). But since the DW
APB SSI driver now supports any DMA back-end, we must make sure the DMA
device configuration succeeded before proceeding with further setup.
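After the move, the Tx channel configuration collapses into a small
helper whose error code finally propagates (a sketch assembled from the
hunks below; the dst_maxburst/src_addr_width lines come from the
surrounding driver context and are assumed here, and the Rx variant is
symmetric):

	static int dw_spi_dma_config_tx(struct dw_spi *dws)
	{
		struct dma_slave_config txconf;

		memset(&txconf, 0, sizeof(txconf));
		txconf.direction = DMA_MEM_TO_DEV;
		txconf.dst_addr = dws->dma_addr;
		txconf.dst_maxburst = dws->txburst;
		txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
		txconf.device_fc = false;

		/* Checked by dw_spi_dma_setup() from now on */
		return dmaengine_slave_config(dws->txchan, &txconf);
	}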
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 42 +++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index ec721af61663..56496b659d62 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -257,11 +257,9 @@ static void dw_spi_dma_tx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_config_tx(struct dw_spi *dws)
 {
 	struct dma_slave_config txconf;
-	struct dma_async_tx_descriptor *txdesc;
 
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
@@ -271,7 +269,13 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	txconf.device_fc = false;
 
-	dmaengine_slave_config(dws->txchan, &txconf);
+	return dmaengine_slave_config(dws->txchan, &txconf);
+}
+
+static struct dma_async_tx_descriptor *
+dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *txdesc;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -346,14 +350,9 @@ static void dw_spi_dma_rx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_config_rx(struct dw_spi *dws)
 {
 	struct dma_slave_config rxconf;
-	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;
 
 	memset(&rxconf, 0, sizeof(rxconf));
 	rxconf.direction = DMA_DEV_TO_MEM;
@@ -363,7 +362,16 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxconf.src_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	rxconf.device_fc = false;
 
-	dmaengine_slave_config(dws->rxchan, &rxconf);
+	return dmaengine_slave_config(dws->rxchan, &rxconf);
+}
+
+static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+		struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *rxdesc;
+
+	if (!xfer->rx_buf)
+		return NULL;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -382,10 +390,22 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr, dma_ctrl;
+	int ret;
 
 	if (!xfer->tx_buf)
 		return -EINVAL;
 
+	/* Setup DMA channels */
+	ret = dw_spi_dma_config_tx(dws);
+	if (ret)
+		return ret;
+
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_config_rx(dws);
+		if (ret)
+			return ret;
+	}
+
 	/* Set the DMA handshaking interface */
 	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
From patchwork Fri Jul 31 07:59:49 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 4/8] spi: dw-dma: Move DMA transfers submission to the channels prep methods
Date: Fri, 31 Jul 2020 10:59:49 +0300
Message-ID: <20200731075953.14416-5-Sergey.Semin@baikalelectronics.ru>

We can freely move the dmaengine_submit() invocation and the Tx and Rx
busy flag setting into the DMA Tx/Rx prepare methods. By doing so,
first, we implement another preparation step before adding the
one-by-one DMA SG entries transmission; second, the
dma_async_tx_descriptor is now used only locally in the new DMA
transfer submission methods, which makes the code less complex by not
passing the DMA Tx descriptors around; third, the generic transfer
method gets more readable, with the submission, execution and wait
procedures transparently split up instead of being intermixed. While at
it, we also test the dmaengine_submit() return value. That has been
unnecessary for the DW DMAC, but should be done to support the generic
DMA interface.

Note that since the DMA channel preparation methods are now responsible
for submitting the DMA transactions, we also rename them to
dw_spi_dma_submit_{tx,rx}().
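The resulting submission helper owns the whole prepare-and-submit
sequence; this is the Tx side as it looks after the rename (reassembled
from the hunks below; the Rx side mirrors it):

	static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
	{
		struct dma_async_tx_descriptor *txdesc;
		dma_cookie_t cookie;
		int ret;

		txdesc = dmaengine_prep_slave_sg(dws->txchan,
					xfer->tx_sg.sgl, xfer->tx_sg.nents,
					DMA_MEM_TO_DEV,
					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
		if (!txdesc)
			return -ENOMEM;

		txdesc->callback = dw_spi_dma_tx_done;
		txdesc->callback_param = dws;

		/* Unlike before, a descriptor submission failure is propagated */
		cookie = dmaengine_submit(txdesc);
		ret = dma_submit_error(cookie);
		if (!ret)
			set_bit(TX_BUSY, &dws->dma_chan_busy);

		return ret;
	}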
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 56 ++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 25 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 56496b659d62..366978086781 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -272,10 +272,11 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
+	dma_cookie_t cookie;
+	int ret;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -283,12 +284,17 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 				DMA_MEM_TO_DEV,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
-		return NULL;
+		return -ENOMEM;
 
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
 
-	return txdesc;
+	cookie = dmaengine_submit(txdesc);
+	ret = dma_submit_error(cookie);
+	if (!ret)
+		set_bit(TX_BUSY, &dws->dma_chan_busy);
+
+	return ret;
 }
 
 static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
@@ -365,13 +371,11 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;
+	dma_cookie_t cookie;
+	int ret;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -379,12 +383,17 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 				DMA_DEV_TO_MEM,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
-		return NULL;
+		return -ENOMEM;
 
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
 
-	return rxdesc;
+	cookie = dmaengine_submit(rxdesc);
+	ret = dma_submit_error(cookie);
+	if (!ret)
+		set_bit(RX_BUSY, &dws->dma_chan_busy);
+
+	return ret;
 }
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -427,26 +436,23 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;
 
-	/* Prepare the TX dma transfer */
-	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
-	if (!txdesc)
-		return -EINVAL;
+	/* Submit DMA Tx transfer */
+	ret = dw_spi_dma_submit_tx(dws, xfer);
+	if (ret)
+		return ret;
 
-	/* Prepare the RX dma transfer */
-	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+	/* Submit DMA Rx transfer if required */
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_submit_rx(dws, xfer);
+		if (ret)
+			return ret;
 
-	/* rx must be started before tx due to spi instinct */
-	if (rxdesc) {
-		set_bit(RX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(rxdesc);
+		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	set_bit(TX_BUSY, &dws->dma_chan_busy);
-	dmaengine_submit(txdesc);
 	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
@@ -459,7 +465,7 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 			return ret;
 	}
 
-	if (rxdesc && dws->master->cur_msg->status == -EINPROGRESS)
+	if (xfer->rx_buf && dws->master->cur_msg->status == -EINPROGRESS)
 		ret = dw_spi_dma_wait_rx_done(dws);
 
 	return ret;
From patchwork Fri Jul 31 07:59:50 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 5/8] spi: dw-dma: Detach DMA transfer into a dedicated method
Date: Fri, 31 Jul 2020 10:59:50 +0300
Message-ID: <20200731075953.14416-6-Sergey.Semin@baikalelectronics.ru>

In order to add an alternative method of DMA-based SPI transfer, first
we need to detach the currently available one from the common code.
Here we move the normal DMA-based SPI transfer execution functionality
into a dedicated method. It will later be utilized if either the DMA
engine supports an unlimited number of SG entries or a Tx-only SPI
transfer is requested; for now, just use it for any SPI transfer.
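The detachment leaves dw_spi_dma_transfer() as a thin wrapper (a sketch
matching the diff below; the Tx/Rx done-waiting tail of the function is
unchanged and elided here):

	static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
	{
		int ret;

		/* Currently the only implementation; an SG-entry-wise
		 * alternative is introduced later in this series.
		 */
		ret = dw_spi_dma_transfer_all(dws, xfer);
		if (ret)
			return ret;

		/* ... Tx/Rx done-waiting continues here as before ... */
	}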
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 366978086781..32ef7913a73d 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -434,7 +434,8 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 	return 0;
 }
 
-static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_transfer_all(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
 {
 	int ret;
 
@@ -455,7 +456,14 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	return dw_spi_dma_wait(dws, xfer);
+}
+
+static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	int ret;
+
+	ret = dw_spi_dma_transfer_all(dws, xfer);
 	if (ret)
 		return ret;
 

From patchwork Fri Jul 31 07:59:51 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 6/8] spi: dw-dma: Move DMAC register cleanup to DMA transfer method
Date: Fri, 31 Jul 2020 10:59:51 +0300
Message-ID: <20200731075953.14416-7-Sergey.Semin@baikalelectronics.ru>

The DW APB SSI DMA driver hasn't used the native SPI core wait API
since commit bdbdf0f06337 ("spi: dw: Locally wait for the DMA transfers
completion"). Thanks to that, the driver can now clear the DMAC
register in a single place, synchronously with the DMA transactions
completion or failure. All the possible code paths are still covered:

1) The DMA completion callbacks are executed when the corresponding DMA
transactions are finished. When they are, one of them will eventually
wake the SPI messages pump kernel thread, and the
dw_spi_dma_transfer_all() method will clear the DMAC register as
implied by this patch.

2) dma_stop is called when the SPI core detects an error either
returned from the transfer_one() callback or set in the SPI message
status field. Both types of errors will be noticed by the
dw_spi_dma_transfer_all() method.

3) dma_exit is called when either the SPI controller driver or the
corresponding device is removed. In any case the SPI core will first
flush the SPI messages pump kernel thread, so any pending or in-flight
SPI transfers will be finished before that.

Given all of that, let's simplify the DW APB SSI DMA driver a bit and
move the DMAC register cleanup to a single place in the
dw_spi_dma_transfer_all() method.
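In code, the single cleanup point looks roughly like this (a condensed
sketch of the resulting dw_spi_dma_transfer_all() control flow,
matching the diff below):

	ret = dw_spi_dma_submit_tx(dws, xfer);
	if (ret)
		goto err_clear_dmac;

	/* ... Rx submission and dma_async_issue_pending() calls ... */

	ret = dw_spi_dma_wait(dws, xfer);

err_clear_dmac:
	/* One synchronous clear point for success, failure and timeout alike */
	dw_writel(dws, DW_SPI_DMACR, 0);

	return ret;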
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 32ef7913a73d..f8508c0c7978 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -153,8 +153,6 @@ static void dw_spi_dma_exit(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		dma_release_channel(dws->rxchan);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static irqreturn_t dw_spi_dma_transfer_handler(struct dw_spi *dws)
@@ -253,7 +251,6 @@ static void dw_spi_dma_tx_done(void *arg)
 	if (test_bit(RX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -352,7 +349,6 @@ static void dw_spi_dma_rx_done(void *arg)
 	if (test_bit(TX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -442,13 +438,13 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	/* Submit DMA Tx transfer */
 	ret = dw_spi_dma_submit_tx(dws, xfer);
 	if (ret)
-		return ret;
+		goto err_clear_dmac;
 
 	/* Submit DMA Rx transfer if required */
 	if (xfer->rx_buf) {
 		ret = dw_spi_dma_submit_rx(dws, xfer);
 		if (ret)
-			return ret;
+			goto err_clear_dmac;
 
 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
@@ -456,7 +452,12 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	return dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer);
+
+err_clear_dmac:
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
 }
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -489,8 +490,6 @@ static void dw_spi_dma_stop(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		clear_bit(RX_BUSY, &dws->dma_chan_busy);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static const struct dw_spi_dma_ops dw_spi_dma_mfld_ops = {

From patchwork Fri Jul 31 07:59:52 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 7/8] spi: dw-dma: Pass exact data to the DMA submit and wait methods
Date: Fri, 31 Jul 2020 10:59:52 +0300
Message-ID: <20200731075953.14416-8-Sergey.Semin@baikalelectronics.ru>

In order to use the DMA submission and waiting methods in both the
generic DMA-based SPI transfer and the one-by-one DMA SG entries
transmission functions, we need to modify the dw_spi_dma_wait() and
dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. Instead of
getting the SPI transfer object as the second argument, they must
accept the exact data they actually use: the current transfer length
and the SPI bus frequency in the case of dw_spi_dma_wait(), and the SG
list together with the number of list entries in the case of the DMA
Tx/Rx submission methods.
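As a worked example of the dw_spi_dma_wait() estimate with the new
arguments (hypothetical numbers): len = 4096 bytes at speed = 25000000
Hz gives 4096 * 8 * 1000 / 25000000 = 1 ms after integer truncation
(~1.3 ms on the wire), so the resulting timeout is 1 + 1 + 200 = 202 ms:

	unsigned long long ms;

	ms = len * MSEC_PER_SEC * BITS_PER_BYTE;	/* transfer time numerator */
	do_div(ms, speed);				/* ... in milliseconds */
	ms += ms + 200;					/* 2x margin plus 200 ms of slack */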
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index f8508c0c7978..2b42b42b6cf2 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -189,12 +189,12 @@ static enum dma_slave_buswidth dw_spi_dma_convert_width(u8 n_bytes)
 	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
 }
 
-static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_wait(struct dw_spi *dws, unsigned int len, u32 speed)
 {
 	unsigned long long ms;
 
-	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
-	do_div(ms, xfer->effective_speed_hz);
+	ms = len * MSEC_PER_SEC * BITS_PER_BYTE;
+	do_div(ms, speed);
 	ms += ms + 200;
 
 	if (ms > UINT_MAX)
@@ -269,17 +269,16 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	txdesc = dmaengine_prep_slave_sg(dws->txchan,
-				xfer->tx_sg.sgl,
-				xfer->tx_sg.nents,
-				DMA_MEM_TO_DEV,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	txdesc = dmaengine_prep_slave_sg(dws->txchan, sgl, nents,
+					 DMA_MEM_TO_DEV,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
 		return -ENOMEM;
 
@@ -367,17 +366,16 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
-				xfer->rx_sg.sgl,
-				xfer->rx_sg.nents,
-				DMA_DEV_TO_MEM,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	rxdesc = dmaengine_prep_slave_sg(dws->rxchan, sgl, nents,
+					 DMA_DEV_TO_MEM,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
 		return -ENOMEM;
 
@@ -436,13 +434,14 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	int ret;
 
 	/* Submit DMA Tx transfer */
-	ret = dw_spi_dma_submit_tx(dws, xfer);
+	ret = dw_spi_dma_submit_tx(dws, xfer->tx_sg.sgl, xfer->tx_sg.nents);
 	if (ret)
 		goto err_clear_dmac;
 
 	/* Submit DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		ret = dw_spi_dma_submit_rx(dws, xfer);
+		ret = dw_spi_dma_submit_rx(dws, xfer->rx_sg.sgl,
+					   xfer->rx_sg.nents);
 		if (ret)
 			goto err_clear_dmac;
 
@@ -452,7 +451,7 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer->len, xfer->effective_speed_hz);
 
 err_clear_dmac:
 	dw_writel(dws, DW_SPI_DMACR, 0);

From patchwork Fri Jul 31 07:59:53 2020
From: Serge Semin
To: Mark Brown
Subject: [PATCH 8/8] spi: dw-dma: Add one-by-one SG list entries transfer
Date: Fri, 31 Jul 2020 10:59:53 +0300
Message-ID: <20200731075953.14416-9-Sergey.Semin@baikalelectronics.ru>

If at least one of the requested DMA engine channels doesn't support
hardware-accelerated SG list traversal, the DMA driver will most likely
work around that by performing IRQ-based SG list entry resubmission.
That may and will cause a problem if the DMA Tx channel is recharged
and re-executed before the Rx DMA channel. Due to non-deterministic
IRQ-handler execution latency, the DMA Tx channel can start pushing
data to the SPI bus before the Rx DMA channel has even been
reinitialized with the next inbound SG list entry. In doing so the DMA
Tx channel implicitly starts filling the DW APB SSI Rx FIFO, which,
while the DMA Rx channel is being recharged and re-executed, will
eventually overflow.

To solve the problem we have to feed the DMA engine with SG list
entries one by one, which keeps the DW APB SSI Tx and Rx FIFOs
synchronized and prevents the Rx FIFO overflow. Since in general the
SPI tx_sg and rx_sg lists may have different numbers of entries of
different lengths (though the total lengths must match), we virtually
split the SG lists into a set of DMA transfers whose length is the
minimum of the ordered SG entries' lengths. The solution described
above is only executed if a full-duplex SPI transfer is requested and
the DMA engine hasn't provided channels with hardware-accelerated SG
list traversal capable of handling both SG lists at once.
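To make the splitting rule concrete, take a hypothetical full-duplex
transfer of 18 bytes mapped as tx_sg entries of {6, 8, 4} bytes and
rx_sg entries of {3, 8, 7} bytes: the min-walk yields DMA transfers of
3, 3, 5, 3 and 4 bytes. A standalone sketch of just the length
arithmetic (the array names are illustrative; min() stands in for the
kernel macro):

	unsigned int tx_lens[] = { 6, 8, 4 }, rx_lens[] = { 3, 8, 7 };
	unsigned int ti = 0, ri = 0, tx_len = 0, rx_len = 0, base, len;

	for (base = 0, len = 0; base < 18; base += len) {
		if (!tx_len)			/* fetch the next Tx entry */
			tx_len = tx_lens[ti++];
		if (!rx_len)			/* fetch the next Rx entry */
			rx_len = rx_lens[ri++];

		len = min(tx_len, rx_len);	/* one DMA transfer of 'len' bytes */

		tx_len -= len;
		rx_len -= len;
	}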
Signed-off-by: Serge Semin
Suggested-by: Andy Shevchenko
---
 drivers/spi/spi-dw-dma.c | 137 ++++++++++++++++++++++++++++++++++++++-
 drivers/spi/spi-dw.h     |   1 +
 2 files changed, 137 insertions(+), 1 deletion(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 2b42b42b6cf2..b3eeef3d5cf7 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -73,6 +73,23 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
+static void dw_spi_dma_sg_burst_init(struct dw_spi *dws)
+{
+	struct dma_slave_caps tx = {0}, rx = {0};
+
+	dma_get_slave_caps(dws->txchan, &tx);
+	dma_get_slave_caps(dws->rxchan, &rx);
+
+	if (tx.max_sg_burst > 0 && rx.max_sg_burst > 0)
+		dws->dma_sg_burst = min(tx.max_sg_burst, rx.max_sg_burst);
+	else if (tx.max_sg_burst > 0)
+		dws->dma_sg_burst = tx.max_sg_burst;
+	else if (rx.max_sg_burst > 0)
+		dws->dma_sg_burst = rx.max_sg_burst;
+	else
+		dws->dma_sg_burst = 0;
+}
+
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 {
 	struct dw_dma_slave dma_tx = { .dst_id = 1 }, *tx = &dma_tx;
@@ -110,6 +127,8 @@ static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 
 free_rxchan:
@@ -139,6 +158,8 @@ static int dw_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 }
 
@@ -459,11 +480,125 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	return ret;
 }
 
+static int dw_spi_dma_transfer_one(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
+{
+	struct scatterlist *tx_sg = NULL, *rx_sg = NULL, tx_tmp, rx_tmp;
+	unsigned int tx_len = 0, rx_len = 0;
+	unsigned int base, len;
+	int ret;
+
+	sg_init_table(&tx_tmp, 1);
+	sg_init_table(&rx_tmp, 1);
+
+	/*
+	 * In case if at least one of the requested DMA channels doesn't
+	 * support the hardware accelerated SG list entries traverse, the DMA
+	 * driver will most likely work that around by performing the IRQ-based
+	 * SG list entries resubmission. That might and will cause a problem
+	 * if the DMA Tx channel is recharged and re-executed before the Rx
+	 * DMA channel. Due to non-deterministic IRQ-handler execution latency
+	 * the DMA Tx channel will start pushing data to the SPI bus before
+	 * the Rx DMA channel is even reinitialized with the next inbound SG
+	 * list entry. By doing so the DMA Tx channel will implicitly start
+	 * filling the DW APB SSI Rx FIFO up, which while the DMA Rx channel
+	 * being recharged and re-executed will eventually be overflown.
+	 *
+	 * In order to solve the problem we have to feed the DMA engine with
+	 * SG list entries one-by-one. It shall keep the DW APB SSI Tx and Rx
+	 * FIFOs synchronized and prevent the Rx FIFO overflow. Since in
+	 * general the tx_sg and rx_sg lists may have different number of
+	 * entries of different lengths (though total length should match)
+	 * let's virtually split the SG-lists to the set of DMA transfers,
+	 * which length is a minimum of the ordered SG-entries lengths.
+	 * An ASCII-sketch of the implemented algo is following:
+	 *                  xfer->len
+	 *                |___________|
+	 * tx_sg list:    |___|____|__|
+	 * rx_sg list:    |_|____|____|
+	 * DMA transfers: |_|_|__|_|__|
+	 *
+	 * Note in order to have this workaround solving the denoted problem
+	 * the DMA engine driver should properly initialize the max_sg_burst
+	 * capability and set the DMA device max segment size parameter with
+	 * maximum data block size the DMA engine supports.
+	 */
+	for (base = 0, len = 0; base < xfer->len; base += len) {
+		/* Fetch next Tx DMA data chunk */
+		if (!tx_len) {
+			tx_sg = !tx_sg ? &xfer->tx_sg.sgl[0] : sg_next(tx_sg);
+			sg_dma_address(&tx_tmp) = sg_dma_address(tx_sg);
+			tx_len = sg_dma_len(tx_sg);
+		}
+
+		/* Fetch next Rx DMA data chunk */
+		if (!rx_len) {
+			rx_sg = !rx_sg ? &xfer->rx_sg.sgl[0] : sg_next(rx_sg);
+			sg_dma_address(&rx_tmp) = sg_dma_address(rx_sg);
+			rx_len = sg_dma_len(rx_sg);
+		}
+
+		len = min(tx_len, rx_len);
+
+		sg_dma_len(&tx_tmp) = len;
+		sg_dma_len(&rx_tmp) = len;
+
+		/* Submit DMA Tx transfer */
+		ret = dw_spi_dma_submit_tx(dws, &tx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Submit DMA Rx transfer */
+		ret = dw_spi_dma_submit_rx(dws, &rx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Rx must be started before Tx due to SPI instinct */
+		dma_async_issue_pending(dws->rxchan);
+
+		dma_async_issue_pending(dws->txchan);
+
+		/*
+		 * Here we only need to wait for the DMA transfer to be
+		 * finished since SPI controller is kept enabled during the
+		 * procedure this loop implements and there is no risk to
+		 * lose data left in the Tx/Rx FIFOs.
+		 */
+		ret = dw_spi_dma_wait(dws, len, xfer->effective_speed_hz);
+		if (ret)
+			break;
+
+		reinit_completion(&dws->dma_completion);
+
+		sg_dma_address(&tx_tmp) += len;
+		sg_dma_address(&rx_tmp) += len;
+		tx_len -= len;
+		rx_len -= len;
+	}
+
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
+}
+
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
+	unsigned int nents;
 	int ret;
 
-	ret = dw_spi_dma_transfer_all(dws, xfer);
+	nents = max(xfer->tx_sg.nents, xfer->rx_sg.nents);
+
+	/*
+	 * Execute normal DMA-based transfer (which submits the Rx and Tx SG
+	 * lists directly to the DMA engine at once) if either full hardware
+	 * accelerated SG list traverse is supported by both channels, or the
+	 * Tx-only SPI transfer is requested, or the DMA engine is capable to
+	 * handle both SG lists on hardware accelerated basis.
+	 */
+	if (!dws->dma_sg_burst || !xfer->rx_buf || nents <= dws->dma_sg_burst)
+		ret = dw_spi_dma_transfer_all(dws, xfer);
+	else
+		ret = dw_spi_dma_transfer_one(dws, xfer);
 	if (ret)
 		return ret;
 
diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
index 151ba316619e..1d201c62d292 100644
--- a/drivers/spi/spi-dw.h
+++ b/drivers/spi/spi-dw.h
@@ -146,6 +146,7 @@ struct dw_spi {
 	u32			txburst;
 	struct dma_chan		*rxchan;
 	u32			rxburst;
+	u32			dma_sg_burst;
 	unsigned long		dma_chan_busy;
 	dma_addr_t		dma_addr; /* phy address of the Data register */
 	const struct dw_spi_dma_ops	*dma_ops;
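Note the decision logic above depends on the DMA engine driver
exporting its SG limits truthfully. A hypothetical provider-side sketch
(the foo_* names and the FOO_MAX_BLOCK_SIZE constant are illustrative,
not part of this series; the device_caps callback and
dma_set_max_seg_size() are the generic dmaengine/DMA-mapping facilities
assumed here):

	static void foo_dma_caps(struct dma_chan *chan,
				 struct dma_slave_caps *caps)
	{
		/* 0 = unlimited HW SG traverse, 1 = one entry per submission */
		caps->max_sg_burst = 1;
	}

	/* ... in the DMA controller probe() ... */
	dd->device_caps = foo_dma_caps;
	dma_set_max_seg_size(dd->dev, FOO_MAX_BLOCK_SIZE);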