From patchwork Sun Aug 27 15:25:57 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tobias Schramm
X-Patchwork-Id: 13367310
From: Tobias Schramm
To: Mark Brown, Chen-Yu Tsai, Jernej Skrabec, Samuel Holland
Cc: linux-spi@vger.kernel.org, linux-sunxi@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Tobias Schramm
Subject: [PATCH 1/2] spi: sun6i: reduce DMA RX transfer width to single byte
Date: Sun, 27 Aug 2023 17:25:57 +0200
Message-ID: <20230827152558.5368-2-t.schramm@manjaro.org>
In-Reply-To: <20230827152558.5368-1-t.schramm@manjaro.org>
References: <20230827152558.5368-1-t.schramm@manjaro.org>
X-Mailing-List: linux-spi@vger.kernel.org

Through empirical testing it has been determined that RX SPI transfers
with DMA enabled sometimes return corrupted data. This is caused by one
or more bytes being lost during the DMA transfer from the SPI peripheral
to memory. It seems the RX FIFO within the SPI peripheral can become
confused when bus read accesses wider than a single byte are performed
on it during an active SPI transfer. Reduce the width of individual DMA
read accesses to the RX FIFO to a single byte to mitigate that issue.
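For readers less familiar with the dmaengine slave API, the change boils
down to requesting single-byte source accesses in the RX channel's
dma_slave_config, as the hunk below does. A minimal sketch, using the
placeholder names chan and fifo_addr rather than the driver's own
variables:

#include <linux/dmaengine.h>

/*
 * Illustrative sketch only, not the driver's exact code: configure the
 * RX DMA channel so that each bus read from the SPI RX FIFO is one byte
 * wide. "chan" is the RX dmaengine channel and "fifo_addr" the physical
 * address of the RX FIFO register (placeholder names).
 */
static int example_config_rx_chan(struct dma_chan *chan, dma_addr_t fifo_addr)
{
        struct dma_slave_config rxconf = {
                .direction      = DMA_DEV_TO_MEM,
                .src_addr       = fifo_addr,
                /* one byte per FIFO read access instead of 32-bit reads */
                .src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
                .src_maxburst   = 8,
        };

        return dmaengine_slave_config(chan, &rxconf);
}

The narrower access width costs some bus efficiency compared to 32-bit
reads, but avoids the FIFO corruption described above.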
Signed-off-by: Tobias Schramm
---
 drivers/spi/spi-sun6i.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
index 30d541612253..8fcb2696ec09 100644
--- a/drivers/spi/spi-sun6i.c
+++ b/drivers/spi/spi-sun6i.c
@@ -207,7 +207,7 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
 		struct dma_slave_config rxconf = {
 			.direction = DMA_DEV_TO_MEM,
 			.src_addr = sspi->dma_addr_rx,
-			.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
+			.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
 			.src_maxburst = 8,
 		};

From patchwork Sun Aug 27 15:25:58 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tobias Schramm
X-Patchwork-Id: 13367311
From: Tobias Schramm
To: Mark Brown, Chen-Yu Tsai, Jernej Skrabec, Samuel Holland
Cc: linux-spi@vger.kernel.org, linux-sunxi@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Tobias Schramm
Subject: [PATCH 2/2] spi: sun6i: fix race between DMA RX transfer completion and RX FIFO drain
Date: Sun, 27 Aug 2023 17:25:58 +0200
Message-ID: <20230827152558.5368-3-t.schramm@manjaro.org>
In-Reply-To: <20230827152558.5368-1-t.schramm@manjaro.org>
References: <20230827152558.5368-1-t.schramm@manjaro.org>
X-Mailing-List: linux-spi@vger.kernel.org

Previously the transfer complete IRQ immediately drained the RX FIFO,
reading any data remaining in the FIFO into the RX buffer. This
behaviour is correct when dealing with SPI in interrupt mode. However,
in DMA mode the transfer complete interrupt still fires as soon as all
bytes to be transferred have been stored in the FIFO. At that point,
data in the FIFO still needs to be picked up by the DMA engine. Thus the
drain procedure and the DMA engine end up racing to read from the RX
FIFO, corrupting any data read.
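For illustration, the pre-patch transfer-complete path looks roughly
like the simplified sketch below (condensed from the handler changed
later in this patch, not verbatim driver code); the unconditional drain
is what races with the RX DMA:

/*
 * Simplified sketch of the pre-patch transfer-complete handling: the
 * FIFO is drained unconditionally, even while a DMA transfer may still
 * be reading from the same FIFO.
 */
static irqreturn_t example_tc_irq(struct sun6i_spi *sspi, u32 status)
{
        if (status & SUN6I_INT_CTL_TC) {
                sun6i_spi_write(sspi, SUN6I_INT_STA_REG, SUN6I_INT_CTL_TC);
                sun6i_spi_drain_fifo(sspi); /* races with RX DMA in DMA mode */
                complete(&sspi->done);
                return IRQ_HANDLED;
        }

        return IRQ_NONE;
}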
Additionally, the RX buffer pointer is never adjusted according to DMA
progress in DMA mode, so calling the RX FIFO drain procedure in DMA mode
is a bug. Fix the corruption in DMA RX mode by draining the RX FIFO only
in interrupt mode. Also wait for the RX DMA to complete in DMA mode
before returning, to ensure all data has been copied to the supplied
memory buffer.

Signed-off-by: Tobias Schramm
---
 drivers/spi/spi-sun6i.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
index 8fcb2696ec09..57c828e73c44 100644
--- a/drivers/spi/spi-sun6i.c
+++ b/drivers/spi/spi-sun6i.c
@@ -102,6 +102,7 @@ struct sun6i_spi {
 	struct reset_control	*rstc;
 
 	struct completion	done;
+	struct completion	dma_rx_done;
 
 	const u8	*tx_buf;
 	u8		*rx_buf;
@@ -196,6 +197,13 @@ static size_t sun6i_spi_max_transfer_size(struct spi_device *spi)
 	return SUN6I_MAX_XFER_SIZE - 1;
 }
 
+static void sun6i_spi_dma_rx_cb(void *param)
+{
+	struct sun6i_spi *sspi = param;
+
+	complete(&sspi->dma_rx_done);
+}
+
 static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
 				 struct spi_transfer *tfr)
 {
@@ -220,6 +228,8 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
 					 DMA_PREP_INTERRUPT);
 		if (!rxdesc)
 			return -EINVAL;
+		rxdesc->callback_param = sspi;
+		rxdesc->callback = sun6i_spi_dma_rx_cb;
 	}
 
 	txdesc = NULL;
@@ -275,6 +285,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
 		return -EINVAL;
 
 	reinit_completion(&sspi->done);
+	reinit_completion(&sspi->dma_rx_done);
 	sspi->tx_buf = tfr->tx_buf;
 	sspi->rx_buf = tfr->rx_buf;
 	sspi->len = tfr->len;
@@ -459,6 +470,22 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
 	start = jiffies;
 	timeout = wait_for_completion_timeout(&sspi->done,
 					      msecs_to_jiffies(tx_time));
+
+	if (!use_dma) {
+		sun6i_spi_drain_fifo(sspi);
+	} else {
+		if (timeout && rx_len) {
+			/*
+			 * Even though RX on the peripheral side has finished
+			 * RX DMA might still be in flight
+			 */
+			timeout = wait_for_completion_timeout(&sspi->dma_rx_done,
+							      timeout);
+			if (!timeout)
+				dev_warn(&master->dev, "RX DMA timeout\n");
+		}
+	}
+
 	end = jiffies;
 	if (!timeout) {
 		dev_warn(&master->dev,
@@ -486,7 +513,6 @@ static irqreturn_t sun6i_spi_handler(int irq, void *dev_id)
 	/* Transfer complete */
 	if (status & SUN6I_INT_CTL_TC) {
 		sun6i_spi_write(sspi, SUN6I_INT_STA_REG, SUN6I_INT_CTL_TC);
-		sun6i_spi_drain_fifo(sspi);
 		complete(&sspi->done);
 		return IRQ_HANDLED;
 	}
@@ -644,6 +670,7 @@ static int sun6i_spi_probe(struct platform_device *pdev)
 	}
 
 	init_completion(&sspi->done);
+	init_completion(&sspi->dma_rx_done);
 
 	sspi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
 	if (IS_ERR(sspi->rstc)) {
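For reference, the completion/callback pattern introduced by this patch
can be sketched as follows. The struct and function names below are
illustrative placeholders, not the driver's own (which keeps dma_rx_done
in struct sun6i_spi and wires the callback up in sun6i_spi_prepare_dma):

#include <linux/completion.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>

/* Illustrative context; the driver stores the completion in its state. */
struct example_ctx {
        struct completion dma_rx_done;
};

/* DMA callback: runs once the RX DMA has written all data to memory. */
static void example_rx_dma_cb(void *param)
{
        struct example_ctx *ctx = param;

        complete(&ctx->dma_rx_done);
}

/* Hook the callback up before submitting the RX descriptor. */
static void example_arm_rx(struct example_ctx *ctx,
                           struct dma_async_tx_descriptor *rxdesc)
{
        init_completion(&ctx->dma_rx_done);
        rxdesc->callback_param = ctx;
        rxdesc->callback = example_rx_dma_cb;
}

/*
 * After the peripheral-side transfer-complete interrupt, wait for the
 * RX DMA itself before touching the RX buffer; "timeout" is whatever
 * remains of the overall transfer timeout.
 */
static int example_wait_rx(struct example_ctx *ctx, unsigned long timeout)
{
        if (!wait_for_completion_timeout(&ctx->dma_rx_done, timeout))
                return -ETIMEDOUT;

        return 0;
}

Waiting on a separate completion keeps the transfer-complete interrupt
handler free of FIFO accesses in DMA mode and ensures the RX buffer is
fully populated before control returns to the SPI core.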