From patchwork Mon Sep 23 13:58:07 2019
X-Patchwork-Submitter: Philipp Puschmann
X-Patchwork-Id: 11156981
From: Philipp Puschmann
To: linux-kernel@vger.kernel.org
Cc: jlu@pengutronix.de, yibin.gong@nxp.com, fugang.duan@nxp.com,
    l.stach@pengutronix.de, dan.j.williams@intel.com, vkoul@kernel.org,
    shawnguo@kernel.org, s.hauer@pengutronix.de, kernel@pengutronix.de,
    festevam@gmail.com, linux-imx@nxp.com, dmaengine@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Philipp Puschmann
Subject: [PATCH v5 2/3] dmaengine: imx-sdma: fix dma freezes
Date: Mon, 23 Sep 2019 15:58:07 +0200
Message-Id: <20190923135808.815-3-philipp.puschmann@emlix.com>
In-Reply-To: <20190923135808.815-1-philipp.puschmann@emlix.com>
References: <20190923135808.815-1-philipp.puschmann@emlix.com>

For years and across many kernel versions there have been reports that
the UART RX SDMA channel stops working at some point. The usual
workaround was to disable DMA for RX. This commit fixes the underlying
problem.

Cyclic DMA transfers are used by the UART and other drivers, and they
can fail in at least two situations where the engine runs out of usable
descriptors:

- Interrupts are disabled for too long and all buffers fill up with
  data, especially in setups where many small DMA transfers each use
  only a tiny part of a single buffer.
- DMA errors (such as those caused by a baud rate mismatch with
  imx-uart) use up all descriptors before we can react.

In either case the SDMA stops the channel and no further transfers are
done until the channel is disabled and re-enabled. We can check whether
the channel has been stopped by the hardware and re-enable it then. To
distinguish this from the case where the channel was deliberately
stopped by the upper-level driver, a new flag IMX_DMA_ACTIVE is
introduced.

As sdmac->desc is constant within the loop, desc can be moved out of
the loop. (An illustrative client-side sketch follows after the patch.)
Fixes: 1ec1e82f2510 ("dmaengine: Add Freescale i.MX SDMA support")
Signed-off-by: Philipp Puschmann
Signed-off-by: Jan Luebbe
Reviewed-by: Lucas Stach
---
Changelog v5:
- join with patch version from Jan Luebbe
- adapt comments and patch descriptions

Changelog v4:
- fix the Fixes tag

Changelog v3:
- use correct dma_wmb() instead of dma_wb()
- add Fixes tag

Changelog v2:
- clarify comment and commit description

 drivers/dma/imx-sdma.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index b42281604e54..0b1d6a62423d 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -383,6 +383,7 @@ struct sdma_channel {
 };
 
 #define IMX_DMA_SG_LOOP		BIT(0)
+#define IMX_DMA_ACTIVE		BIT(1)
 
 #define MAX_DMA_CHANNELS 32
 #define MXC_SDMA_DEFAULT_PRIORITY 1
@@ -658,6 +659,9 @@ static int sdma_config_ownership(struct sdma_channel *sdmac,
 
 static void sdma_enable_channel(struct sdma_engine *sdma, int channel)
 {
+	struct sdma_channel *sdmac = &sdma->channel[channel];
+
+	sdmac->flags |= IMX_DMA_ACTIVE;
 	writel(BIT(channel), sdma->regs + SDMA_H_START);
 }
 
@@ -774,16 +778,17 @@ static void sdma_start_desc(struct sdma_channel *sdmac)
 
 static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 {
+	struct sdma_engine *sdma = sdmac->sdma;
 	struct sdma_buffer_descriptor *bd;
+	struct sdma_desc *desc = sdmac->desc;
 	int error = 0;
-	enum dma_status	old_status = sdmac->status;
+	enum dma_status old_status = sdmac->status;
 
 	/*
 	 * loop mode. Iterate over descriptors, re-setup them and
 	 * call callback function.
 	 */
-	while (sdmac->desc) {
-		struct sdma_desc *desc = sdmac->desc;
+	while (desc) {
 
 		bd = &desc->bd[desc->buf_tail];
 
@@ -822,6 +827,18 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 		if (error)
 			sdmac->status = old_status;
 	}
+
+	/* In some situations it may happen that the SDMA does not find any
+	 * usable descriptor in the ring to put data into. The channel is
+	 * stopped then, and after some buffers have been freed we have to
+	 * restart it manually.
+	 */
+	if ((sdmac->flags & IMX_DMA_ACTIVE) &&
+	    !(readl_relaxed(sdma->regs + SDMA_H_STATSTOP) & BIT(sdmac->channel))) {
+		dev_err_ratelimited(sdma->dev, "SDMA channel %d: cyclic transfer disabled by HW, reenabling\n",
+				    sdmac->channel);
+		writel(BIT(sdmac->channel), sdma->regs + SDMA_H_START);
+	}
 }
 
 static void mxc_sdma_handle_channel_normal(struct sdma_channel *data)
@@ -1051,7 +1068,8 @@ static int sdma_disable_channel(struct dma_chan *chan)
 	struct sdma_engine *sdma = sdmac->sdma;
 	int channel = sdmac->channel;
 
-	writel_relaxed(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
+	sdmac->flags &= ~IMX_DMA_ACTIVE;
+	writel(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
 	sdmac->status = DMA_ERROR;
 
 	return 0;
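
For context only, not part of the patch: below is a rough sketch of how a
dmaengine client such as a UART driver sets up the cyclic RX transfer that
runs through sdma_update_channel_loop() and can hit the descriptor
exhaustion described above. The device pointer, the channel name "rx", the
FIFO address and the buffer/period sizes are placeholder assumptions for
illustration, not values taken from this patch.

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

/* Illustrative sketch: set up a cyclic DEV_TO_MEM (RX) transfer. */
static int example_setup_cyclic_rx(struct device *dev, phys_addr_t fifo_addr)
{
	struct dma_slave_config cfg = { };
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;
	dma_addr_t buf;
	void *virt;
	/* e.g. a 4 KiB ring split into 8 periods of 512 bytes */
	size_t buf_len = 4096, period_len = 512;

	chan = dma_request_chan(dev, "rx");	/* placeholder channel name */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	cfg.src_addr = fifo_addr;		/* peripheral RX FIFO */
	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
	cfg.src_maxburst = 8;
	dmaengine_slave_config(chan, &cfg);

	virt = dma_alloc_coherent(dev, buf_len, &buf, GFP_KERNEL);
	if (!virt) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	/*
	 * This descriptor ring is what can run dry when interrupts are held
	 * off for too long or DMA errors pile up; the change above restarts
	 * the channel once descriptors become usable again.
	 */
	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_free_coherent(dev, buf_len, virt, buf);
		dma_release_channel(chan);
		return -EINVAL;
	}

	/* A real driver would set desc->callback to drain completed periods. */
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;
}

When the client later terminates the transfer (e.g. via
dmaengine_terminate_sync()), the terminate path ends up in
sdma_disable_channel(), which now clears IMX_DMA_ACTIVE, so the restart
logic above does not fight a deliberate stop.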