From patchwork Sat Dec 5 16:57:00 2015
X-Patchwork-Submitter: Anton Bondarenko
X-Patchwork-Id: 7775641
From: Anton Bondarenko
To: broonie@kernel.org, b38343@freescale.com, s.hauer@pengutronix.de
Cc: linux-kernel@vger.kernel.org, linux-spi@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, vladimir_zapolskiy@mentor.com,
 jiada_wang@mentor.com
Subject: [PATCH v5 02/11] spi: imx: reorder HW operations enable order to
 avoid possible RX data loss
Date: Sat, 5 Dec 2015 17:57:00 +0100
Message-Id: <1449334629-4715-3-git-send-email-anton.bondarenko.sama@gmail.com>
X-Mailer: git-send-email 2.6.3
In-Reply-To: <1449334629-4715-1-git-send-email-anton.bondarenko.sama@gmail.com>
References: <1449334629-4715-1-git-send-email-anton.bondarenko.sama@gmail.com>
X-Mailing-List: linux-spi@vger.kernel.org

An RX FIFO overflow may happen if the SPI HW is enabled before the RX DMA
is started, because the CPU may be rescheduled to another task and/or
service an interrupt in between. So the RX DMA is enabled first, to make
sure data is read out of the FIFO as soon as possible. The TX DMA is
enabled next, to start filling the TX FIFO with new data.
And finally the SPI HW is enabled to start the actual data transfer. The
risk rises with heavy system load and a high SPI clock.

Signed-off-by: Anton Bondarenko
---
 drivers/spi/spi-imx.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
index fb3bcc4..17e8f9e 100644
--- a/drivers/spi/spi-imx.c
+++ b/drivers/spi/spi-imx.c
@@ -946,10 +946,18 @@ static int spi_imx_dma_transfer(struct spi_imx_data *spi_imx,
 	if (left)
 		writel(dma | (left << MX51_ECSPI_DMA_RXT_WML_OFFSET),
 				spi_imx->base + MX51_ECSPI_DMA);
 
+	/*
+	 * Keep this order to avoid a potential RX overflow. The overflow may
+	 * happen if we enable the SPI HW before starting the RX DMA, due to
+	 * rescheduling for another task and/or an interrupt.
+	 * So RX DMA is enabled first to make sure data is read out of the FIFO
+	 * ASAP. TX DMA is enabled next to start filling the TX FIFO with new
+	 * data. And finally the SPI HW is enabled to start the actual transfer.
+	 */
+	dma_async_issue_pending(master->dma_rx);
+	dma_async_issue_pending(master->dma_tx);
 	spi_imx->devtype_data->trigger(spi_imx);
-	dma_async_issue_pending(master->dma_tx);
-	dma_async_issue_pending(master->dma_rx);
 	/* Wait SDMA to finish the data transfer.*/
 	timeout = wait_for_completion_timeout(&spi_imx->dma_tx_completion,
 						IMX_DMA_TIMEOUT);
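
For reference, a condensed sketch of the sequence in spi_imx_dma_transfer()
once this hunk is applied (DMA descriptor setup, error handling and cleanup
are omitted; only the calls touched by this patch are shown):

	/* Arm the RX channel first so the RX FIFO is drained ASAP. */
	dma_async_issue_pending(master->dma_rx);
	/* Then start refilling the TX FIFO. */
	dma_async_issue_pending(master->dma_tx);
	/* Enable the SPI HW last, once both DMA channels are already armed. */
	spi_imx->devtype_data->trigger(spi_imx);

	/* Wait for SDMA to finish the data transfer. */
	timeout = wait_for_completion_timeout(&spi_imx->dma_tx_completion,
					      IMX_DMA_TIMEOUT);

In short, the consumer of the RX FIFO is started before the producer, so a
preemption right after the trigger can no longer leave the FIFO unserviced.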