From patchwork Mon Aug 22 18:53:14 2022
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 12951139
From: Serge Semin
To: Gustavo Pimentel, Vinod Koul, Rob Herring, Bjorn Helgaas,
 Lorenzo Pieralisi, Jingoo Han, Frank Li, Manivannan Sadhasivam
CC: Serge Semin, Alexey Malahov, Pavel Parkhomenko,
 Krzysztof Wilczyński, Gustavo Pimentel
Subject: [PATCH RESEND v5 06/24] dmaengine: dw-edma: Fix invalid interleaved xfers semantics
Date: Mon, 22 Aug 2022 21:53:14 +0300
Message-ID: <20220822185332.26149-7-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru>
References: <20220822185332.26149-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: dmaengine@vger.kernel.org

The interleaved DMA transfer support added in commit 85e7518f42c8
("dmaengine: dw-edma: Add device_prep_interleave_dma() support") seems to
contradict the interleaved transfer semantics defined by the DMA engine
subsystem. The following conditional statements:

	if (!xfer->xfer.il->numf)
		return NULL;
	if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
		return NULL;

basically mean that numf must be non-zero and frame_size must always be
zero, otherwise the transfer won't be executed. But later the transfer
execution method takes the size of each frame from the
dma_interleaved_template.sgl[] array. According to [1] that array is
supposed to contain dma_interleaved_template.frame_size entries, which,
as noted above, the code expects to be zero. So judging by the
dw_edma_device_transfer() implementation, the method actually implies the
dma_interleaved_template.sgl[] array having dma_interleaved_template.numf
entries, which is wrong.
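For reference, here is a minimal client-side sketch of what [1] describes
(not part of this patch; the helper name, channel, DMA addresses and chunk
sizes are made up purely for illustration): a template of numf frames,
each laid out by frame_size entries of sgl[]:

#include <linux/dmaengine.h>
#include <linux/slab.h>

/* Hypothetical example only: 4 frames, 2 chunks per frame. */
static struct dma_async_tx_descriptor *
prep_example_ilv(struct dma_chan *chan, dma_addr_t src, dma_addr_t dst)
{
	struct dma_async_tx_descriptor *txd;
	struct dma_interleaved_template *xt;

	xt = kzalloc(struct_size(xt, sgl, 2), GFP_KERNEL);
	if (!xt)
		return NULL;

	xt->src_start = src;
	xt->dst_start = dst;
	xt->dir = DMA_MEM_TO_DEV;
	xt->src_inc = true;
	xt->dst_inc = true;
	xt->numf = 4;		/* number of frames */
	xt->frame_size = 2;	/* chunks per frame == sgl[] length */
	xt->sgl[0].size = 64;	/* chunk layout repeated for every frame */
	xt->sgl[1].size = 128;

	txd = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);
	kfree(xt);		/* dw-edma builds its burst list at prep time */
	return txd;
}

With such a template the checks quoted above would have rejected the
transfer outright, since frame_size is non-zero.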
Since the dw_edma_device_transfer() method doesn't permit
dma_interleaved_template.frame_size to be non-zero, actual multi-chunk
interleaved transfers turn out to be unsupported even though the code
implies supporting them. Let's fix that by adding fully functional
support for interleaved DMA transfers. First of all,
dma_interleaved_template.frame_size is supposed to be greater than or
equal to one, so each frame consists of at least one linear chunk.
Secondly, we can walk through all the chunks and frames just by
initializing the number of eDMA burst transactions as the product of
dma_interleaved_template.numf and dma_interleaved_template.frame_size
and using the frame_size-modulo of the iteration step as an index into
the dma_interleaved_template.sgl[] array. The rest of the
dw_edma_device_transfer() method can be left unchanged.

[1] include/linux/dmaengine.h: doc struct dma_interleaved_template

Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
Tested-by: Manivannan Sadhasivam
Acked-by: Vinod Koul
---
 drivers/dma/dw-edma/dw-edma-core.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 225eab58acb7..ef49deb5a7f3 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -333,6 +333,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 	struct dw_edma_chunk *chunk;
 	struct dw_edma_burst *burst;
 	struct dw_edma_desc *desc;
+	size_t fsz = 0;
 	u32 cnt = 0;
 	int i;
 
@@ -382,9 +383,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		if (xfer->xfer.sg.len < 1)
 			return NULL;
 	} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
-		if (!xfer->xfer.il->numf)
-			return NULL;
-		if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
+		if (!xfer->xfer.il->numf || xfer->xfer.il->frame_size < 1)
 			return NULL;
 		if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc)
 			return NULL;
@@ -414,10 +413,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		cnt = xfer->xfer.sg.len;
 		sg = xfer->xfer.sg.sgl;
 	} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
-		if (xfer->xfer.il->numf > 0)
-			cnt = xfer->xfer.il->numf;
-		else
-			cnt = xfer->xfer.il->frame_size;
+		cnt = xfer->xfer.il->numf * xfer->xfer.il->frame_size;
+		fsz = xfer->xfer.il->frame_size;
 	}
 
 	for (i = 0; i < cnt; i++) {
@@ -439,7 +436,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		else if (xfer->type == EDMA_XFER_SCATTER_GATHER)
 			burst->sz = sg_dma_len(sg);
 		else if (xfer->type == EDMA_XFER_INTERLEAVED)
-			burst->sz = xfer->xfer.il->sgl[i].size;
+			burst->sz = xfer->xfer.il->sgl[i % fsz].size;
 
 		chunk->ll_region.sz += burst->sz;
 		desc->alloc_sz += burst->sz;
@@ -482,10 +479,9 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 
 		if (xfer->type == EDMA_XFER_SCATTER_GATHER) {
 			sg = sg_next(sg);
-		} else if (xfer->type == EDMA_XFER_INTERLEAVED &&
-			   xfer->xfer.il->frame_size > 0) {
+		} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
 			struct dma_interleaved_template *il = xfer->xfer.il;
-			struct data_chunk *dc = &il->sgl[i];
+			struct data_chunk *dc = &il->sgl[i % fsz];
 
 			src_addr += burst->sz;
 			if (il->src_sgl)
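
As a side note (not part of the patch), the new frame/chunk walk-through
can be demonstrated with a trivial standalone program using the same
hypothetical numbers as the sketch above: numf * frame_size bursts are
generated and each burst size is picked from sgl[] by the
frame_size-modulo of the step:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	/* Hypothetical template: numf = 4 frames, frame_size = 2 chunks each. */
	const size_t numf = 4, frame_size = 2;
	const size_t sgl_size[] = { 64, 128 };	/* stands in for sgl[i].size */
	size_t cnt = numf * frame_size;		/* total eDMA bursts */
	size_t i;

	for (i = 0; i < cnt; i++)
		printf("burst %zu: frame %zu, chunk %zu, size %zu\n",
		       i, i / frame_size, i % frame_size,
		       sgl_size[i % frame_size]);

	return 0;
}

It prints eight bursts alternating between 64 and 128 bytes, one
frame_size-long group per frame, whereas the old code rejected such a
transfer outright because frame_size had to be zero.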