From patchwork Mon Jul 31 10:14:41 2023
X-Patchwork-Submitter: Miquel Raynal <miquel.raynal@bootlin.com>
X-Patchwork-Id: 13334242
From: Miquel Raynal <miquel.raynal@bootlin.com>
To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul
Cc: Michal Simek, Max Zhen, Sonal Santan, dmaengine@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Thomas Petazzoni,
    Miquel Raynal
Subject: [PATCH 3/4] dmaengine: xilinx: xdma: Prepare the introduction of cyclic transfers
Date: Mon, 31 Jul 2023 12:14:41 +0200
Message-Id: <20230731101442.792514-4-miquel.raynal@bootlin.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230731101442.792514-1-miquel.raynal@bootlin.com>
References: <20230731101442.792514-1-miquel.raynal@bootlin.com>

In order to reduce and clarify the diff when introducing cyclic
transfers support, let's first prepare the driver a bit. There is no
functional change.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
 drivers/dma/xilinx/xdma.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index 5cdb19bd80a7..40983d9355c4 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -137,10 +137,10 @@ static inline void *xdma_blk_last_desc(struct xdma_desc_block *block)
 }
 
 /**
- * xdma_link_desc_blocks - Link descriptor blocks for DMA transfer
+ * xdma_link_sg_desc_blocks - Link SG descriptor blocks for DMA transfer
  * @sw_desc: Tx descriptor pointer
  */
-static void xdma_link_desc_blocks(struct xdma_desc *sw_desc)
+static void xdma_link_sg_desc_blocks(struct xdma_desc *sw_desc)
 {
 	struct xdma_desc_block *block;
 	u32 last_blk_desc, desc_control;
@@ -239,6 +239,7 @@ xdma_alloc_desc(struct xdma_chan *chan, u32 desc_num)
 	struct xdma_hw_desc *desc;
 	dma_addr_t dma_addr;
 	u32 dblk_num;
+	u32 control;
 	void *addr;
 	int i, j;
 
@@ -254,6 +255,8 @@ xdma_alloc_desc(struct xdma_chan *chan, u32 desc_num)
 	if (!sw_desc->desc_blocks)
 		goto failed;
 
+	control = XDMA_DESC_CONTROL(1, 0);
+
 	sw_desc->dblk_num = dblk_num;
 	for (i = 0; i < sw_desc->dblk_num; i++) {
 		addr = dma_pool_alloc(chan->desc_pool, GFP_NOWAIT, &dma_addr);
@@ -263,10 +266,10 @@ xdma_alloc_desc(struct xdma_chan *chan, u32 desc_num)
 		sw_desc->desc_blocks[i].virt_addr = addr;
 		sw_desc->desc_blocks[i].dma_addr = dma_addr;
 		for (j = 0, desc = addr; j < XDMA_DESC_ADJACENT; j++)
-			desc[j].control = cpu_to_le32(XDMA_DESC_CONTROL(1, 0));
+			desc[j].control = cpu_to_le32(control);
 	}
 
-	xdma_link_desc_blocks(sw_desc);
+	xdma_link_sg_desc_blocks(sw_desc);
 
 	return sw_desc;
 
@@ -577,6 +580,12 @@ static int xdma_alloc_chan_resources(struct dma_chan *chan)
 	return 0;
 }
 
+static enum dma_status xdma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+				      struct dma_tx_state *state)
+{
+	return dma_cookie_status(chan, cookie, state);
+}
+
 /**
  * xdma_channel_isr - XDMA channel interrupt handler
  * @irq: IRQ number
@@ -925,7 +934,7 @@ static int xdma_probe(struct platform_device *pdev)
 	xdev->dma_dev.dev = &pdev->dev;
 	xdev->dma_dev.device_free_chan_resources = xdma_free_chan_resources;
 	xdev->dma_dev.device_alloc_chan_resources = xdma_alloc_chan_resources;
-	xdev->dma_dev.device_tx_status = dma_cookie_status;
+	xdev->dma_dev.device_tx_status = xdma_tx_status;
 	xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
 	xdev->dma_dev.device_config = xdma_device_config;
 	xdev->dma_dev.device_issue_pending = xdma_issue_pending;
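
A note on xdma_tx_status(): as introduced here it is a plain pass-through
to dma_cookie_status(), so registering it instead of the core helper
changes nothing yet. The wrapper only exists so the follow-up cyclic
patch can hook residue reporting into device_tx_status without touching
the probe-time wiring again. A rough, hypothetical sketch of the shape
that extension could take is below; the cyclic, periods, period_size and
completed_desc_num descriptor fields are assumptions for illustration,
not part of this patch:

/*
 * Illustrative sketch only, not this series' actual code. Assumes the
 * cyclic follow-up adds 'cyclic', 'periods', 'period_size' and
 * 'completed_desc_num' fields to struct xdma_desc, and reuses the
 * driver's existing to_xdma_chan()/to_xdma_desc() helpers.
 */
static enum dma_status xdma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
				      struct dma_tx_state *state)
{
	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
	struct xdma_desc *desc = NULL;
	struct virt_dma_desc *vd;
	enum dma_status ret;
	unsigned long flags;
	unsigned int period_idx;
	u32 residue = 0;

	/* Cookie bookkeeping stays in the core, exactly as in this patch. */
	ret = dma_cookie_status(chan, cookie, state);
	if (ret == DMA_COMPLETE)
		return ret;

	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);

	vd = vchan_find_desc(&xdma_chan->vchan, cookie);
	if (vd)
		desc = to_xdma_desc(vd);
	if (desc && desc->cyclic) {
		/* Bytes left until the end of the current buffer iteration. */
		period_idx = desc->completed_desc_num % desc->periods;
		residue = (desc->periods - period_idx) * desc->period_size;
	}

	spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);

	dma_set_residue(state, residue);

	return ret;
}

vchan_find_desc(), dma_set_residue() and the vchan lock are the standard
virt-dma/dmaengine pieces such a wrapper would lean on; the exact
bookkeeping is up to the cyclic patch itself.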