From patchwork Sat Sep 29 07:46:35 2018
X-Patchwork-Submitter: Manivannan Sadhasivam
X-Patchwork-Id: 10620653
From: Manivannan Sadhasivam
To: vkoul@kernel.org, dan.j.williams@intel.com, afaerber@suse.de, robh+dt@kernel.org, gregkh@linuxfoundation.org, jslaby@suse.com
Cc: linux-serial@vger.kernel.org, dmaengine@vger.kernel.org, liuwei@actions-semi.com, 96boards@ucrobotics.com, devicetree@vger.kernel.org, daniel.thompson@linaro.org, amit.kucheria@linaro.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, hzhang@ucrobotics.com, bdong@ucrobotics.com, manivannanece23@gmail.com, thomas.liau@actions-semi.com, jeff.chen@actions-semi.com, pn@denx.de, edgar.righi@lsitec.org.br, Manivannan Sadhasivam
Subject: [PATCH v2 1/3] arm64: dts: actions: s900: Enable Tx DMA for UART5
Date: Sat, 29 Sep 2018 13:16:35 +0530
Message-Id: <20180929074637.9766-2-manivannan.sadhasivam@linaro.org>
In-Reply-To: <20180929074637.9766-1-manivannan.sadhasivam@linaro.org>
References: <20180929074637.9766-1-manivannan.sadhasivam@linaro.org>
X-Mailing-List: dmaengine@vger.kernel.org

Enable Tx DMA for UART5 in Actions Semi S900 SoC.

Signed-off-by: Manivannan Sadhasivam
---
 arch/arm64/boot/dts/actions/s900.dtsi | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/boot/dts/actions/s900.dtsi b/arch/arm64/boot/dts/actions/s900.dtsi
index eceba914762c..39af1236f611 100644
--- a/arch/arm64/boot/dts/actions/s900.dtsi
+++ b/arch/arm64/boot/dts/actions/s900.dtsi
@@ -156,6 +156,8 @@
 			compatible = "actions,s900-uart", "actions,owl-uart";
 			reg = <0x0 0xe012a000 0x0 0x2000>;
 			interrupts = ;
+			dma-names = "tx";
+			dmas = <&dma 26>;
 			status = "disabled";
 		};
From patchwork Sat Sep 29 07:46:36 2018
X-Patchwork-Submitter: Manivannan Sadhasivam
X-Patchwork-Id: 10620655
From: Manivannan Sadhasivam
To: vkoul@kernel.org, dan.j.williams@intel.com, afaerber@suse.de, robh+dt@kernel.org, gregkh@linuxfoundation.org, jslaby@suse.com
Cc: linux-serial@vger.kernel.org, dmaengine@vger.kernel.org, liuwei@actions-semi.com, 96boards@ucrobotics.com, devicetree@vger.kernel.org, daniel.thompson@linaro.org, amit.kucheria@linaro.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, hzhang@ucrobotics.com, bdong@ucrobotics.com, manivannanece23@gmail.com, thomas.liau@actions-semi.com, jeff.chen@actions-semi.com, pn@denx.de, edgar.righi@lsitec.org.br, Manivannan Sadhasivam
Subject: [PATCH v2 2/3] dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC
Date: Sat, 29 Sep 2018 13:16:36 +0530
Message-Id: <20180929074637.9766-3-manivannan.sadhasivam@linaro.org>
In-Reply-To: <20180929074637.9766-1-manivannan.sadhasivam@linaro.org>
References: <20180929074637.9766-1-manivannan.sadhasivam@linaro.org>
X-Mailing-List: dmaengine@vger.kernel.org

Add Slave and Cyclic mode support for the Actions Semi Owl S900 SoC. Slave mode supports a bus width of 4 bytes, common to all peripherals, and a 1-byte width specific to UART. Cyclic mode supports only block mode transfer.
Signed-off-by: Manivannan Sadhasivam
---
 drivers/dma/owl-dma.c | 279 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 272 insertions(+), 7 deletions(-)

diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
index 7812a6338acd..1d26db4c9229 100644
--- a/drivers/dma/owl-dma.c
+++ b/drivers/dma/owl-dma.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "virt-dma.h"
@@ -165,6 +166,7 @@ struct owl_dma_lli {
 struct owl_dma_txd {
 	struct virt_dma_desc vd;
 	struct list_head lli_list;
+	bool cyclic;
 };

 /**
@@ -191,6 +193,8 @@ struct owl_dma_vchan {
 	struct virt_dma_chan vc;
 	struct owl_dma_pchan *pchan;
 	struct owl_dma_txd *txd;
+	struct dma_slave_config cfg;
+	u8 drq;
 };

 /**
@@ -336,9 +340,11 @@ static struct owl_dma_lli *owl_dma_alloc_lli(struct owl_dma *od)

 static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
 					   struct owl_dma_lli *prev,
-					   struct owl_dma_lli *next)
+					   struct owl_dma_lli *next,
+					   bool is_cyclic)
 {
-	list_add_tail(&next->node, &txd->lli_list);
+	if (!is_cyclic)
+		list_add_tail(&next->node, &txd->lli_list);

 	if (prev) {
 		prev->hw.next_lli = next->phys;
@@ -351,7 +357,9 @@ static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
 static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 				  struct owl_dma_lli *lli,
 				  dma_addr_t src, dma_addr_t dst,
-				  u32 len, enum dma_transfer_direction dir)
+				  u32 len, enum dma_transfer_direction dir,
+				  struct dma_slave_config *sconfig,
+				  bool is_cyclic)
 {
 	struct owl_dma_lli_hw *hw = &lli->hw;
 	u32 mode;
@@ -364,6 +372,32 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 		       OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
 		       OWL_DMA_MODE_DAM_INC;

 		break;
+	case DMA_MEM_TO_DEV:
+		mode |= OWL_DMA_MODE_TS(vchan->drq)
+			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
+			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
+
+		/*
+		 * Hardware only supports 32bit and 8bit buswidth. Since the
+		 * default is 32bit, select 8bit only when requested.
+		 */
+		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			mode |= OWL_DMA_MODE_NDDBW_8BIT;
+
+		break;
+	case DMA_DEV_TO_MEM:
+		mode |= OWL_DMA_MODE_TS(vchan->drq)
+			| OWL_DMA_MODE_ST_DEV | OWL_DMA_MODE_DT_DCU
+			| OWL_DMA_MODE_SAM_CONST | OWL_DMA_MODE_DAM_INC;
+
+		/*
+		 * Hardware only supports 32bit and 8bit buswidth. Since the
+		 * default is 32bit, select 8bit only when requested.
+		 */
+		if (sconfig->src_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			mode |= OWL_DMA_MODE_NDDBW_8BIT;
+
+		break;
 	default:
 		return -EINVAL;
@@ -381,7 +415,10 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 			 OWL_DMA_LLC_SAV_LOAD_NEXT |
 			 OWL_DMA_LLC_DAV_LOAD_NEXT);

-	hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);
+	if (is_cyclic)
+		hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_BLOCK);
+	else
+		hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);

 	return 0;
 }
@@ -443,6 +480,16 @@ static void owl_dma_terminate_pchan(struct owl_dma *od,
 	spin_unlock_irqrestore(&od->lock, flags);
 }

+static void owl_dma_pause_pchan(struct owl_dma_pchan *pchan)
+{
+	pchan_writel(pchan, 1, OWL_DMAX_PAUSE);
+}
+
+static void owl_dma_resume_pchan(struct owl_dma_pchan *pchan)
+{
+	pchan_writel(pchan, 0, OWL_DMAX_PAUSE);
+}
+
 static int owl_dma_start_next_txd(struct owl_dma_vchan *vchan)
 {
 	struct owl_dma *od = to_owl_dma(vchan->vc.chan.device);
@@ -464,7 +511,10 @@ static int owl_dma_start_next_txd(struct owl_dma_vchan *vchan)
 	lli = list_first_entry(&txd->lli_list,
 			       struct owl_dma_lli, node);

-	int_ctl = OWL_DMA_INTCTL_SUPER_BLOCK;
+	if (txd->cyclic)
+		int_ctl = OWL_DMA_INTCTL_BLOCK;
+	else
+		int_ctl = OWL_DMA_INTCTL_SUPER_BLOCK;

 	pchan_writel(pchan, OWL_DMAX_MODE, OWL_DMA_MODE_LME);
 	pchan_writel(pchan, OWL_DMAX_LINKLIST_CTL,
@@ -627,6 +677,54 @@ static int owl_dma_terminate_all(struct dma_chan *chan)
 	return 0;
 }

+static int owl_dma_config(struct dma_chan *chan,
+			  struct dma_slave_config *config)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+
+	/* Reject definitely invalid configurations */
+	if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
+	    config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
+		return -EINVAL;
+
+	memcpy(&vchan->cfg, config, sizeof(struct dma_slave_config));
+
+	return 0;
+}
+
+static int owl_dma_pause(struct dma_chan *chan)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	owl_dma_pause_pchan(vchan->pchan);
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
+static int owl_dma_resume(struct dma_chan *chan)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	unsigned long flags;
+
+	if (!vchan->pchan && !vchan->txd)
+		return 0;
+
+	dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	owl_dma_resume_pchan(vchan->pchan);
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
 static u32 owl_dma_getbytes_chan(struct owl_dma_vchan *vchan)
 {
 	struct owl_dma_pchan *pchan;
@@ -754,13 +852,14 @@ static struct dma_async_tx_descriptor
 		bytes = min_t(size_t, (len - offset), OWL_DMA_FRAME_MAX_LENGTH);

 		ret = owl_dma_cfg_lli(vchan, lli, src + offset, dst + offset,
-				      bytes, DMA_MEM_TO_MEM);
+				      bytes, DMA_MEM_TO_MEM,
+				      &vchan->cfg, txd->cyclic);
 		if (ret) {
 			dev_warn(chan2dev(chan), "failed to config lli\n");
 			goto err_txd_free;
 		}

-		prev = owl_dma_add_lli(txd, prev, lli);
+		prev = owl_dma_add_lli(txd, prev, lli, false);
 	}

 	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
@@ -770,6 +869,133 @@ static struct dma_async_tx_descriptor
 	return NULL;
 }

+static struct dma_async_tx_descriptor
+		*owl_dma_prep_slave_sg(struct dma_chan *chan,
+				       struct scatterlist *sgl,
+				       unsigned int sg_len,
+				       enum dma_transfer_direction dir,
+				       unsigned long flags, void *context)
+{
+	struct owl_dma *od = to_owl_dma(chan->device);
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct owl_dma_txd *txd;
+	struct owl_dma_lli *lli, *prev = NULL;
+	struct scatterlist *sg;
+	dma_addr_t addr, src = 0, dst = 0;
+	size_t len;
+	int ret, i;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	INIT_LIST_HEAD(&txd->lli_list);
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		addr = sg_dma_address(sg);
+		len = sg_dma_len(sg);
+
+		if (len > OWL_DMA_FRAME_MAX_LENGTH) {
+			dev_err(od->dma.dev,
+				"frame length exceeds max supported length");
+			goto err_txd_free;
+		}
+
+		lli = owl_dma_alloc_lli(od);
+		if (!lli) {
+			dev_err(chan2dev(chan), "failed to allocate lli");
+			goto err_txd_free;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = addr;
+			dst = sconfig->dst_addr;
+		} else {
+			src = sconfig->src_addr;
+			dst = addr;
+		}
+
+		ret = owl_dma_cfg_lli(vchan, lli, src, dst, len, dir, sconfig,
+				      txd->cyclic);
+		if (ret) {
+			dev_warn(chan2dev(chan), "failed to config lli");
+			goto err_txd_free;
+		}
+
+		prev = owl_dma_add_lli(txd, prev, lli, false);
+	}
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_txd_free:
+	owl_dma_free_txd(od, txd);
+
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor
+		*owl_prep_dma_cyclic(struct dma_chan *chan,
+				     dma_addr_t buf_addr, size_t buf_len,
+				     size_t period_len,
+				     enum dma_transfer_direction dir,
+				     unsigned long flags)
+{
+	struct owl_dma *od = to_owl_dma(chan->device);
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct owl_dma_txd *txd;
+	struct owl_dma_lli *lli, *prev = NULL, *first = NULL;
+	dma_addr_t src = 0, dst = 0;
+	unsigned int periods = buf_len / period_len;
+	int ret, i;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	INIT_LIST_HEAD(&txd->lli_list);
+	txd->cyclic = true;
+
+	for (i = 0; i < periods; i++) {
+		lli = owl_dma_alloc_lli(od);
+		if (!lli) {
+			dev_warn(chan2dev(chan), "failed to allocate lli");
+			goto err_txd_free;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = buf_addr + (period_len * i);
+			dst = sconfig->dst_addr;
+		} else if (dir == DMA_DEV_TO_MEM) {
+			src = sconfig->src_addr;
+			dst = buf_addr + (period_len * i);
+		}
+
+		ret = owl_dma_cfg_lli(vchan, lli, src, dst, period_len,
+				      dir, sconfig, txd->cyclic);
+		if (ret) {
+			dev_warn(chan2dev(chan), "failed to config lli");
+			goto err_txd_free;
+		}
+
+		if (!first)
+			first = lli;
+
+		prev = owl_dma_add_lli(txd, prev, lli, false);
+	}
+
+	/* close the cyclic list */
+	owl_dma_add_lli(txd, prev, first, true);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_txd_free:
+	owl_dma_free_txd(od, txd);
+
+	return NULL;
+}
+
 static void owl_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
@@ -790,6 +1016,27 @@ static inline void owl_dma_free(struct owl_dma *od)
 	}
 }

+static struct dma_chan *owl_dma_of_xlate(struct of_phandle_args *dma_spec,
+					 struct of_dma *ofdma)
+{
+	struct owl_dma *od = ofdma->of_dma_data;
+	struct owl_dma_vchan *vchan;
+	struct dma_chan *chan;
+	u8 drq = dma_spec->args[0];
+
+	if (drq > od->nr_vchans)
+		return NULL;
+
+	chan = dma_get_any_slave_channel(&od->dma);
+	if (!chan)
+		return NULL;
+
+	vchan = to_owl_vchan(chan);
+	vchan->drq = drq;
+
+	return chan;
+}
+
 static int owl_dma_probe(struct platform_device *pdev)
 {
 	struct device_node *np = pdev->dev.of_node;
@@ -833,12 +1080,19 @@ static int owl_dma_probe(struct platform_device *pdev)
 	spin_lock_init(&od->lock);

 	dma_cap_set(DMA_MEMCPY, od->dma.cap_mask);
+	dma_cap_set(DMA_SLAVE, od->dma.cap_mask);
+	dma_cap_set(DMA_CYCLIC, od->dma.cap_mask);

 	od->dma.dev = &pdev->dev;
 	od->dma.device_free_chan_resources = owl_dma_free_chan_resources;
 	od->dma.device_tx_status = owl_dma_tx_status;
 	od->dma.device_issue_pending = owl_dma_issue_pending;
 	od->dma.device_prep_dma_memcpy = owl_dma_prep_memcpy;
+	od->dma.device_prep_slave_sg = owl_dma_prep_slave_sg;
+	od->dma.device_prep_dma_cyclic = owl_prep_dma_cyclic;
+	od->dma.device_config = owl_dma_config;
+	od->dma.device_pause = owl_dma_pause;
+	od->dma.device_resume = owl_dma_resume;
 	od->dma.device_terminate_all = owl_dma_terminate_all;

 	od->dma.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	od->dma.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
@@ -910,8 +1164,18 @@ static int owl_dma_probe(struct platform_device *pdev)
 		goto err_pool_free;
 	}

+	/* Device-tree DMA controller registration */
+	ret = of_dma_controller_register(pdev->dev.of_node,
+					 owl_dma_of_xlate, od);
+	if (ret) {
+		dev_err(&pdev->dev, "of_dma_controller_register failed\n");
+		goto err_dma_unregister;
+	}
+
 	return 0;

+err_dma_unregister:
+	dma_async_device_unregister(&od->dma);
 err_pool_free:
 	clk_disable_unprepare(od->clk);
 	dma_pool_destroy(od->lli_pool);
@@ -923,6 +1187,7 @@ static int owl_dma_remove(struct platform_device *pdev)
 {
 	struct owl_dma *od = platform_get_drvdata(pdev);

+	of_dma_controller_free(pdev->dev.of_node);
 	dma_async_device_unregister(&od->dma);

 	/* Mask all interrupts for this execution environment */
From patchwork Sat Sep 29 07:46:37 2018
X-Patchwork-Submitter: Manivannan Sadhasivam
X-Patchwork-Id: 10620659
From: Manivannan Sadhasivam
To: vkoul@kernel.org, dan.j.williams@intel.com, afaerber@suse.de, robh+dt@kernel.org, gregkh@linuxfoundation.org, jslaby@suse.com
Cc: linux-serial@vger.kernel.org, dmaengine@vger.kernel.org, liuwei@actions-semi.com, 96boards@ucrobotics.com, devicetree@vger.kernel.org, daniel.thompson@linaro.org, amit.kucheria@linaro.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, hzhang@ucrobotics.com, bdong@ucrobotics.com, manivannanece23@gmail.com, thomas.liau@actions-semi.com, jeff.chen@actions-semi.com, pn@denx.de, edgar.righi@lsitec.org.br, Manivannan Sadhasivam
Subject: [PATCH v2 3/3] tty: serial: Add Tx DMA support for UART in Actions Semi Owl SoCs
Date: Sat, 29 Sep 2018 13:16:37 +0530
Message-Id: <20180929074637.9766-4-manivannan.sadhasivam@linaro.org>
In-Reply-To: <20180929074637.9766-1-manivannan.sadhasivam@linaro.org>
References: <20180929074637.9766-1-manivannan.sadhasivam@linaro.org>
X-Mailing-List: dmaengine@vger.kernel.org

Add Tx DMA support for UART in Actions Semi Owl SoCs. If no DMA property is specified in DT, the driver falls back to the default interrupt mode.
Signed-off-by: Manivannan Sadhasivam
---
 drivers/tty/serial/owl-uart.c | 172 +++++++++++++++++++++++++++++++++-
 1 file changed, 171 insertions(+), 1 deletion(-)

diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
index 29a6dc6a8d23..1b3016db7ae2 100644
--- a/drivers/tty/serial/owl-uart.c
+++ b/drivers/tty/serial/owl-uart.c
@@ -11,6 +11,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -48,6 +50,8 @@
 #define OWL_UART_CTL_RXIE	BIT(18)
 #define OWL_UART_CTL_TXIE	BIT(19)
 #define OWL_UART_CTL_LBEN	BIT(20)
+#define OWL_UART_CTL_DRCR	BIT(21)
+#define OWL_UART_CTL_DTCR	BIT(22)

 #define OWL_UART_STAT_RIP	BIT(0)
 #define OWL_UART_STAT_TIP	BIT(1)
@@ -71,12 +75,21 @@ struct owl_uart_info {
 struct owl_uart_port {
 	struct uart_port port;
 	struct clk *clk;
+
+	struct dma_chan *tx_ch;
+	dma_addr_t tx_dma_buf;
+	dma_cookie_t dma_tx_cookie;
+	u32 tx_size;
+	bool tx_dma;
+	bool dma_tx_running;
 };

 #define to_owl_uart_port(prt) container_of(prt, struct owl_uart_port, prt)

 static struct owl_uart_port *owl_uart_ports[OWL_UART_PORT_NUM];

+static void owl_uart_dma_start_tx(struct owl_uart_port *owl_port);
+
 static inline void owl_uart_write(struct uart_port *port, u32 val,
				  unsigned int off)
 {
 	writel(val, port->membase + off);
@@ -115,6 +128,83 @@ static unsigned int owl_uart_get_mctrl(struct uart_port *port)
 	return mctrl;
 }

+static void owl_uart_dma_tx_callback(void *data)
+{
+	struct owl_uart_port *owl_port = data;
+	struct uart_port *port = &owl_port->port;
+	struct circ_buf *xmit = &port->state->xmit;
+	unsigned long flags;
+	u32 val;
+
+	dma_sync_single_for_cpu(port->dev, owl_port->tx_dma_buf,
+				UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	spin_lock_irqsave(&port->lock, flags);
+
+	owl_port->dma_tx_running = 0;
+
+	xmit->tail += owl_port->tx_size;
+	xmit->tail &= UART_XMIT_SIZE - 1;
+	port->icount.tx += owl_port->tx_size;
+
+	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+		uart_write_wakeup(port);
+
+	/* Disable Tx DRQ */
+	val = owl_uart_read(port, OWL_UART_CTL);
+	val &= ~OWL_UART_CTL_TXDE;
+	owl_uart_write(port, val, OWL_UART_CTL);
+
+	/* Clear pending Tx IRQ */
+	val = owl_uart_read(port, OWL_UART_STAT);
+	val |= OWL_UART_STAT_TIP;
+	owl_uart_write(port, val, OWL_UART_STAT);
+
+	if (!uart_circ_empty(xmit) && !uart_tx_stopped(port))
+		owl_uart_dma_start_tx(owl_port);
+
+	spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void owl_uart_dma_start_tx(struct owl_uart_port *owl_port)
+{
+	struct uart_port *port = &owl_port->port;
+	struct circ_buf *xmit = &port->state->xmit;
+	struct dma_async_tx_descriptor *desc;
+	u32 val;
+
+	if (uart_tx_stopped(port) || uart_circ_empty(xmit) ||
+	    owl_port->dma_tx_running)
+		return;
+
+	dma_sync_single_for_device(port->dev, owl_port->tx_dma_buf,
+				   UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	owl_port->tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail,
+					    UART_XMIT_SIZE);
+
+	desc = dmaengine_prep_slave_single(owl_port->tx_ch,
+					   owl_port->tx_dma_buf + xmit->tail,
+					   owl_port->tx_size, DMA_MEM_TO_DEV,
+					   DMA_PREP_INTERRUPT);
+	if (!desc)
+		return;
+
+	desc->callback = owl_uart_dma_tx_callback;
+	desc->callback_param = owl_port;
+
+	/* Enable Tx DRQ */
+	val = owl_uart_read(port, OWL_UART_CTL);
+	val &= ~OWL_UART_CTL_TXIE;
+	val |= OWL_UART_CTL_TXDE | OWL_UART_CTL_DTCR;
+	owl_uart_write(port, val, OWL_UART_CTL);
+
+	/* Start Tx DMA transfer */
+	owl_port->dma_tx_running = true;
+	owl_port->dma_tx_cookie = dmaengine_submit(desc);
+	dma_async_issue_pending(owl_port->tx_ch);
+}
+
 static unsigned int owl_uart_tx_empty(struct uart_port *port)
 {
 	unsigned long flags;
@@ -159,6 +249,7 @@ static void owl_uart_stop_tx(struct uart_port *port)

 static void owl_uart_start_tx(struct uart_port *port)
 {
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
 	u32 val;

 	if (uart_tx_stopped(port)) {
@@ -166,6 +257,11 @@ static void owl_uart_start_tx(struct uart_port *port)
 		return;
 	}

+	if (owl_port->tx_dma) {
+		owl_uart_dma_start_tx(owl_port);
+		return;
+	}
+
 	val = owl_uart_read(port, OWL_UART_STAT);
 	val |= OWL_UART_STAT_TIP;
 	owl_uart_write(port, val, OWL_UART_STAT);
@@ -273,13 +369,27 @@ static irqreturn_t owl_uart_irq(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }

+static void owl_dma_channel_free(struct owl_uart_port *owl_port)
+{
+	dmaengine_terminate_all(owl_port->tx_ch);
+	dma_release_channel(owl_port->tx_ch);
+	dma_unmap_single(owl_port->port.dev, owl_port->tx_dma_buf,
+			 UART_XMIT_SIZE, DMA_TO_DEVICE);
+	owl_port->dma_tx_running = false;
+	owl_port->tx_ch = NULL;
+}
+
 static void owl_uart_shutdown(struct uart_port *port)
 {
-	u32 val;
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
 	unsigned long flags;
+	u32 val;

 	spin_lock_irqsave(&port->lock, flags);

+	if (owl_port->tx_dma)
+		owl_dma_channel_free(owl_port);
+
 	val = owl_uart_read(port, OWL_UART_CTL);
 	val &= ~(OWL_UART_CTL_TXIE | OWL_UART_CTL_RXIE
 		| OWL_UART_CTL_TXDE | OWL_UART_CTL_RXDE | OWL_UART_CTL_EN);
@@ -290,6 +400,62 @@ static void owl_uart_shutdown(struct uart_port *port)
 	free_irq(port->irq, port);
 }

+static int owl_uart_dma_tx_init(struct uart_port *port)
+{
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
+	struct device *dev = port->dev;
+	struct dma_slave_config slave_config;
+	int ret;
+
+	owl_port->tx_dma = false;
+
+	/* Request DMA TX channel */
+	owl_port->tx_ch = dma_request_slave_channel(dev, "tx");
+	if (!owl_port->tx_ch) {
+		dev_info(dev, "tx dma alloc failed\n");
+		return -ENODEV;
+	}
+
+	owl_port->tx_dma_buf = dma_map_single(dev,
+					      owl_port->port.state->xmit.buf,
+					      UART_XMIT_SIZE, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, owl_port->tx_dma_buf)) {
+		ret = -ENOMEM;
+		goto alloc_err;
+	}
+
+	/* Configure DMA channel */
+	memset(&slave_config, 0, sizeof(slave_config));
+	slave_config.direction = DMA_MEM_TO_DEV;
+	slave_config.dst_addr = port->mapbase + OWL_UART_TXDAT;
+	slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+
+	ret = dmaengine_slave_config(owl_port->tx_ch, &slave_config);
+	if (ret < 0) {
+		dev_err(dev, "tx dma channel config failed\n");
+		ret = -ENODEV;
+		goto map_err;
+	}
+
+	/* Use DMA buffer size as the FIFO size */
+	port->fifosize = UART_XMIT_SIZE;
+
+	/* Set DMA flag */
+	owl_port->tx_dma = true;
+	owl_port->dma_tx_running = false;
+
+	return 0;
+
+map_err:
+	dma_unmap_single(dev, owl_port->tx_dma_buf, UART_XMIT_SIZE,
+			 DMA_TO_DEVICE);
+alloc_err:
+	dma_release_channel(owl_port->tx_ch);
+	owl_port->tx_ch = NULL;
+
+	return ret;
+}
+
 static int owl_uart_startup(struct uart_port *port)
 {
 	u32 val;
@@ -301,6 +467,10 @@ static int owl_uart_startup(struct uart_port *port)
 	if (ret)
 		return ret;

+	ret = owl_uart_dma_tx_init(port);
+	if (!ret)
+		dev_info(port->dev, "using DMA for tx\n");
+
 	spin_lock_irqsave(&port->lock, flags);

 	val = owl_uart_read(port, OWL_UART_STAT);