From patchwork Mon Oct 3 12:50:38 2016
X-Patchwork-Submitter: Sanchayan
X-Patchwork-Id: 9360583
From: Sanchayan Maity <maitysanchayan@gmail.com>
To: broonie@kernel.org, shawnguo@kernel.org
Subject: [PATCH v1 2/2] spi: spi-fsl-dspi: Add DMA support for Vybrid
Date: Mon, 3 Oct 2016 18:20:38 +0530
Message-Id: <8ccf4f2f4e6739410d9e242f9d0e3f3bcc76d0e2.1475498805.git.maitysanchayan@gmail.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <63fc108e2df00d2297a9b7014955f203bc802f34.1475498805.git.maitysanchayan@gmail.com>
References: <63fc108e2df00d2297a9b7014955f203bc802f34.1475498805.git.maitysanchayan@gmail.com>
Cc: devicetree@vger.kernel.org, bhuvanchandra.dv@toradex.com,
	linux-kernel@vger.kernel.org, stefan@agner.ch, linux-spi@vger.kernel.org,
	Sanchayan Maity <maitysanchayan@gmail.com>,
	linux-arm-kernel@lists.infradead.org

Add DMA support for Vybrid.

The eDMA engine moves 32-bit PUSHR/POPR words between memory and the
DSPI FIFOs, driven by the TFFF and RFDF DMA requests. TX and RX each use
a coherent bounce buffer of DSPI_DMA_BUFSIZE bytes, and transfers larger
than one buffer are split across multiple DMA rounds. If the "tx" and
"rx" DMA channels cannot be obtained at probe time, the driver only
warns and keeps the existing interrupt-driven transfer modes.

Signed-off-by: Sanchayan Maity <maitysanchayan@gmail.com>
---
 drivers/spi/spi-fsl-dspi.c | 293 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 293 insertions(+)

diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
index 9e9dadb..ada50ee 100644
--- a/drivers/spi/spi-fsl-dspi.c
+++ b/drivers/spi/spi-fsl-dspi.c
@@ -15,6 +15,8 @@
 
 #include <linux/clk.h>
 #include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
 #include <linux/err.h>
 #include <linux/errno.h>
 #include <linux/interrupt.h>
@@ -40,6 +42,7 @@
 #define TRAN_STATE_WORD_ODD_NUM 0x04
 
 #define DSPI_FIFO_SIZE 4
+#define DSPI_DMA_BUFSIZE (DSPI_FIFO_SIZE * 1024)
 
 #define SPI_MCR 0x00
 #define SPI_MCR_MASTER (1 << 31)
@@ -71,6 +74,11 @@
 #define SPI_SR_EOQF 0x10000000
 #define SPI_SR_TCFQF 0x80000000
 
+#define SPI_RSER_TFFFE BIT(25)
+#define SPI_RSER_TFFFD BIT(24)
+#define SPI_RSER_RFDFE BIT(17)
+#define SPI_RSER_RFDFD BIT(16)
+
 #define SPI_RSER 0x30
 #define SPI_RSER_EOQFE 0x10000000
 #define SPI_RSER_TCFQE 0x80000000
@@ -108,6 +116,8 @@
 
 #define SPI_TCR_TCNT_MAX 0x10000
 
+#define DMA_COMPLETION_TIMEOUT msecs_to_jiffies(3000)
+
 struct chip_data {
 	u32 mcr_val;
 	u32 ctar_val;
@@ -117,6 +127,7 @@ struct chip_data {
 enum dspi_trans_mode {
 	DSPI_EOQ_MODE = 0,
 	DSPI_TCFQ_MODE,
+	DSPI_DMA_MODE,
 };
 
 struct fsl_dspi_devtype_data {
@@ -139,6 +150,22 @@ static const struct fsl_dspi_devtype_data ls2085a_data = {
 	.max_clock_factor = 8,
 };
 
+struct fsl_dspi_dma {
+	u32 curr_xfer_len;
+
+	u32 *tx_dma_buf;
+	struct dma_chan *chan_tx;
+	dma_addr_t tx_dma_phys;
+	struct completion cmd_tx_complete;
+	struct dma_async_tx_descriptor *tx_desc;
+
+	u32 *rx_dma_buf;
+	struct dma_chan *chan_rx;
+	dma_addr_t rx_dma_phys;
+	struct completion cmd_rx_complete;
+	struct dma_async_tx_descriptor *rx_desc;
+};
+
 struct fsl_dspi {
 	struct spi_master *master;
 	struct platform_device *pdev;
@@ -165,6 +192,7 @@ struct fsl_dspi {
 	u32 waitflags;
 
 	u32 spi_tcnt;
+	struct fsl_dspi_dma *dma;
 };
 
 static inline int is_double_byte_mode(struct fsl_dspi *dspi)
@@ -368,6 +396,264 @@ static void dspi_tcfq_read(struct fsl_dspi *dspi)
 	dspi_data_from_popr(dspi, rx_word);
 }
 
+static void dspi_tx_dma_callback(void *arg)
+{
+	struct fsl_dspi *dspi = arg;
+	struct fsl_dspi_dma *dma = dspi->dma;
+
+	complete(&dma->cmd_tx_complete);
+}
+
+static void dspi_rx_dma_callback(void *arg)
+{
+	struct fsl_dspi *dspi = arg;
+	struct fsl_dspi_dma *dma = dspi->dma;
+	int rx_word;
+	int i, len;
+	u16 d;
+
+	rx_word = is_double_byte_mode(dspi);
+
+	len = rx_word ? (dma->curr_xfer_len / 2) : dma->curr_xfer_len;
+
+	if (!(dspi->dataflags & TRAN_STATE_RX_VOID)) {
+		for (i = 0; i < len; i++) {
+			d = dspi->dma->rx_dma_buf[i];
+			rx_word ? (*(u16 *)dspi->rx = d) :
+				(*(u8 *)dspi->rx = d);
+			dspi->rx += rx_word + 1;
+		}
+	}
+
+	complete(&dma->cmd_rx_complete);
+}
+
+static int dspi_next_xfer_dma_submit(struct fsl_dspi *dspi)
+{
+	struct fsl_dspi_dma *dma = dspi->dma;
+	struct device *dev = &dspi->pdev->dev;
+	int time_left;
+	int tx_word;
+	int i, len;
+	u16 val;
+
+	tx_word = is_double_byte_mode(dspi);
+
+	len = tx_word ? (dma->curr_xfer_len / 2) : dma->curr_xfer_len;
+
+	for (i = 0; i < len - 1; i++) {
+		val = tx_word ? *(u16 *) dspi->tx : *(u8 *) dspi->tx;
+		dspi->dma->tx_dma_buf[i] =
+			SPI_PUSHR_TXDATA(val) | SPI_PUSHR_PCS(dspi->cs) |
+			SPI_PUSHR_CTAS(0) | SPI_PUSHR_CONT;
+		dspi->tx += tx_word + 1;
+	}
+
+	val = tx_word ? *(u16 *) dspi->tx : *(u8 *) dspi->tx;
+	dspi->dma->tx_dma_buf[i] = SPI_PUSHR_TXDATA(val) |
+				SPI_PUSHR_PCS(dspi->cs) |
+				SPI_PUSHR_CTAS(0);
+	dspi->tx += tx_word + 1;
+
+	dma->tx_desc = dmaengine_prep_slave_single(dma->chan_tx,
+				dma->tx_dma_phys,
+				DSPI_DMA_BUFSIZE, DMA_MEM_TO_DEV,
+				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!dma->tx_desc) {
+		dev_err(dev, "Not able to get desc for DMA xfer\n");
+		return -EIO;
+	}
+
+	dma->tx_desc->callback = dspi_tx_dma_callback;
+	dma->tx_desc->callback_param = dspi;
+	if (dma_submit_error(dmaengine_submit(dma->tx_desc))) {
+		dev_err(dev, "DMA submit failed\n");
+		return -EINVAL;
+	}
+
+	dma->rx_desc = dmaengine_prep_slave_single(dma->chan_rx,
+				dma->rx_dma_phys,
+				DSPI_DMA_BUFSIZE, DMA_DEV_TO_MEM,
+				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!dma->rx_desc) {
+		dev_err(dev, "Not able to get desc for DMA xfer\n");
+		return -EIO;
+	}
+
+	dma->rx_desc->callback = dspi_rx_dma_callback;
+	dma->rx_desc->callback_param = dspi;
+	if (dma_submit_error(dmaengine_submit(dma->rx_desc))) {
+		dev_err(dev, "DMA submit failed\n");
+		return -EINVAL;
+	}
+
+	reinit_completion(&dspi->dma->cmd_rx_complete);
+	reinit_completion(&dspi->dma->cmd_tx_complete);
+
+	dma_async_issue_pending(dma->chan_rx);
+	dma_async_issue_pending(dma->chan_tx);
+
+	time_left = wait_for_completion_timeout(&dspi->dma->cmd_tx_complete,
+				DMA_COMPLETION_TIMEOUT);
+	if (time_left == 0) {
+		dev_err(dev, "DMA tx timeout\n");
+		dmaengine_terminate_all(dma->chan_tx);
+		dmaengine_terminate_all(dma->chan_rx);
+		return -ETIMEDOUT;
+	}
+
+	time_left = wait_for_completion_timeout(&dspi->dma->cmd_rx_complete,
+				DMA_COMPLETION_TIMEOUT);
+	if (time_left == 0) {
+		dev_err(dev, "DMA rx timeout\n");
+		dmaengine_terminate_all(dma->chan_tx);
+		dmaengine_terminate_all(dma->chan_rx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int dspi_dma_xfer(struct fsl_dspi *dspi)
+{
+	struct fsl_dspi_dma *dma = dspi->dma;
+	struct device *dev = &dspi->pdev->dev;
+	int curr_remaining_bytes;
+	int ret = 0;
+
+	curr_remaining_bytes = dspi->len;
+	while (curr_remaining_bytes) {
+		regmap_write(dspi->regmap, SPI_RSER, 0);
+		regmap_write(dspi->regmap, SPI_RSER,
+				SPI_RSER_TFFFE | SPI_RSER_TFFFD |
+				SPI_RSER_RFDFE | SPI_RSER_RFDFD);
+
+		/* Check if current transfer fits the DMA buffer */
+		dma->curr_xfer_len = curr_remaining_bytes;
+		if (curr_remaining_bytes > DSPI_DMA_BUFSIZE / sizeof(u32))
+			dma->curr_xfer_len = DSPI_DMA_BUFSIZE / sizeof(u32);
+
+		ret = dspi_next_xfer_dma_submit(dspi);
+		if (ret) {
+			dev_err(dev, "DMA transfer failed\n");
+			goto exit;
+
+		} else {
+			curr_remaining_bytes -= dma->curr_xfer_len;
+			if (curr_remaining_bytes < 0)
+				curr_remaining_bytes = 0;
+			dspi->len = curr_remaining_bytes;
+		}
+	}
+
+exit:
+	return ret;
+}
+
+static int dspi_request_dma(struct fsl_dspi *dspi, phys_addr_t phy_addr)
+{
+	struct fsl_dspi_dma *dma;
+	struct dma_slave_config cfg;
+	struct device *dev = &dspi->pdev->dev;
+	int ret;
+
+	dma = devm_kzalloc(dev, sizeof(*dma), GFP_KERNEL);
+	if (!dma)
+		return -ENOMEM;
+
+	dma->chan_rx = dma_request_slave_channel(dev, "rx");
+	if (!dma->chan_rx) {
+		dev_err(dev, "rx dma channel not available\n");
+		ret = -ENODEV;
+		return ret;
+	}
+
+	dma->chan_tx = dma_request_slave_channel(dev, "tx");
+	if (!dma->chan_tx) {
+		dev_err(dev, "tx dma channel not available\n");
+		ret = -ENODEV;
+		goto err_tx_channel;
+	}
+
+	dma->tx_dma_buf = dma_alloc_coherent(dev, DSPI_DMA_BUFSIZE,
+				&dma->tx_dma_phys, GFP_KERNEL);
+	if (!dma->tx_dma_buf) {
+		ret = -ENOMEM;
+		goto err_tx_dma_buf;
+	}
+
+	dma->rx_dma_buf = dma_alloc_coherent(dev, DSPI_DMA_BUFSIZE,
+				&dma->rx_dma_phys, GFP_KERNEL);
+	if (!dma->rx_dma_buf) {
+		ret = -ENOMEM;
+		goto err_rx_dma_buf;
+	}
+
+	cfg.src_addr = phy_addr + SPI_POPR;
+	cfg.dst_addr = phy_addr + SPI_PUSHR;
+	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cfg.src_maxburst = 1;
+	cfg.dst_maxburst = 1;
+
+	cfg.direction = DMA_DEV_TO_MEM;
+	ret = dmaengine_slave_config(dma->chan_rx, &cfg);
+	if (ret) {
+		dev_err(dev, "can't configure rx dma channel\n");
+		ret = -EINVAL;
+		goto err_slave_config;
+	}
+
+	cfg.direction = DMA_MEM_TO_DEV;
+	ret = dmaengine_slave_config(dma->chan_tx, &cfg);
+	if (ret) {
+		dev_err(dev, "can't configure tx dma channel\n");
+		ret = -EINVAL;
+		goto err_slave_config;
+	}
+
+	dspi->dma = dma;
+	dspi->devtype_data->trans_mode = DSPI_DMA_MODE;
+	init_completion(&dma->cmd_tx_complete);
+	init_completion(&dma->cmd_rx_complete);
+
+	return 0;
+
+err_slave_config:
+	devm_kfree(dev, dma->rx_dma_buf);
+err_rx_dma_buf:
+	devm_kfree(dev, dma->tx_dma_buf);
+err_tx_dma_buf:
+	dma_release_channel(dma->chan_tx);
+err_tx_channel:
+	dma_release_channel(dma->chan_rx);
+
+	devm_kfree(dev, dma);
+	dspi->dma = NULL;
+
+	return ret;
+}
+
+static void dspi_release_dma(struct fsl_dspi *dspi)
+{
+	struct fsl_dspi_dma *dma = dspi->dma;
+	struct device *dev = &dspi->pdev->dev;
+
+	if (dma) {
+		if (dma->chan_tx) {
+			dma_unmap_single(dev, dma->tx_dma_phys,
+					DSPI_DMA_BUFSIZE, DMA_TO_DEVICE);
+			dma_release_channel(dma->chan_tx);
+		}
+
+		if (dma->chan_rx) {
+			dma_unmap_single(dev, dma->rx_dma_phys,
+					DSPI_DMA_BUFSIZE, DMA_FROM_DEVICE);
+			dma_release_channel(dma->chan_rx);
+		}
+	}
+}
+
 static int dspi_transfer_one_message(struct spi_master *master,
 		struct spi_message *message)
 {
@@ -424,6 +710,9 @@ static int dspi_transfer_one_message(struct spi_master *master,
 		regmap_write(dspi->regmap, SPI_RSER, SPI_RSER_TCFQE);
 		dspi_tcfq_write(dspi);
 		break;
+	case DSPI_DMA_MODE:
+		status = dspi_dma_xfer(dspi);
+		goto out;
 	default:
 		dev_err(&dspi->pdev->dev, "unsupported trans_mode %u\n",
 			trans_mode);
@@ -730,6 +1019,9 @@ static int dspi_probe(struct platform_device *pdev)
 	}
 	clk_prepare_enable(dspi->clk);
 
+	if (dspi_request_dma(dspi, res->start))
+		dev_warn(&pdev->dev, "can't get dma channels\n");
+
 	master->max_speed_hz =
 		clk_get_rate(dspi->clk) / dspi->devtype_data->max_clock_factor;
 
@@ -758,6 +1050,7 @@ static int dspi_remove(struct platform_device *pdev)
 	struct fsl_dspi *dspi = spi_master_get_devdata(master);
 
 	/* Disconnect from the SPI framework */
+	dspi_release_dma(dspi);
 	clk_disable_unprepare(dspi->clk);
 	spi_unregister_master(dspi->master);
 	spi_master_put(dspi->master);
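
Illustration, not part of the patch itself: because the eDMA engine writes complete 32-bit words to SPI_PUSHR, dspi_next_xfer_dma_submit() pre-packs a command-plus-data word for every frame into the TX bounce buffer, keeping CONT set on all frames except the last so the chip select stays asserted for the whole transfer. The sketch below shows that packing in standalone C; the EX_* macros and ex_pack_pushr() are names invented for this sketch, the driver uses its own SPI_PUSHR_* macros, and the authoritative field layout is the PUSHR register description in the Vybrid DSPI chapter.

/*
 * Sketch of the PUSHR command/data word layout used by the DMA TX path.
 * The EX_* names and ex_pack_pushr() are illustrative only; bit positions
 * approximate the SPI_PUSHR_* macros in spi-fsl-dspi.c.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EX_PUSHR_CONT		(1u << 31)	     /* keep PCS asserted after this frame */
#define EX_PUSHR_CTAS(x)	(((x) & 0x3u) << 28) /* clock-and-transfer attribute select */
#define EX_PUSHR_PCS(cs)	((1u << (cs)) << 16) /* peripheral chip select, one-hot */
#define EX_PUSHR_TXDATA(d)	((d) & 0xffffu)	     /* frame data, up to 16 bits */

/* Pack one frame; the final frame drops CONT so PCS deasserts at the end. */
static uint32_t ex_pack_pushr(uint16_t data, unsigned int cs, bool last)
{
	uint32_t word = EX_PUSHR_TXDATA(data) | EX_PUSHR_PCS(cs) | EX_PUSHR_CTAS(0);

	if (!last)
		word |= EX_PUSHR_CONT;
	return word;
}

int main(void)
{
	/* Two 16-bit frames on chip select 0, mirroring the loop in
	 * dspi_next_xfer_dma_submit(): CONT on the first, cleared on the last. */
	printf("0x%08x\n", (unsigned int)ex_pack_pushr(0xaa55, 0, false));
	printf("0x%08x\n", (unsigned int)ex_pack_pushr(0x1234, 0, true));
	return 0;
}

Note also that the DMA path is only taken when dspi_request_dma() succeeds, which requires the DSPI device tree node to provide eDMA channels named "tx" and "rx" (dma_request_slave_channel() looks them up by exactly those names); otherwise probe merely warns and the driver stays on the interrupt-driven EOQ/TCFQ modes.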