From patchwork Fri Jan 11 11:08:45 2019
X-Patchwork-Submitter: Faiz Abbas
X-Patchwork-Id: 10757847
From: Faiz Abbas
Subject: [PATCH 1/7] mmc: sdhci: add support for using external DMA devices
Date: Fri, 11 Jan 2019 16:38:45 +0530
Message-ID: <20190111110851.6805-2-faiz_abbas@ti.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20190111110851.6805-1-faiz_abbas@ti.com>
References: <20190111110851.6805-1-faiz_abbas@ti.com>
X-Mailing-List: linux-mmc@vger.kernel.org

From: Chunyan Zhang

Some standard SD host controllers can support both
external DMA controllers and ADMA/SDMA, in which the SD host controller
itself acts as the DMA master. TI's OMAP controller is one such example.

Currently the generic SDHCI code supports ADMA/SDMA integrated in the
host controller, but has no support for external DMA controllers
implemented using dmaengine, meaning that custom code is needed for any
system that uses an external DMA controller with SDHCI.

Fixes by Faiz Abbas:
1. Map scatterlists before calling dmaengine_prep_slave_sg().
2. Use the dma_async_*() calls inside the send_command() path and
   synchronize once at the start of each request.

Signed-off-by: Chunyan Zhang
Signed-off-by: Faiz Abbas
Acked-by: Adrian Hunter
---
 drivers/mmc/host/Kconfig |   3 +
 drivers/mmc/host/sdhci.c | 266 ++++++++++++++++++++++++++++++++++++++-
 drivers/mmc/host/sdhci.h |   8 ++
 3 files changed, 273 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index e26b8145efb3..333292e8ecdd 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -999,3 +999,6 @@ config MMC_SDHCI_AM654
 
 	  If you have a controller with this interface, say Y or M here.
 	  If unsure, say N.
+
+config MMC_SDHCI_EXTERNAL_DMA
+	bool
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index a22e11a65658..4a9044c06e21 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -14,6 +14,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dmaengine.h>
 #include <linux/ktime.h>
 #include <linux/highmem.h>
 #include <linux/io.h>
@@ -1118,6 +1119,226 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 	}
 }
 
+#if IS_ENABLED(CONFIG_MMC_SDHCI_EXTERNAL_DMA)
+static int sdhci_external_dma_init(struct sdhci_host *host)
+{
+	int ret = 0;
+	struct mmc_host *mmc = host->mmc;
+
+	host->tx_chan = dma_request_chan(mmc->parent, "tx");
+	if (IS_ERR(host->tx_chan)) {
+		ret = PTR_ERR(host->tx_chan);
+		if (ret != -EPROBE_DEFER)
+			pr_warn("Failed to request TX DMA channel.\n");
+		host->tx_chan = NULL;
+		return ret;
+	}
+
+	host->rx_chan = dma_request_chan(mmc->parent, "rx");
+	if (IS_ERR(host->rx_chan)) {
+		if (host->tx_chan) {
+			dma_release_channel(host->tx_chan);
+			host->tx_chan = NULL;
+		}
+
+		ret = PTR_ERR(host->rx_chan);
+		if (ret != -EPROBE_DEFER)
+			pr_warn("Failed to request RX DMA channel.\n");
+		host->rx_chan = NULL;
+	}
+
+	return ret;
+}
+
+static inline struct dma_chan *
+sdhci_external_dma_channel(struct sdhci_host *host, struct mmc_data *data)
+{
+	return data->flags & MMC_DATA_WRITE ? host->tx_chan : host->rx_chan;
+}
+
+static int sdhci_external_dma_setup(struct sdhci_host *host,
+				    struct mmc_command *cmd)
+{
+	int ret, i;
+	struct dma_async_tx_descriptor *desc;
+	struct mmc_data *data = cmd->data;
+	struct dma_chan *chan;
+	struct dma_slave_config cfg;
+	dma_cookie_t cookie;
+	int sg_cnt;
+
+	if (!host->mapbase)
+		return -EINVAL;
+
+	cfg.src_addr = host->mapbase + SDHCI_BUFFER;
+	cfg.dst_addr = host->mapbase + SDHCI_BUFFER;
+	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cfg.src_maxburst = data->blksz / 4;
+	cfg.dst_maxburst = data->blksz / 4;
+
+	/* Sanity check: all the SG entries must be aligned by block size. */
+	for (i = 0; i < data->sg_len; i++) {
+		if ((data->sg + i)->length % data->blksz)
+			return -EINVAL;
+	}
+
+	chan = sdhci_external_dma_channel(host, data);
+
+	ret = dmaengine_slave_config(chan, &cfg);
+	if (ret)
+		return ret;
+
+	sg_cnt = sdhci_pre_dma_transfer(host, data, COOKIE_MAPPED);
+	if (sg_cnt <= 0)
+		return -EINVAL;
+
+	desc = dmaengine_prep_slave_sg(chan, data->sg, data->sg_len,
+				       mmc_get_dma_dir(data),
+				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!desc)
+		return -EINVAL;
+
+	desc->callback = NULL;
+	desc->callback_param = NULL;
+
+	cookie = dmaengine_submit(desc);
+	if (cookie < 0)
+		ret = cookie;
+
+	return ret;
+}
+
+static void sdhci_external_dma_release(struct sdhci_host *host)
+{
+	if (host->tx_chan) {
+		dma_release_channel(host->tx_chan);
+		host->tx_chan = NULL;
+	}
+
+	if (host->rx_chan) {
+		dma_release_channel(host->rx_chan);
+		host->rx_chan = NULL;
+	}
+
+	sdhci_switch_external_dma(host, false);
+}
+
+static int __sdhci_external_dma_prepare_data(struct sdhci_host *host,
+					     struct mmc_command *cmd)
+{
+	struct mmc_data *data = cmd->data;
+
+	host->data_timeout = 0;
+
+	if (sdhci_data_line_cmd(cmd))
+		sdhci_set_timeout(host, cmd);
+
+	WARN_ON(host->data);
+
+	/* Sanity checks */
+	WARN_ON(data->blksz * data->blocks > 524288);
+	WARN_ON(data->blksz > host->mmc->max_blk_size);
+	WARN_ON(data->blocks > 65535);
+
+	host->flags |= SDHCI_REQ_USE_DMA;
+	host->data = data;
+	host->data_early = 0;
+	host->data->bytes_xfered = 0;
+
+	sdhci_set_transfer_irqs(host);
+
+	/*
+	 * For Version 4.10 onwards, if v4 mode is enabled, 32-bit Block Count
+	 * can be supported, in that case 16-bit block count register must be 0.
+	 */
+	if (host->version >= SDHCI_SPEC_410 && host->v4_mode &&
+	    (host->quirks2 & SDHCI_QUIRK2_USE_32BIT_BLK_CNT)) {
+		if (sdhci_readw(host, SDHCI_BLOCK_COUNT))
+			sdhci_writew(host, 0, SDHCI_BLOCK_COUNT);
+		sdhci_writew(host, data->blocks, SDHCI_32BIT_BLK_CNT);
+	} else {
+		sdhci_writew(host, data->blocks, SDHCI_BLOCK_COUNT);
+	}
+
+	return 0;
+}
+
+static void sdhci_external_dma_prepare_data(struct sdhci_host *host,
+					    struct mmc_command *cmd)
+{
+	struct mmc_data *data = cmd->data;
+
+	if (!data)
+		return;
+
+	if (sdhci_external_dma_setup(host, cmd) ||
+	    __sdhci_external_dma_prepare_data(host, cmd)) {
+		sdhci_external_dma_release(host);
+		pr_err("%s: Cannot use external DMA, switch to the DMA/PIO which standard SDHCI provides.\n",
+		       mmc_hostname(host->mmc));
+		sdhci_prepare_data(host, cmd);
+	}
+}
+
+static void sdhci_external_dma_pre_transfer(struct sdhci_host *host,
+					    struct mmc_command *cmd)
+{
+	struct dma_chan *chan;
+
+	if (!cmd->data || cmd->opcode == MMC_SET_BLOCK_COUNT)
+		return;
+
+	sdhci_writew(host, cmd->data->blksz, SDHCI_BLOCK_SIZE);
+	chan = sdhci_external_dma_channel(host, cmd->data);
+	if (chan)
+		dma_async_issue_pending(chan);
+}
+
+static int sdhci_external_dma_cleanup(struct sdhci_host *host,
+				      struct mmc_data *data)
+{
+	struct dma_chan *chan = sdhci_external_dma_channel(host, data);
+	int ret = 0;
+
+	if (chan)
+		ret = dmaengine_terminate_async(chan);
+
+	return ret;
+}
+#else
+static int sdhci_external_dma_init(struct sdhci_host *host)
+{
+	return -EOPNOTSUPP;
+}
+
+static void sdhci_external_dma_release(struct sdhci_host *host)
+{}
+
+static void sdhci_external_dma_prepare_data(struct sdhci_host *host,
+					    struct mmc_command *cmd)
+{
+	/* If MMC_SDHCI_EXTERNAL_DMA not supported, PIO will be used */
+	sdhci_prepare_data(host, cmd);
+}
+
+static void sdhci_external_dma_pre_transfer(struct sdhci_host *host,
+					    struct mmc_command *cmd)
+{}
+
+static int sdhci_external_dma_cleanup(struct sdhci_host *host,
+				      struct mmc_data *data)
+{
+	return 0;
+}
+#endif
+
+void sdhci_switch_external_dma(struct sdhci_host *host, bool en)
+{
+	host->use_external_dma = en;
+}
+EXPORT_SYMBOL_GPL(sdhci_switch_external_dma);
+
 static inline bool sdhci_auto_cmd12(struct sdhci_host *host,
 				    struct mmc_request *mrq)
 {
@@ -1374,7 +1595,10 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd)
 		host->data_cmd = cmd;
 	}
 
-	sdhci_prepare_data(host, cmd);
+	if (host->use_external_dma)
+		sdhci_external_dma_prepare_data(host, cmd);
+	else
+		sdhci_prepare_data(host, cmd);
 
 	sdhci_writel(host, cmd->arg, SDHCI_ARGUMENT);
 
@@ -1416,6 +1640,9 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd)
 		timeout += 10 * HZ;
 	sdhci_mod_timer(host, cmd->mrq, timeout);
 
+	if (host->use_external_dma)
+		sdhci_external_dma_pre_transfer(host, cmd);
+
 	sdhci_writew(host, SDHCI_MAKE_CMD(cmd->opcode, flags), SDHCI_COMMAND);
 }
 EXPORT_SYMBOL_GPL(sdhci_send_command);
@@ -1781,6 +2008,11 @@ void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
 
 	sdhci_led_activate(host);
 
+	if (host->use_external_dma && mrq->data) {
+		struct dma_chan *chan = sdhci_external_dma_channel(host,
+								    mrq->data);
+		dmaengine_synchronize(chan);
+	}
 	/*
 	 * Ensure we don't send the STOP for non-SET_BLOCK_COUNTED
 	 * requests if Auto-CMD12 is enabled.
@@ -2658,6 +2890,8 @@ static bool sdhci_request_done(struct sdhci_host *host)
 				dma_unmap_sg(mmc_dev(host->mmc), data->sg,
 					     data->sg_len,
 					     mmc_get_dma_dir(data));
+				if (host->use_external_dma)
+					sdhci_external_dma_cleanup(host, data);
 			}
 			data->host_cookie = COOKIE_UNMAPPED;
 		}
@@ -3692,12 +3926,15 @@ int sdhci_setup_host(struct sdhci_host *host)
 		       mmc_hostname(mmc), host->version);
 	}
 
-	if (host->quirks & SDHCI_QUIRK_FORCE_DMA)
+	if (host->quirks & SDHCI_QUIRK_FORCE_DMA) {
 		host->flags |= SDHCI_USE_SDMA;
-	else if (!(host->caps & SDHCI_CAN_DO_SDMA))
+	} else if (!(host->caps & SDHCI_CAN_DO_SDMA)) {
 		DBG("Controller doesn't have SDMA capability\n");
-	else
+	} else if (host->use_external_dma) {
+		/* Using dma-names to detect external dma capability */
+	} else {
 		host->flags |= SDHCI_USE_SDMA;
+	}
 
 	if ((host->quirks & SDHCI_QUIRK_BROKEN_DMA) &&
 	    (host->flags & SDHCI_USE_SDMA)) {
@@ -3785,6 +4022,19 @@ int sdhci_setup_host(struct sdhci_host *host)
 		}
 	}
 
+	if (host->use_external_dma) {
+		ret = sdhci_external_dma_init(host);
+		if (ret == -EPROBE_DEFER)
+			goto unreg;
+
+		/*
+		 * Fall back to use the DMA/PIO integrated in standard SDHCI
+		 * instead of external DMA devices.
+		 */
+		if (ret)
+			sdhci_switch_external_dma(host, false);
+	}
+
 	/*
 	 * If we use DMA, then it's up to the caller to set the DMA
 	 * mask, but PIO does not need the hw shim so we set a new
@@ -4201,6 +4451,10 @@ void sdhci_cleanup_host(struct sdhci_host *host)
 		dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
 				  host->adma_table_sz, host->align_buffer,
 				  host->align_addr);
+
+	if (host->use_external_dma)
+		sdhci_external_dma_release(host);
+
 	host->adma_table = NULL;
 	host->align_buffer = NULL;
 }
@@ -4247,6 +4501,7 @@ int __sdhci_add_host(struct sdhci_host *host)
 
 	pr_info("%s: SDHCI controller on %s [%s] using %s\n",
 		mmc_hostname(mmc), host->hw_name, dev_name(mmc_dev(mmc)),
+		host->use_external_dma ? "External DMA" :
 		(host->flags & SDHCI_USE_ADMA) ?
 		(host->flags & SDHCI_USE_64_BIT_DMA) ? "ADMA 64-bit" : "ADMA" :
"DMA" : "PIO"); @@ -4335,6 +4590,9 @@ void sdhci_remove_host(struct sdhci_host *host, int dead) host->adma_table_sz, host->align_buffer, host->align_addr); + if (host->use_external_dma) + sdhci_external_dma_release(host); + host->adma_table = NULL; host->align_buffer = NULL; } diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index 6cc9a3c2ac66..7a52823ebef4 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -482,6 +482,7 @@ struct sdhci_host { int irq; /* Device IRQ */ void __iomem *ioaddr; /* Mapped address */ + phys_addr_t mapbase; /* physical address base */ char *bounce_buffer; /* For packing SDMA reads/writes */ dma_addr_t bounce_addr; unsigned int bounce_buffer_size; @@ -531,6 +532,7 @@ struct sdhci_host { bool pending_reset; /* Cmd/data reset is pending */ bool irq_wake_enabled; /* IRQ wakeup is enabled */ bool v4_mode; /* Host Version 4 Enable */ + bool use_external_dma; /* Host selects to use external DMA */ struct mmc_request *mrqs_done[SDHCI_MAX_MRQS]; /* Requests done */ struct mmc_command *cmd; /* Current command */ @@ -559,6 +561,11 @@ struct sdhci_host { struct timer_list timer; /* Timer for timeouts */ struct timer_list data_timer; /* Timer for data timeouts */ +#if IS_ENABLED(CONFIG_MMC_SDHCI_EXTERNAL_DMA) + struct dma_chan *rx_chan; + struct dma_chan *tx_chan; +#endif + u32 caps; /* CAPABILITY_0 */ u32 caps1; /* CAPABILITY_1 */ bool read_caps; /* Capability flags have been read */ @@ -792,5 +799,6 @@ void sdhci_start_tuning(struct sdhci_host *host); void sdhci_end_tuning(struct sdhci_host *host); void sdhci_reset_tuning(struct sdhci_host *host); void sdhci_send_tuning(struct sdhci_host *host, u32 opcode); +void sdhci_switch_external_dma(struct sdhci_host *host, bool en); #endif /* __SDHCI_HW_H */