From patchwork Fri Jun 25 05:22:09 2021
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 12343627
From: Vinod Koul
To: Bjorn Andersson, Mark Brown, Wolfram Sang
Cc: Vinod Koul, Andy Gross, Sumit Semwal, Douglas Anderson, Matthias Kaehlcke, linux-spi@vger.kernel.org, linux-i2c@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/5] soc: qcom: geni: move GENI_IF_DISABLE_RO to common header
Date: Fri, 25 Jun 2021 10:52:09 +0530
Message-Id: <20210625052213.32260-2-vkoul@kernel.org>
In-Reply-To: <20210625052213.32260-1-vkoul@kernel.org>
References: <20210625052213.32260-1-vkoul@kernel.org>

GENI_IF_DISABLE_RO is used by the GENI SPI driver as well to check the status of GENI, so move it to the common header qcom-geni-se.h.

Reviewed-by: Bjorn Andersson
Signed-off-by: Vinod Koul
Reviewed-by: Douglas Anderson
---
 drivers/soc/qcom/qcom-geni-se.c | 1 -
 include/linux/qcom-geni-se.h | 4 ++++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index 5bdfb1565c14..fe666ea0c487 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -104,7 +104,6 @@ static const char * const icc_path_names[] = {"qup-core", "qup-config", #define GENI_OUTPUT_CTRL 0x24 #define GENI_CGC_CTRL 0x28 #define GENI_CLK_CTRL_RO 0x60 -#define GENI_IF_DISABLE_RO 0x64 #define GENI_FW_S_REVISION_RO 0x6c #define SE_GENI_BYTE_GRAN 0x254 #define SE_GENI_TX_PACKING_CFG0 0x260 diff --git a/include/linux/qcom-geni-se.h b/include/linux/qcom-geni-se.h index 7c811eebcaab..5114e2144b17 100644 --- a/include/linux/qcom-geni-se.h +++ b/include/linux/qcom-geni-se.h @@ -63,6 +63,7 @@ struct geni_se { #define SE_GENI_STATUS 0x40 #define GENI_SER_M_CLK_CFG 0x48 #define GENI_SER_S_CLK_CFG 0x4c +#define GENI_IF_DISABLE_RO 0x64 #define GENI_FW_REVISION_RO 0x68 #define SE_GENI_CLK_SEL 0x7c #define SE_GENI_DMA_MODE_EN 0x258 @@ -105,6 +106,9 @@ struct geni_se { #define CLK_DIV_MSK GENMASK(15, 4) #define CLK_DIV_SHFT 4 +/* GENI_IF_DISABLE_RO fields */ +#define FIFO_IF_DISABLE (BIT(0)) + /* GENI_FW_REVISION_RO fields */ #define FW_REV_PROTOCOL_MSK GENMASK(15, 8) #define FW_REV_PROTOCOL_SHFT 8

From patchwork Fri Jun 25 05:22:10 2021
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 12343629
From: Vinod Koul
To: Bjorn Andersson, Mark Brown, Wolfram Sang
Cc: Vinod Koul, Andy Gross, Sumit Semwal, Douglas Anderson, Matthias Kaehlcke, linux-spi@vger.kernel.org, linux-i2c@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/5] soc: qcom: geni: Add support for gpi dma
Date: Fri, 25 Jun 2021 10:52:10 +0530
Message-Id: <20210625052213.32260-3-vkoul@kernel.org>
In-Reply-To: <20210625052213.32260-1-vkoul@kernel.org>
References: <20210625052213.32260-1-vkoul@kernel.org>

GPI DMA is one of the DMA modes supported on GENI; add support for enabling that mode. Also improve the documentation of enum geni_se_xfer_mode.
Signed-off-by: Vinod Koul Reviewed-by: Douglas Anderson --- drivers/soc/qcom/qcom-geni-se.c | 29 ++++++++++++++++++++++++++++- include/linux/qcom-geni-se.h | 15 ++++++++++++++- 2 files changed, 42 insertions(+), 2 deletions(-) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index fe666ea0c487..7d649d2cf31e 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -321,6 +321,30 @@ static void geni_se_select_dma_mode(struct geni_se *se) writel_relaxed(val, se->base + SE_GENI_DMA_MODE_EN); } +static void geni_se_select_gpi_mode(struct geni_se *se) +{ + u32 val; + + geni_se_irq_clear(se); + + writel(0, se->base + SE_IRQ_EN); + + val = readl(se->base + SE_GENI_S_IRQ_EN); + val &= ~S_CMD_DONE_EN; + writel(val, se->base + SE_GENI_S_IRQ_EN); + + val = readl(se->base + SE_GENI_M_IRQ_EN); + val &= ~(M_CMD_DONE_EN | M_TX_FIFO_WATERMARK_EN | + M_RX_FIFO_WATERMARK_EN | M_RX_FIFO_LAST_EN); + writel(val, se->base + SE_GENI_M_IRQ_EN); + + writel(GENI_DMA_MODE_EN, se->base + SE_GENI_DMA_MODE_EN); + + val = readl(se->base + SE_GSI_EVENT_EN); + val |= (DMA_RX_EVENT_EN | DMA_TX_EVENT_EN | GENI_M_EVENT_EN | GENI_S_EVENT_EN); + writel(val, se->base + SE_GSI_EVENT_EN); +} + /** * geni_se_select_mode() - Select the serial engine transfer mode * @se: Pointer to the concerned serial engine. @@ -328,7 +352,7 @@ static void geni_se_select_dma_mode(struct geni_se *se) */ void geni_se_select_mode(struct geni_se *se, enum geni_se_xfer_mode mode) { - WARN_ON(mode != GENI_SE_FIFO && mode != GENI_SE_DMA); + WARN_ON(mode != GENI_SE_FIFO && mode != GENI_SE_DMA && mode != GENI_GPI_DMA); switch (mode) { case GENI_SE_FIFO: @@ -337,6 +361,9 @@ void geni_se_select_mode(struct geni_se *se, enum geni_se_xfer_mode mode) case GENI_SE_DMA: geni_se_select_dma_mode(se); break; + case GENI_GPI_DMA: + geni_se_select_gpi_mode(se); + break; case GENI_SE_INVALID: default: break; diff --git a/include/linux/qcom-geni-se.h b/include/linux/qcom-geni-se.h index 5114e2144b17..f5672785c0c4 100644 --- a/include/linux/qcom-geni-se.h +++ b/include/linux/qcom-geni-se.h @@ -8,11 +8,24 @@ #include -/* Transfer mode supported by GENI Serial Engines */ +/** + * enum geni_se_xfer_mode: Transfer modes supported by Serial Engines + * + * @GENI_SE_INVALID: Invalid mode + * @GENI_SE_FIFO: FIFO mode. Data is transferred with SE FIFO + * by programmed IO method + * @GENI_SE_DMA: Serial Engine DMA mode. Data is transferred + * with SE by DMAengine internal to SE + * @GENI_GPI_DMA: GPI DMA mode. Data is transferred using a DMAengine + * configured by a firmware residing on a GSI engine. 
This DMA name is + interchangeably used as GSI or GPI which seem to imply the same DMAengine + */ + enum geni_se_xfer_mode { GENI_SE_INVALID, GENI_SE_FIFO, GENI_SE_DMA, + GENI_GPI_DMA, }; /* Protocols supported by GENI Serial Engines */

From patchwork Fri Jun 25 05:22:11 2021
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 12343631
From: Vinod Koul
To: Bjorn Andersson, Mark Brown, Wolfram Sang
Cc: Vinod Koul, Andy Gross, Sumit Semwal, Douglas Anderson, Matthias Kaehlcke, linux-spi@vger.kernel.org, linux-i2c@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/5] spi: core: add dma_map_dev for dma device
Date: Fri, 25 Jun 2021 10:52:11 +0530
Message-Id: <20210625052213.32260-4-vkoul@kernel.org>
In-Reply-To: <20210625052213.32260-1-vkoul@kernel.org>
References: <20210625052213.32260-1-vkoul@kernel.org>

Some controllers, like qcom geni, need the parent device to be used for DMA mapping, so add a dma_map_dev field and let drivers fill it in to be used as the mapping device.

Signed-off-by: Vinod Koul
---
 drivers/spi/spi.c | 4 ++++
 include/linux/spi/spi.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c index e353b7a9e54e..315f7e7545f7 100644 --- a/drivers/spi/spi.c +++ b/drivers/spi/spi.c @@ -961,11 +961,15 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg) if (ctlr->dma_tx) tx_dev = ctlr->dma_tx->device->dev; + else if (ctlr->dma_map_dev) + tx_dev = ctlr->dma_map_dev; else tx_dev = ctlr->dev.parent; if (ctlr->dma_rx) rx_dev = ctlr->dma_rx->device->dev; + else if (ctlr->dma_map_dev) + rx_dev = ctlr->dma_map_dev; else rx_dev = ctlr->dev.parent; diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h index 74239d65c7fd..4d3f116f5723 100644 --- a/include/linux/spi/spi.h +++ b/include/linux/spi/spi.h @@ -586,6 +586,7 @@ struct spi_controller { bool (*can_dma)(struct spi_controller *ctlr, struct spi_device *spi, struct spi_transfer *xfer); + struct device *dma_map_dev; /* * These hooks are for drivers that want to use the generic

From patchwork Fri Jun 25 05:22:12 2021
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 12343633
From: Vinod Koul
To: Bjorn Andersson, Mark Brown, Wolfram Sang
Cc: Vinod Koul, Andy Gross, Sumit Semwal, Douglas Anderson, Matthias Kaehlcke, linux-spi@vger.kernel.org, linux-i2c@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 4/5] spi: spi-geni-qcom: Add support for GPI dma
Date: Fri, 25 Jun 2021 10:52:12 +0530
Message-Id: <20210625052213.32260-5-vkoul@kernel.org>
In-Reply-To: <20210625052213.32260-1-vkoul@kernel.org>
References: <20210625052213.32260-1-vkoul@kernel.org>

We can use GPI DMA for devices where it is enabled by firmware. Add support for this mode.

Signed-off-by: Vinod Koul
Reported-by: kernel test robot
---
 drivers/spi/spi-geni-qcom.c | 329 ++++++++++++++++++++++++++++++++++--
 1 file changed, 315 insertions(+), 14 deletions(-)

diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c index 3d0d8ddd5772..c64355c246be 100644 --- a/drivers/spi/spi-geni-qcom.c +++ b/drivers/spi/spi-geni-qcom.c @@ -2,6 +2,9 @@ // Copyright (c) 2017-2018, The Linux foundation. All rights reserved. 
#include +#include +#include +#include #include #include #include @@ -63,6 +66,29 @@ #define TIMESTAMP_AFTER BIT(3) #define POST_CMD_DELAY BIT(4) +#define GSI_LOOPBACK_EN (BIT(0)) +#define GSI_CS_TOGGLE (BIT(3)) +#define GSI_CPHA (BIT(4)) +#define GSI_CPOL (BIT(5)) + +#define MAX_TX_SG (3) +#define NUM_SPI_XFER (8) +#define SPI_XFER_TIMEOUT_MS (250) + +struct gsi_desc_cb { + struct spi_geni_master *mas; + struct spi_transfer *xfer; +}; + +struct spi_geni_gsi { + dma_cookie_t tx_cookie; + dma_cookie_t rx_cookie; + struct dma_async_tx_descriptor *tx_desc; + struct dma_async_tx_descriptor *rx_desc; + struct gsi_desc_cb tx_cb; + struct gsi_desc_cb rx_cb; +}; + struct spi_geni_master { struct geni_se se; struct device *dev; @@ -84,6 +110,13 @@ struct spi_geni_master { int irq; bool cs_flag; bool abort_failed; + struct spi_geni_gsi *gsi; + struct dma_chan *tx; + struct dma_chan *rx; + struct completion tx_cb; + struct completion rx_cb; + int cur_xfer_mode; + int num_xfers; }; static int get_spi_clk_cfg(unsigned int speed_hz, @@ -330,18 +363,230 @@ static int setup_fifo_params(struct spi_device *spi_slv, return geni_spi_set_clock_and_bw(mas, spi_slv->max_speed_hz); } +static void +spi_gsi_callback_result(void *cb, const struct dmaengine_result *result, bool tx) +{ + struct gsi_desc_cb *gsi = cb; + + if (result->result != DMA_TRANS_NOERROR) { + dev_err(gsi->mas->dev, "%s DMA txn failed\n", tx ? "TX" : "RX"); + return; + } + + if (!result->residue) { + if (tx) + complete(&gsi->mas->tx_cb); + else + complete(&gsi->mas->rx_cb); + } else { + dev_err(gsi->mas->dev, "%s DMA txn has pending %d data\n", + tx ? "TX" : "RX", result->residue); + } +} + +static void +spi_gsi_rx_callback_result(void *cb, const struct dmaengine_result *result) +{ + spi_gsi_callback_result(cb, result, false); +} + +static void +spi_gsi_tx_callback_result(void *cb, const struct dmaengine_result *result) +{ + spi_gsi_callback_result(cb, result, true); +} + +static int setup_gsi_xfer(struct spi_transfer *xfer, struct spi_geni_master *mas, + struct spi_device *spi_slv, struct spi_master *spi) +{ + unsigned long flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK; + struct spi_geni_gsi *gsi; + struct dma_slave_config config = {}; + struct gpi_spi_config peripheral = {}; + unsigned long timeout, jiffies; + int ret, i; + + config.peripheral_config = &peripheral; + config.peripheral_size = sizeof(peripheral); + peripheral.set_config = true; + + if (xfer->bits_per_word != mas->cur_bits_per_word || + xfer->speed_hz != mas->cur_speed_hz) { + mas->cur_bits_per_word = xfer->bits_per_word; + mas->cur_speed_hz = xfer->speed_hz; + } + + if (!(mas->cur_bits_per_word % MIN_WORD_LEN)) { + peripheral.rx_len = ((xfer->len << 3) / mas->cur_bits_per_word); + } else { + int bytes_per_word = (mas->cur_bits_per_word / BITS_PER_BYTE) + 1; + + peripheral.rx_len = (xfer->len / bytes_per_word); + } + + if (xfer->tx_buf && xfer->rx_buf) { + peripheral.cmd = SPI_DUPLEX; + } else if (xfer->tx_buf) { + peripheral.cmd = SPI_TX; + peripheral.rx_len = 0; + } else if (xfer->rx_buf) { + peripheral.cmd = SPI_RX; + } + + if (spi_slv->mode & SPI_LOOP) + peripheral.loopback_en = true; + if (spi_slv->mode & SPI_CPOL) + peripheral.clock_pol_high = true; + if (spi_slv->mode & SPI_CPHA) + peripheral.data_pol_high = true; + + peripheral.cs = spi_slv->chip_select; + peripheral.pack_en = true; + peripheral.word_len = xfer->bits_per_word - MIN_WORD_LEN; + peripheral.fragmentation = FRAGMENTATION; + + ret = get_spi_clk_cfg(mas->cur_speed_hz, mas, + &peripheral.clk_src, 
&peripheral.clk_div); + if (ret) { + dev_err(mas->dev, "Err in get_spi_clk_cfg() :%d\n", ret); + return ret; + } + + gsi = &mas->gsi[mas->num_xfers]; + gsi->rx_cb.mas = mas; + gsi->rx_cb.xfer = xfer; + + if (peripheral.cmd & SPI_RX) { + dmaengine_slave_config(mas->rx, &config); + gsi->rx_desc = dmaengine_prep_slave_sg(mas->rx, xfer->rx_sg.sgl, xfer->rx_sg.nents, + DMA_DEV_TO_MEM, flags); + if (!gsi->rx_desc) { + dev_err(mas->dev, "Err setting up rx desc\n"); + return -EIO; + } + gsi->rx_desc->callback_result = spi_gsi_rx_callback_result; + gsi->rx_desc->callback_param = &gsi->rx_cb; + } + + dmaengine_slave_config(mas->tx, &config); + gsi->tx_desc = dmaengine_prep_slave_sg(mas->tx, xfer->tx_sg.sgl, xfer->tx_sg.nents, + DMA_MEM_TO_DEV, flags); + if (!gsi->tx_desc) { + dev_err(mas->dev, "Err setting up tx desc\n"); + return -EIO; + } + + gsi->tx_cb.mas = mas; + gsi->tx_cb.xfer = xfer; + gsi->tx_desc->callback_result = spi_gsi_tx_callback_result; + gsi->tx_desc->callback_param = &gsi->tx_cb; + + if (peripheral.cmd & SPI_RX) + gsi->rx_cookie = dmaengine_submit(gsi->rx_desc); + gsi->tx_cookie = dmaengine_submit(gsi->tx_desc); + + if (peripheral.cmd & SPI_RX) + dma_async_issue_pending(mas->rx); + dma_async_issue_pending(mas->tx); + mas->num_xfers++; + + jiffies = msecs_to_jiffies(SPI_XFER_TIMEOUT_MS); + timeout = wait_for_completion_timeout(&mas->tx_cb, jiffies); + if (timeout <= 0) { + dev_err(mas->dev, "Tx[%d] timeout%lu\n", i, timeout); + ret = -ETIMEDOUT; + goto err_gsi_geni_transfer_one; + } + + if (peripheral.cmd & SPI_RX) { + jiffies = msecs_to_jiffies(SPI_XFER_TIMEOUT_MS); + timeout = wait_for_completion_timeout(&mas->rx_cb, jiffies); + if (timeout <= 0) { + dev_err(mas->dev, "Rx[%d] timeout%lu\n", i, timeout); + ret = -ETIMEDOUT; + goto err_gsi_geni_transfer_one; + } + } + + spi_finalize_current_transfer(spi); + return 0; + +err_gsi_geni_transfer_one: + dmaengine_terminate_all(mas->tx); + return ret; +} + +static bool geni_can_dma(struct spi_controller *ctlr, + struct spi_device *slv, struct spi_transfer *xfer) +{ + struct spi_geni_master *mas = spi_master_get_devdata(slv->master); + + /* check if dma is supported */ + if (mas->cur_xfer_mode == GENI_GPI_DMA) + return true; + + return false; +} + static int spi_geni_prepare_message(struct spi_master *spi, struct spi_message *spi_msg) { - int ret; struct spi_geni_master *mas = spi_master_get_devdata(spi); + int ret; - if (spi_geni_is_abort_still_pending(mas)) - return -EBUSY; + switch (mas->cur_xfer_mode) { + case GENI_SE_FIFO: + if (spi_geni_is_abort_still_pending(mas)) + return -EBUSY; + ret = setup_fifo_params(spi_msg->spi, spi); + if (ret) + dev_err(mas->dev, "Couldn't select mode %d\n", ret); + return ret; - ret = setup_fifo_params(spi_msg->spi, spi); - if (ret) - dev_err(mas->dev, "Couldn't select mode %d\n", ret); + case GENI_GPI_DMA: + mas->num_xfers = 0; + reinit_completion(&mas->tx_cb); + reinit_completion(&mas->rx_cb); + memset(mas->gsi, 0, (sizeof(struct spi_geni_gsi) * NUM_SPI_XFER)); + + return 0; + } + + dev_err(mas->dev, "Mode not supported %d", mas->cur_xfer_mode); + return -EINVAL; +} + +static int spi_geni_grab_gpi_chan(struct spi_geni_master *mas) +{ + size_t gsi_sz; + int ret; + + mas->tx = dma_request_chan(mas->dev, "tx"); + if (IS_ERR_OR_NULL(mas->tx)) { + dev_err(mas->dev, "Failed to get tx DMA ch %ld", PTR_ERR(mas->tx)); + ret = PTR_ERR(mas->tx); + goto err_tx; + } + mas->rx = dma_request_chan(mas->dev, "rx"); + if (IS_ERR_OR_NULL(mas->rx)) { + dev_err(mas->dev, "Failed to get rx DMA ch %ld", PTR_ERR(mas->rx)); + 
ret = PTR_ERR(mas->rx); + goto err_rx; + } + + gsi_sz = sizeof(struct spi_geni_gsi) * NUM_SPI_XFER; + mas->gsi = devm_kzalloc(mas->dev, gsi_sz, GFP_KERNEL); + if (IS_ERR_OR_NULL(mas->gsi)) + goto err_mem; + return 0; + +err_mem: + dma_release_channel(mas->rx); +err_rx: + dma_release_channel(mas->tx); +err_tx: + mas->tx = NULL; + mas->rx = NULL; return ret; } @@ -349,15 +594,15 @@ static int spi_geni_init(struct spi_geni_master *mas) { struct geni_se *se = &mas->se; unsigned int proto, major, minor, ver; - u32 spi_tx_cfg; + u32 spi_tx_cfg, fifo_disable; + int ret = -ENXIO; pm_runtime_get_sync(mas->dev); proto = geni_se_read_proto(se); if (proto != GENI_SE_SPI) { dev_err(mas->dev, "Invalid proto %d\n", proto); - pm_runtime_put(mas->dev); - return -ENXIO; + goto out_pm; } mas->tx_fifo_depth = geni_se_get_tx_fifo_depth(se); @@ -380,15 +625,38 @@ static int spi_geni_init(struct spi_geni_master *mas) else mas->oversampling = 1; - geni_se_select_mode(se, GENI_SE_FIFO); + fifo_disable = readl(se->base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; + switch (fifo_disable) { + case 1: + ret = spi_geni_grab_gpi_chan(mas); + if (!ret) { /* success case */ + mas->cur_xfer_mode = GENI_GPI_DMA; + geni_se_select_mode(se, GENI_GPI_DMA); + dev_dbg(mas->dev, "Using GPI DMA mode for SPI\n"); + break; + } + /* + * in case of failure to get dma channel, we can till do the + * FIFO mode, so fallthrough + */ + dev_warn(mas->dev, "FIFO mode disabled, but couldn't get DMA, fall back to FIFO mode\n"); + fallthrough; + + case 0: + mas->cur_xfer_mode = GENI_SE_FIFO; + geni_se_select_mode(se, GENI_SE_FIFO); + ret = 0; + break; + } /* We always control CS manually */ spi_tx_cfg = readl(se->base + SE_SPI_TRANS_CFG); spi_tx_cfg &= ~CS_TOGGLE; writel(spi_tx_cfg, se->base + SE_SPI_TRANS_CFG); +out_pm: pm_runtime_put(mas->dev); - return 0; + return ret; } static unsigned int geni_byte_per_fifo_word(struct spi_geni_master *mas) @@ -575,8 +843,11 @@ static int spi_geni_transfer_one(struct spi_master *spi, if (!xfer->len) return 0; - setup_fifo_xfer(xfer, mas, slv->mode, spi); - return 1; + if (mas->cur_xfer_mode == GENI_SE_FIFO) { + setup_fifo_xfer(xfer, mas, slv->mode, spi); + return 1; + } + return setup_gsi_xfer(xfer, mas, slv, spi); } static irqreturn_t geni_spi_isr(int irq, void *data) @@ -671,6 +942,15 @@ static int spi_geni_probe(struct platform_device *pdev) if (irq < 0) return irq; + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (ret) { + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + if (ret) { + dev_err(&pdev->dev, "could not set DMA mask\n"); + return ret; + } + } + base = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(base)) return PTR_ERR(base); @@ -710,14 +990,17 @@ static int spi_geni_probe(struct platform_device *pdev) spi->max_speed_hz = 50000000; spi->prepare_message = spi_geni_prepare_message; spi->transfer_one = spi_geni_transfer_one; + spi->can_dma = geni_can_dma; + spi->dma_map_dev = mas->dev->parent; spi->auto_runtime_pm = true; spi->handle_err = handle_fifo_timeout; - spi->set_cs = spi_geni_set_cs; spi->use_gpio_descriptors = true; init_completion(&mas->cs_done); init_completion(&mas->cancel_done); init_completion(&mas->abort_done); + init_completion(&mas->tx_cb); + init_completion(&mas->rx_cb); spin_lock_init(&mas->lock); pm_runtime_use_autosuspend(&pdev->dev); pm_runtime_set_autosuspend_delay(&pdev->dev, 250); @@ -738,6 +1021,14 @@ static int spi_geni_probe(struct platform_device *pdev) if (ret) goto spi_geni_probe_runtime_disable; + /* + * check the mode 
supported and set_cs for fifo mode only + * for dma (gsi) mode, the gsi will set cs based on params passed in + * TRE + */ + if (mas->cur_xfer_mode == GENI_SE_FIFO) + spi->set_cs = spi_geni_set_cs; + ret = request_irq(mas->irq, geni_spi_isr, 0, dev_name(dev), spi); if (ret) goto spi_geni_probe_runtime_disable; @@ -754,6 +1045,14 @@ static int spi_geni_probe(struct platform_device *pdev) return ret; } +static void spi_geni_release_dma_chan(struct spi_geni_master *mas) +{ + if (mas->rx) + dma_release_channel(mas->rx); + if (mas->tx) + dma_release_channel(mas->tx); +} + static int spi_geni_remove(struct platform_device *pdev) { struct spi_master *spi = platform_get_drvdata(pdev); @@ -762,6 +1061,8 @@ static int spi_geni_remove(struct platform_device *pdev) /* Unregister _before_ disabling pm_runtime() so we stop transfers */ spi_unregister_master(spi); + spi_geni_release_dma_chan(mas); + free_irq(mas->irq, spi); pm_runtime_disable(&pdev->dev); return 0;

From patchwork Fri Jun 25 05:22:13 2021
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 12343635
From: Vinod Koul
To: Bjorn Andersson, Mark Brown, Wolfram Sang
Cc: Vinod Koul, Andy Gross, Sumit Semwal, Douglas Anderson, Matthias Kaehlcke, linux-spi@vger.kernel.org, linux-i2c@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 5/5] i2c: qcom-geni: Add support for GPI DMA
Date: Fri, 25 Jun 2021 10:52:13 +0530
Message-Id: <20210625052213.32260-6-vkoul@kernel.org>
In-Reply-To: <20210625052213.32260-1-vkoul@kernel.org>
References: <20210625052213.32260-1-vkoul@kernel.org>

This adds the capability to use GSI DMA for I2C transfers.
Signed-off-by: Vinod Koul --- drivers/i2c/busses/i2c-qcom-geni.c | 309 ++++++++++++++++++++++++++++- 1 file changed, 301 insertions(+), 8 deletions(-) diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c index 07b710a774df..839802f04b75 100644 --- a/drivers/i2c/busses/i2c-qcom-geni.c +++ b/drivers/i2c/busses/i2c-qcom-geni.c @@ -3,7 +3,9 @@ #include #include +#include #include +#include #include #include #include @@ -48,6 +50,8 @@ #define LOW_COUNTER_SHFT 10 #define CYCLE_COUNTER_MSK GENMASK(9, 0) +#define I2C_PACK_EN (BIT(0) | BIT(1)) + enum geni_i2c_err_code { GP_IRQ0, NACK, @@ -72,6 +76,11 @@ enum geni_i2c_err_code { #define XFER_TIMEOUT HZ #define RST_TIMEOUT HZ +enum i2c_se_mode { + I2C_FIFO_SE_DMA, + I2C_GPI_DMA, +}; + struct geni_i2c_dev { struct geni_se se; u32 tx_wm; @@ -89,6 +98,17 @@ struct geni_i2c_dev { void *dma_buf; size_t xfer_len; dma_addr_t dma_addr; + struct dma_chan *tx_c; + struct dma_chan *rx_c; + dma_cookie_t rx_cookie, tx_cookie; + dma_addr_t tx_addr; + dma_addr_t rx_addr; + bool cfg_sent; + struct dma_async_tx_descriptor *tx_desc; + struct dma_async_tx_descriptor *rx_desc; + void *tx_buf; + void *rx_buf; + enum i2c_se_mode se_mode; }; struct geni_i2c_err_log { @@ -456,6 +476,213 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg, return gi2c->err; } +static void i2c_gsi_cb_result(void *cb, const struct dmaengine_result *result) +{ + struct geni_i2c_dev *gi2c = cb; + + if (result->result != DMA_TRANS_NOERROR) { + dev_err(gi2c->se.dev, "DMA txn failed:%d\n", result->result); + return; + } + + if (result->residue) + dev_dbg(gi2c->se.dev, "DMA xfer has pending: %d\n", result->residue); + + complete(&gi2c->done); +} + +static void geni_i2c_gpi_unmap(struct geni_i2c_dev *gi2c, struct i2c_msg *msg) +{ + if (gi2c->tx_buf) { + dma_unmap_single(gi2c->se.dev->parent, gi2c->tx_addr, msg->len, DMA_TO_DEVICE); + i2c_put_dma_safe_msg_buf(gi2c->tx_buf, msg, false); + gi2c->tx_buf = NULL; + } + + if (gi2c->rx_buf) { + dma_unmap_single(gi2c->se.dev->parent, gi2c->rx_addr, msg->len, DMA_FROM_DEVICE); + i2c_put_dma_safe_msg_buf(gi2c->rx_buf, msg, false); + gi2c->rx_buf = NULL; + } +} + +static int geni_i2c_gpi_rx(struct geni_i2c_dev *gi2c, + struct i2c_msg *msg, struct dma_slave_config *config) +{ + struct gpi_i2c_config *peripheral; + unsigned int flags; + void *dma_buf; + int ret; + + peripheral = config->peripheral_config; + + dma_buf = i2c_get_dma_safe_msg_buf(msg, 1); + if (!dma_buf) + return -ENOMEM; + + gi2c->rx_addr = dma_map_single(gi2c->se.dev->parent, dma_buf, msg->len, DMA_FROM_DEVICE); + if (dma_mapping_error(gi2c->se.dev->parent, gi2c->rx_addr)) { + i2c_put_dma_safe_msg_buf(dma_buf, msg, false); + return -ENOMEM; + } + + peripheral->rx_len = msg->len; + peripheral->op = I2C_READ; + + ret = dmaengine_slave_config(gi2c->rx_c, config); + if (ret) { + dev_err(gi2c->se.dev, "rx dma config error:%d\n", ret); + goto err_config; + } + + peripheral->set_config = false; + peripheral->multi_msg = true; + flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK; + + gi2c->rx_desc = dmaengine_prep_slave_single(gi2c->rx_c, gi2c->rx_addr, + msg->len, DMA_DEV_TO_MEM, flags); + if (!gi2c->rx_desc) { + dev_err(gi2c->se.dev, "prep_slave_sg for rx failed\n"); + ret = -EIO; + goto err_config; + } + + gi2c->rx_desc->callback_result = i2c_gsi_cb_result; + gi2c->rx_desc->callback_param = gi2c; + gi2c->rx_buf = dma_buf; + + return 0; + +err_config: + dma_unmap_single(gi2c->se.dev->parent, gi2c->rx_addr, msg->len, DMA_FROM_DEVICE); + 
i2c_put_dma_safe_msg_buf(dma_buf, msg, false); + return ret; +} + +static int geni_i2c_gpi_tx(struct geni_i2c_dev *gi2c, + struct i2c_msg *msg, struct dma_slave_config *config) +{ + struct gpi_i2c_config *peripheral; + unsigned int flags; + void *dma_buf; + int ret; + + peripheral = config->peripheral_config; + + dma_buf = i2c_get_dma_safe_msg_buf(msg, 1); + if (!dma_buf) + return -ENOMEM; + + gi2c->tx_addr = dma_map_single(gi2c->se.dev->parent, dma_buf, msg->len, DMA_TO_DEVICE); + if (dma_mapping_error(gi2c->se.dev->parent, gi2c->tx_addr)) { + i2c_put_dma_safe_msg_buf(dma_buf, msg, false); + return -ENOMEM; + } + + peripheral->op = I2C_WRITE; + + ret = dmaengine_slave_config(gi2c->tx_c, config); + if (ret) { + dev_err(gi2c->se.dev, "tx dma config error:%d\n", ret); + goto err_config; + } + + peripheral->set_config = false; + peripheral->multi_msg = true; + flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK; + + gi2c->tx_desc = dmaengine_prep_slave_single(gi2c->tx_c, gi2c->tx_addr, + msg->len, DMA_MEM_TO_DEV, flags); + if (!gi2c->tx_desc) { + dev_err(gi2c->se.dev, "prep_slave_sg for tx failed\n"); + ret = -EIO; + goto err_config; + } + + gi2c->tx_desc->callback_result = i2c_gsi_cb_result; + gi2c->tx_desc->callback_param = gi2c; + gi2c->tx_buf = dma_buf; + + return 0; + +err_config: + dma_unmap_single(gi2c->se.dev->parent, gi2c->tx_addr, msg->len, DMA_FROM_DEVICE); + i2c_put_dma_safe_msg_buf(dma_buf, msg, false); + return ret; +} + +static int geni_i2c_gsi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], int num) +{ + struct dma_slave_config config = {}; + struct gpi_i2c_config peripheral = {}; + int i, ret = 0, timeout, stretch; + + config.peripheral_config = &peripheral; + config.peripheral_size = sizeof(peripheral); + + if (!gi2c->cfg_sent) { + const struct geni_i2c_clk_fld *itr = gi2c->clk_fld; + + gi2c->cfg_sent = true; + peripheral.pack_enable = I2C_PACK_EN; + peripheral.cycle_count = itr->t_cycle_cnt; + peripheral.high_count = itr->t_high_cnt; + peripheral.low_count = itr->t_low_cnt; + peripheral.clk_div = itr->clk_div; + peripheral.set_config = true; + } + peripheral.multi_msg = false; + + for (i = 0; i < num; i++) { + gi2c->cur = &msgs[i]; + dev_dbg(gi2c->se.dev, "msg[%d].len:%d\n", i, gi2c->cur->len); + + stretch = (i < (num - 1)); + peripheral.addr = msgs[i].addr; + peripheral.stretch = stretch; + + if (msgs[i].flags & I2C_M_RD) { + ret = geni_i2c_gpi_rx(gi2c, &msgs[i], &config); + if (ret) { + dev_err(gi2c->se.dev, "Rx txn setting failed: %d\n", ret); + goto err; + } + + /* Issue RX */ + gi2c->rx_cookie = dmaengine_submit(gi2c->rx_desc); + dma_async_issue_pending(gi2c->rx_c); + } + + ret = geni_i2c_gpi_tx(gi2c, &msgs[i], &config); + if (ret) { + dev_err(gi2c->se.dev, "Tx txn setting failed: %d\n", ret); + goto err; + } + + /* Issue TX */ + gi2c->tx_cookie = dmaengine_submit(gi2c->tx_desc); + dma_async_issue_pending(gi2c->tx_c); + + timeout = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT); + if (!timeout) { + dev_err(gi2c->se.dev, "I2C timeout gsi flags:%d addr:0x%x\n", + gi2c->cur->flags, gi2c->cur->addr); + gi2c->err = -ETIMEDOUT; + goto err; + } + + geni_i2c_gpi_unmap(gi2c, &msgs[i]); + } + + return 0; + +err: + dmaengine_terminate_sync(gi2c->rx_c); + dmaengine_terminate_sync(gi2c->tx_c); + geni_i2c_gpi_unmap(gi2c, &msgs[i]); + return ret; +} + static int geni_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num) @@ -475,6 +702,12 @@ static int geni_i2c_xfer(struct i2c_adapter *adap, } qcom_geni_i2c_conf(gi2c); + + if (gi2c->se_mode == I2C_GPI_DMA) { + ret = 
geni_i2c_gsi_xfer(gi2c, msgs, num); + goto geni_i2c_txn_ret; + } + for (i = 0; i < num; i++) { u32 m_param = i < (num - 1) ? STOP_STRETCH : 0; @@ -489,6 +722,7 @@ static int geni_i2c_xfer(struct i2c_adapter *adap, if (ret) break; } +geni_i2c_txn_ret: if (ret == 0) ret = num; @@ -517,11 +751,44 @@ static const struct acpi_device_id geni_i2c_acpi_match[] = { MODULE_DEVICE_TABLE(acpi, geni_i2c_acpi_match); #endif +static void release_gpi_dma(struct geni_i2c_dev *gi2c) +{ + if (gi2c->rx_c) { + dma_release_channel(gi2c->rx_c); + gi2c->rx_c = NULL; + } + if (gi2c->tx_c) { + dma_release_channel(gi2c->tx_c); + gi2c->tx_c = NULL; + } +} + +static int setup_gpi_dma(struct geni_i2c_dev *gi2c) +{ + geni_se_select_mode(&gi2c->se, GENI_GPI_DMA); + gi2c->tx_c = dma_request_chan(gi2c->se.dev, "tx"); + if (IS_ERR_OR_NULL(gi2c->tx_c)) { + dev_err(gi2c->se.dev, "TX dma_request_slave_channel fail\n"); + return PTR_ERR(gi2c->tx_c); + } + + gi2c->rx_c = dma_request_chan(gi2c->se.dev, "rx"); + if (IS_ERR_OR_NULL(gi2c->rx_c)) { + dev_err(gi2c->se.dev, "RX dma_request_slave_channel fail\n"); + dma_release_channel(gi2c->tx_c); + gi2c->tx_c = NULL; + return PTR_ERR(gi2c->rx_c); + } + + dev_dbg(gi2c->se.dev, "Grabbed GPI dma channels\n"); + return 0; +} + static int geni_i2c_probe(struct platform_device *pdev) { struct geni_i2c_dev *gi2c; struct resource *res; - u32 proto, tx_depth; + u32 proto, tx_depth, fifo_disable; int ret; struct device *dev = &pdev->dev; @@ -601,16 +868,43 @@ static int geni_i2c_probe(struct platform_device *pdev) return ret; } proto = geni_se_read_proto(&gi2c->se); - tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se); if (proto != GENI_SE_I2C) { dev_err(dev, "Invalid proto %d\n", proto); geni_se_resources_off(&gi2c->se); return -ENXIO; } - gi2c->tx_wm = tx_depth - 1; - geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth); - geni_se_config_packing(&gi2c->se, BITS_PER_BYTE, PACKING_BYTES_PW, - true, true, true); + + fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; + + switch (fifo_disable) { + case 1: + ret = setup_gpi_dma(gi2c); + if (!ret) { /* success case */ + gi2c->se_mode = I2C_GPI_DMA; + geni_se_select_mode(&gi2c->se, GENI_GPI_DMA); + dev_dbg(dev, "Using GPI DMA mode for I2C\n"); + break; + } + /* + * in case of failure to get dma channel, we can till do the + * FIFO mode, so fallthrough + */ + dev_warn(dev, "FIFO mode disabled, but couldn't get DMA, fall back to FIFO mode\n"); + fallthrough; + + case 0: + gi2c->se_mode = I2C_FIFO_SE_DMA; + tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se); + gi2c->tx_wm = tx_depth - 1; + geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth); + geni_se_config_packing(&gi2c->se, BITS_PER_BYTE, + PACKING_BYTES_PW, true, true, true); + + dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth); + + break; + } + ret = geni_se_resources_off(&gi2c->se); if (ret) { dev_err(dev, "Error turning off resources %d\n", ret); @@ -621,8 +915,6 @@ static int geni_i2c_probe(struct platform_device *pdev) if (ret) return ret; - dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth); - gi2c->suspended = 1; pm_runtime_set_suspended(gi2c->se.dev); pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY); @@ -645,6 +937,7 @@ static int geni_i2c_remove(struct platform_device *pdev) { struct geni_i2c_dev *gi2c = platform_get_drvdata(pdev); + release_gpi_dma(gi2c); i2c_del_adapter(&gi2c->adap); pm_runtime_disable(gi2c->se.dev); return 0;
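
For reference, the spi-geni-qcom and i2c-qcom-geni changes in this series select the transfer mode the same way at probe time: read GENI_IF_DISABLE_RO, and if FIFO_IF_DISABLE is set, try to grab the GPI (GSI) DMA channels, falling back to FIFO mode when the channels cannot be acquired. The condensed sketch below illustrates that shared pattern; it is not code from the patches, and the helper name my_geni_pick_xfer_mode() and its exact error handling are illustrative assumptions.

/*
 * Condensed sketch of the probe-time mode selection used by both drivers
 * in this series. Illustrative only: my_geni_pick_xfer_mode() is a
 * hypothetical helper, not a function added by these patches.
 */
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/qcom-geni-se.h>

static enum geni_se_xfer_mode my_geni_pick_xfer_mode(struct geni_se *se,
						     struct dma_chan **tx,
						     struct dma_chan **rx)
{
	/* Firmware sets FIFO_IF_DISABLE when the FIFO path is not available */
	if (readl(se->base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE) {
		*tx = dma_request_chan(se->dev, "tx");
		*rx = dma_request_chan(se->dev, "rx");
		if (!IS_ERR(*tx) && !IS_ERR(*rx)) {
			geni_se_select_mode(se, GENI_GPI_DMA);
			return GENI_GPI_DMA;
		}
		/* Could not get both GPI channels: drop any we did get */
		if (!IS_ERR(*tx))
			dma_release_channel(*tx);
		if (!IS_ERR(*rx))
			dma_release_channel(*rx);
		*tx = NULL;
		*rx = NULL;
	}

	/* Default (and fallback) is programmed-IO FIFO mode */
	geni_se_select_mode(se, GENI_SE_FIFO);
	return GENI_SE_FIFO;
}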