From patchwork Tue Jan 12 08:43:51 2016
X-Patchwork-Submitter: Sricharan Ramabadhran
X-Patchwork-Id: 8015301
From: Sricharan R
To: devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	agross@codeaurora.org, linux-kernel@vger.kernel.org,
	linux-i2c@vger.kernel.org, iivanov@mm-sol.com, galak@codeaurora.org,
	dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	andy.gross@linaro.org, ntelkar@codeaurora.org, wsa@the-dreams.de
Cc: srichara@codeaurora.org
Subject: [PATCH V6 4/6] i2c: qup: Add bam dma capabilities
Date: Tue, 12 Jan 2016 14:13:51 +0530
Message-Id: <1452588233-1803-5-git-send-email-sricharan@codeaurora.org>
In-Reply-To: <1452588233-1803-1-git-send-email-sricharan@codeaurora.org>
References: <1452588233-1803-1-git-send-email-sricharan@codeaurora.org>
X-Mailer: git-send-email 1.8.2.1

QUP cores can be attached to a BAM module, which acts as a DMA engine
for the QUP core. When DMA with BAM is enabled, data to be transmitted
is written to the output FIFO through the BAM consumer pipe, and
received data is read from the input FIFO through the BAM producer
pipe. With BAM, the qup-i2c core can transfer more than 256 bytes
without an intervening 'stop', which is not possible otherwise.

Signed-off-by: Sricharan R
Reviewed-by: Andy Gross
---
 drivers/i2c/busses/i2c-qup.c | 448 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 435 insertions(+), 13 deletions(-)
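For illustration only (not part of the patch): with the qup_i2c_xfer_v2()
changes below, a set of messages is routed through the BAM path when every
message is larger than the QUP output/input FIFOs and none of the buffers is
vmalloc'ed. A minimal client-side sketch of a transfer that benefits, using
only the generic i2c_transfer() API; the 0x3e address, the 1024-byte length
and the helper name are made up, and the QUP FIFOs are assumed to be smaller
than the message:

#include <linux/i2c.h>
#include <linux/slab.h>

/*
 * Hypothetical example: a single 1024-byte write ended by one 'stop'.
 * Per the commit message, a transfer like this (more than 256 bytes with
 * no intermediate 'stop') needs the BAM path; the driver picks DMA here
 * because the message is larger than the QUP FIFOs and the buffer is
 * kmalloc'ed rather than vmalloc'ed.
 */
static int example_bulk_write(struct i2c_adapter *adap)
{
	struct i2c_msg msg;
	u8 *buf;
	int ret;

	buf = kmalloc(1024, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... fill buf with the payload ... */

	msg.addr = 0x3e;	/* made-up client address */
	msg.flags = 0;		/* plain write */
	msg.len = 1024;
	msg.buf = buf;

	/* one message, one stop; i2c_transfer() returns 1 on success */
	ret = i2c_transfer(adap, &msg, 1);

	kfree(buf);
	return ret == 1 ? 0 : (ret < 0 ? ret : -EIO);
}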
diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
index f9009d6..137c83c 100644
--- a/drivers/i2c/busses/i2c-qup.c
+++ b/drivers/i2c/busses/i2c-qup.c
@@ -14,8 +14,12 @@
  *
  */
 
+#include
 #include
 #include
+#include
+#include
+#include
 #include
 #include
 #include
@@ -24,6 +28,7 @@
 #include
 #include
 #include
+#include
 
 /* QUP Registers */
 #define QUP_CONFIG		0x000
@@ -33,6 +38,7 @@
 #define QUP_OPERATIONAL		0x018
 #define QUP_ERROR_FLAGS		0x01c
 #define QUP_ERROR_FLAGS_EN	0x020
+#define QUP_OPERATIONAL_MASK	0x028
 #define QUP_HW_VERSION		0x030
 #define QUP_MX_OUTPUT_CNT	0x100
 #define QUP_OUT_FIFO_BASE	0x110
@@ -52,6 +58,7 @@
 
 #define QUP_STATE_VALID		BIT(2)
 #define QUP_I2C_MAST_GEN	BIT(4)
+#define QUP_I2C_FLUSH		BIT(6)
 
 #define QUP_OPERATIONAL_RESET	0x000ff0
 #define QUP_I2C_STATUS_RESET	0xfffffc
@@ -77,7 +84,10 @@
 
 /* Packing/Unpacking words in FIFOs, and IO modes */
 #define QUP_OUTPUT_BLK_MODE	(1 << 10)
+#define QUP_OUTPUT_BAM_MODE	(3 << 10)
 #define QUP_INPUT_BLK_MODE	(1 << 12)
+#define QUP_INPUT_BAM_MODE	(3 << 12)
+#define QUP_BAM_MODE		(QUP_OUTPUT_BAM_MODE | QUP_INPUT_BAM_MODE)
 #define QUP_UNPACK_EN		BIT(14)
 #define QUP_PACK_EN		BIT(15)
 
@@ -94,6 +104,8 @@
 #define QUP_TAG_DATA		(2 << 8)
 #define QUP_TAG_STOP		(3 << 8)
 #define QUP_TAG_REC		(4 << 8)
+#define QUP_BAM_INPUT_EOT	0x93
+#define QUP_BAM_FLUSH_STOP	0x96
 
 /* QUP v2 tags */
 #define QUP_TAG_V2_START	0x81
@@ -114,6 +126,12 @@
 #define ONE_BYTE			0x1
 #define QUP_I2C_MX_CONFIG_DURING_RUN	BIT(31)
 
+#define MX_TX_RX_LEN			SZ_64K
+#define MX_BLOCKS			(MX_TX_RX_LEN / QUP_READ_LIMIT)
+
+/* Max timeout in ms for 32k bytes */
+#define TOUT_MAX			300
+
 struct qup_i2c_block {
 	int		count;
 	int		pos;
@@ -123,6 +141,17 @@ struct qup_i2c_block {
 	u8		tags[6];
 };
 
+struct qup_i2c_tag {
+	u8 *start;
+	dma_addr_t addr;
+};
+
+struct qup_i2c_bam {
+	struct qup_i2c_tag tag;
+	struct dma_chan *dma;
+	struct scatterlist *sg;
+};
+
 struct qup_i2c_dev {
 	struct device		*dev;
 	void __iomem		*base;
@@ -154,6 +183,13 @@ struct qup_i2c_dev {
 	/* To configure when bus is in run state */
 	int			config_run;
 
+	/* dma parameters */
+	bool			is_dma;
+	struct			dma_pool *dpool;
+	struct			qup_i2c_tag start_tag;
+	struct			qup_i2c_bam brx;
+	struct			qup_i2c_bam btx;
+
 	struct completion	xfer;
 };
 
@@ -230,6 +266,14 @@ static int qup_i2c_poll_state(struct qup_i2c_dev *qup, u32 req_state)
 	return qup_i2c_poll_state_mask(qup, req_state, QUP_STATE_MASK);
 }
 
+static void qup_i2c_flush(struct qup_i2c_dev *qup)
+{
+	u32 val = readl(qup->base + QUP_STATE);
+
+	val |= QUP_I2C_FLUSH;
+	writel(val, qup->base + QUP_STATE);
+}
+
 static int qup_i2c_poll_state_valid(struct qup_i2c_dev *qup)
 {
 	return qup_i2c_poll_state_mask(qup, 0, 0);
@@ -437,12 +481,14 @@ static int qup_i2c_get_data_len(struct qup_i2c_dev *qup)
 }
 
 static int qup_i2c_set_tags(u8 *tags, struct qup_i2c_dev *qup,
-			    struct i2c_msg *msg)
+			    struct i2c_msg *msg, int is_dma)
 {
 	u16 addr = (msg->addr << 1) | ((msg->flags & I2C_M_RD) == I2C_M_RD);
 	int len = 0;
 	int data_len;
 
+	int last = (qup->blk.pos == (qup->blk.count - 1)) && (qup->is_last);
+
 	if (qup->blk.pos == 0) {
 		tags[len++] = QUP_TAG_V2_START;
 		tags[len++] = addr & 0xff;
@@ -452,7 +498,7 @@ static int qup_i2c_set_tags(u8 *tags, struct qup_i2c_dev *qup,
 	}
 
 	/* Send _STOP commands for the last block */
-	if ((qup->blk.pos == (qup->blk.count - 1)) && qup->is_last) {
+	if (last) {
 		if (msg->flags & I2C_M_RD)
 			tags[len++] = QUP_TAG_V2_DATARD_STOP;
 		else
@@ -472,6 +518,11 @@ static int qup_i2c_set_tags(u8 *tags, struct qup_i2c_dev *qup,
 	else
 		tags[len++] = data_len;
 
+	if ((msg->flags & I2C_M_RD) && last && is_dma) {
+		tags[len++] = QUP_BAM_INPUT_EOT;
+		tags[len++] = QUP_BAM_FLUSH_STOP;
+	}
+
 	return len;
 }
 
@@ -480,7 +531,7 @@ static int qup_i2c_issue_xfer_v2(struct qup_i2c_dev *qup, struct i2c_msg *msg)
 	int data_len = 0, tag_len, index;
 	int ret;
 
-	tag_len = qup_i2c_set_tags(qup->blk.tags, qup, msg);
+	tag_len = qup_i2c_set_tags(qup->blk.tags, qup, msg, 0);
 	index = msg->len - qup->blk.data_len;
 
 	/* only tags are written for read */
@@ -494,6 +545,283 @@ static int qup_i2c_issue_xfer_v2(struct qup_i2c_dev *qup, struct i2c_msg *msg)
 	return ret;
 }
 
+static void qup_i2c_bam_cb(void *data)
+{
+	struct qup_i2c_dev *qup = data;
+
+	complete(&qup->xfer);
+}
+
+void qup_sg_set_buf(struct scatterlist *sg, void *buf, struct qup_i2c_tag *tg,
+		    unsigned int buflen, struct qup_i2c_dev *qup,
+		    int map, int dir)
+{
+	sg_set_buf(sg, buf, buflen);
+	dma_map_sg(qup->dev, sg, 1, dir);
+
+	if (!map)
+		sg_dma_address(sg) = tg->addr + ((u8 *)buf - tg->start);
+}
+
+static void qup_i2c_rel_dma(struct qup_i2c_dev *qup)
+{
+	if (qup->btx.dma)
+		dma_release_channel(qup->btx.dma);
+	if (qup->brx.dma)
+		dma_release_channel(qup->brx.dma);
+	qup->btx.dma = NULL;
+	qup->brx.dma = NULL;
+}
+
+static int qup_i2c_req_dma(struct qup_i2c_dev *qup)
+{
+	int err;
+
+	if (!qup->btx.dma) {
+		qup->btx.dma = dma_request_slave_channel_reason(qup->dev, "tx");
+		if (IS_ERR(qup->btx.dma)) {
+			err = PTR_ERR(qup->btx.dma);
+			qup->btx.dma = NULL;
+			dev_err(qup->dev, "\n tx channel not available");
+			return err;
+		}
+	}
+
+	if (!qup->brx.dma) {
+		qup->brx.dma = dma_request_slave_channel_reason(qup->dev, "rx");
+		if (IS_ERR(qup->brx.dma)) {
+			dev_err(qup->dev, "\n rx channel not available");
+			err = PTR_ERR(qup->brx.dma);
+			qup->brx.dma = NULL;
+			qup_i2c_rel_dma(qup);
+			return err;
+		}
+	}
+	return 0;
+}
+
+static int qup_i2c_bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg,
+			       int num)
+{
+	struct dma_async_tx_descriptor *txd, *rxd = NULL;
+	int ret = 0, idx = 0;
+	dma_cookie_t cookie_rx, cookie_tx;
+	u32 rx_nents = 0, tx_nents = 0, len, blocks, rem;
+	u32 i, tlen, tx_len, tx_buf = 0, rx_buf = 0, off = 0;
+	u8 *tags;
+
+	while (idx < num) {
+		blocks = (msg->len + QUP_READ_LIMIT) / QUP_READ_LIMIT;
+		rem = msg->len % QUP_READ_LIMIT;
+		tx_len = 0, len = 0, i = 0;
+
+		qup->is_last = (idx == (num - 1));
+
+		qup_i2c_set_blk_data(qup, msg);
+
+		if (msg->flags & I2C_M_RD) {
+			rx_nents += (blocks * 2) + 1;
+			tx_nents += 1;
+
+			while (qup->blk.pos < blocks) {
+				/* length set to '0' implies 256 bytes */
+				tlen = (i == (blocks - 1)) ? rem : 0;
+				tags = &qup->start_tag.start[off + len];
+				len += qup_i2c_set_tags(tags, qup, msg, 1);
+
+				/* scratch buf to read the start and len tags */
+				qup_sg_set_buf(&qup->brx.sg[rx_buf++],
+					       &qup->brx.tag.start[0],
+					       &qup->brx.tag,
+					       2, qup, 0, 0);
+
+				qup_sg_set_buf(&qup->brx.sg[rx_buf++],
+					       &msg->buf[QUP_READ_LIMIT * i],
+					       NULL, tlen, qup,
+					       1, DMA_FROM_DEVICE);
+				i++;
+				qup->blk.pos = i;
+			}
+			qup_sg_set_buf(&qup->btx.sg[tx_buf++],
+				       &qup->start_tag.start[off],
+				       &qup->start_tag, len, qup, 0, 0);
+			off += len;
+			/* scratch buf to read the BAM EOT and FLUSH tags */
+			qup_sg_set_buf(&qup->brx.sg[rx_buf++],
+				       &qup->brx.tag.start[0],
+				       &qup->brx.tag, 2,
+				       qup, 0, 0);
+		} else {
+			tx_nents += (blocks * 2);
+
+			while (qup->blk.pos < blocks) {
+				tlen = (i == (blocks - 1)) ? rem : 0;
+				tags = &qup->start_tag.start[off + tx_len];
+				len = qup_i2c_set_tags(tags, qup, msg, 1);
+
+				qup_sg_set_buf(&qup->btx.sg[tx_buf++],
+					       tags,
+					       &qup->start_tag, len,
+					       qup, 0, 0);
+
+				tx_len += len;
+				qup_sg_set_buf(&qup->btx.sg[tx_buf++],
+					       &msg->buf[QUP_READ_LIMIT * i],
+					       NULL, tlen, qup, 1,
+					       DMA_TO_DEVICE);
+				i++;
+				qup->blk.pos = i;
+			}
+			off += tx_len;
+
+			if (idx == (num - 1)) {
+				len = 1;
+				if (rx_nents) {
+					qup->btx.tag.start[0] =
+						QUP_BAM_INPUT_EOT;
+					len++;
+				}
+				qup->btx.tag.start[len - 1] =
+					QUP_BAM_FLUSH_STOP;
+				qup_sg_set_buf(&qup->btx.sg[tx_buf++],
+					       &qup->btx.tag.start[0],
+					       &qup->btx.tag, len,
+					       qup, 0, 0);
+				tx_nents += 1;
+			}
+		}
+		idx++;
+		msg++;
+	}
+
+	txd = dmaengine_prep_slave_sg(qup->btx.dma, qup->btx.sg, tx_nents,
+				      DMA_MEM_TO_DEV,
+				      DMA_PREP_INTERRUPT | DMA_PREP_FENCE);
+	if (!txd) {
+		dev_err(qup->dev, "failed to get tx desc\n");
+		ret = -EINVAL;
+		goto desc_err;
+	}
+
+	if (!rx_nents) {
+		txd->callback = qup_i2c_bam_cb;
+		txd->callback_param = qup;
+	}
+
+	cookie_tx = dmaengine_submit(txd);
+	if (dma_submit_error(cookie_tx)) {
+		ret = -EINVAL;
+		goto desc_err;
+	}
+
+	dma_async_issue_pending(qup->btx.dma);
+
+	if (rx_nents) {
+		rxd = dmaengine_prep_slave_sg(qup->brx.dma, qup->brx.sg,
+					      rx_nents, DMA_DEV_TO_MEM,
+					      DMA_PREP_INTERRUPT);
+		if (!rxd) {
+			dev_err(qup->dev, "failed to get rx desc\n");
+			ret = -EINVAL;
+
+			/* abort TX descriptors */
+			dmaengine_terminate_all(qup->btx.dma);
+			goto desc_err;
+		}
+
+		rxd->callback = qup_i2c_bam_cb;
+		rxd->callback_param = qup;
+		cookie_rx = dmaengine_submit(rxd);
+		if (dma_submit_error(cookie_rx)) {
+			ret = -EINVAL;
+			goto desc_err;
+		}
+
+		dma_async_issue_pending(qup->brx.dma);
+	}
+
+	if (!wait_for_completion_timeout(&qup->xfer, TOUT_MAX * HZ)) {
+		dev_err(qup->dev, "normal trans timed out\n");
+		ret = -ETIMEDOUT;
+	}
+
+	if (ret || qup->bus_err || qup->qup_err) {
+		if (qup->bus_err & QUP_I2C_NACK_FLAG) {
+			msg--;
+			dev_err(qup->dev, "NACK from %x\n", msg->addr);
+			ret = -EIO;
+
+			if (qup_i2c_change_state(qup, QUP_RUN_STATE)) {
+				dev_err(qup->dev, "change to run state timed out");
+				return ret;
+			}
+
+			if (rx_nents)
+				writel(QUP_BAM_INPUT_EOT,
+				       qup->base + QUP_OUT_FIFO_BASE);
+
+			writel(QUP_BAM_FLUSH_STOP,
+			       qup->base + QUP_OUT_FIFO_BASE);
+
+			qup_i2c_flush(qup);
+
+			/* wait for remaining interrupts to occur */
+			if (!wait_for_completion_timeout(&qup->xfer, HZ))
+				dev_err(qup->dev, "flush timed out\n");
+
+			qup_i2c_rel_dma(qup);
+		}
+	}
+
+	dma_unmap_sg(qup->dev, qup->btx.sg, tx_nents, DMA_TO_DEVICE);
+
+	if (rx_nents)
+		dma_unmap_sg(qup->dev, qup->brx.sg, rx_nents,
+			     DMA_FROM_DEVICE);
+desc_err:
+	return ret;
+}
+
+static int qup_i2c_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg,
+			    int num)
+{
+	struct qup_i2c_dev *qup = i2c_get_adapdata(adap);
+	int ret = 0;
+
+	enable_irq(qup->irq);
+	ret = qup_i2c_req_dma(qup);
+
+	if (ret)
+		goto out;
+
+	qup->bus_err = 0;
+	qup->qup_err = 0;
+
+	writel(0, qup->base + QUP_MX_INPUT_CNT);
+	writel(0, qup->base + QUP_MX_OUTPUT_CNT);
+
+	/* set BAM mode */
+	writel(QUP_REPACK_EN | QUP_BAM_MODE, qup->base + QUP_IO_MODE);
+
+	/* mask fifo irqs */
+	writel((0x3 << 8), qup->base + QUP_OPERATIONAL_MASK);
+
+	/* set RUN STATE */
+	ret = qup_i2c_change_state(qup, QUP_RUN_STATE);
+	if (ret)
+		goto out;
+
+	writel(qup->clk_ctl, qup->base + QUP_I2C_CLK_CTL);
+
+	qup->msg = msg;
+	ret = qup_i2c_bam_do_xfer(qup, qup->msg, num);
+out:
+	disable_irq(qup->irq);
+
+	qup->msg = NULL;
+	return ret;
+}
+
 static int qup_i2c_wait_for_complete(struct qup_i2c_dev *qup,
 				     struct i2c_msg *msg)
 {
@@ -850,7 +1178,7 @@ static int qup_i2c_xfer_v2(struct i2c_adapter *adap,
 			   int num)
 {
 	struct qup_i2c_dev *qup = i2c_get_adapdata(adap);
-	int ret, idx;
+	int ret, len, idx = 0, use_dma = 0;
 
 	ret = pm_runtime_get_sync(qup->dev);
 	if (ret < 0)
@@ -865,7 +1193,27 @@ static int qup_i2c_xfer_v2(struct i2c_adapter *adap,
 	writel(I2C_MINI_CORE | I2C_N_VAL_V2, qup->base + QUP_CONFIG);
 	writel(QUP_V2_TAGS_EN, qup->base + QUP_I2C_MASTER_GEN);
 
-	for (idx = 0; idx < num; idx++) {
+	if ((qup->is_dma)) {
+		/* All i2c_msgs should be transferred using either dma or cpu */
+		for (idx = 0; idx < num; idx++) {
+			if (msgs[idx].len == 0) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			len = (msgs[idx].len > qup->out_fifo_sz) ||
+			      (msgs[idx].len > qup->in_fifo_sz);
+
+			if ((!is_vmalloc_addr(msgs[idx].buf)) && len) {
+				use_dma = 1;
+			} else {
+				use_dma = 0;
+				break;
+			}
+		}
+	}
+
+	do {
 		if (msgs[idx].len == 0) {
 			ret = -EINVAL;
 			goto out;
@@ -884,14 +1232,15 @@ static int qup_i2c_xfer_v2(struct i2c_adapter *adap,
 
 		reinit_completion(&qup->xfer);
 
-		if (msgs[idx].flags & I2C_M_RD)
-			ret = qup_i2c_read_one_v2(qup, &msgs[idx]);
-		else
-			ret = qup_i2c_write_one_v2(qup, &msgs[idx]);
-
-		if (ret)
-			break;
-	}
+		if (use_dma) {
+			ret = qup_i2c_bam_xfer(adap, &msgs[idx], num);
+		} else {
+			if (msgs[idx].flags & I2C_M_RD)
+				ret = qup_i2c_read_one_v2(qup, &msgs[idx]);
+			else
+				ret = qup_i2c_write_one_v2(qup, &msgs[idx]);
+		}
+	} while ((idx++ < num) && !use_dma & !ret);
 
 	if (!ret)
 		ret = qup_i2c_change_state(qup, QUP_RESET_STATE);
@@ -958,6 +1307,7 @@ static int qup_i2c_probe(struct platform_device *pdev)
 	int ret, fs_div, hs_div;
 	int src_clk_freq;
 	u32 clk_freq = 100000;
+	int blocks;
 
 	qup = devm_kzalloc(&pdev->dev, sizeof(*qup), GFP_KERNEL);
 	if (!qup)
@@ -974,8 +1324,63 @@ static int qup_i2c_probe(struct platform_device *pdev)
 		qup->adap.quirks = &qup_i2c_quirks;
 	} else {
 		qup->adap.algo = &qup_i2c_algo_v2;
+		ret = qup_i2c_req_dma(qup);
+
+		if (ret == -EPROBE_DEFER)
+			goto fail_dma;
+		else if (ret != 0)
+			goto nodma;
+
+		blocks = (MX_BLOCKS << 1) + 1;
+		qup->btx.sg = devm_kzalloc(&pdev->dev,
+					   sizeof(*qup->btx.sg) * blocks,
+					   GFP_KERNEL);
+		if (!qup->btx.sg) {
+			ret = -ENOMEM;
+			goto fail_dma;
+		}
+		sg_init_table(qup->btx.sg, blocks);
+
+		qup->brx.sg = devm_kzalloc(&pdev->dev,
+					   sizeof(*qup->brx.sg) * blocks,
+					   GFP_KERNEL);
+		if (!qup->brx.sg) {
+			ret = -ENOMEM;
+			goto fail_dma;
+		}
+		sg_init_table(qup->brx.sg, blocks);
+
+		/* 2 tag bytes for each block + 5 for start, stop tags */
+		size = blocks * 2 + 5;
+		qup->dpool = dma_pool_create("qup_i2c-dma-pool", &pdev->dev,
+					     size, 4, 0);
+
+		qup->start_tag.start = dma_pool_alloc(qup->dpool, GFP_KERNEL,
+						      &qup->start_tag.addr);
+		if (!qup->start_tag.start) {
+			ret = -ENOMEM;
+			goto fail_dma;
+		}
+
+		qup->brx.tag.start = dma_pool_alloc(qup->dpool,
+						    GFP_KERNEL,
+						    &qup->brx.tag.addr);
+		if (!qup->brx.tag.start) {
+			ret = -ENOMEM;
+			goto fail_dma;
+		}
+
+		qup->btx.tag.start = dma_pool_alloc(qup->dpool,
+						    GFP_KERNEL,
+						    &qup->btx.tag.addr);
+		if (!qup->btx.tag.start) {
+			ret = -ENOMEM;
+			goto fail_dma;
+		}
+		qup->is_dma = 1;
 	}
 
+nodma:
 	/* We support frequencies up to FAST Mode (400KHz) */
 	if (!clk_freq || clk_freq > 400000) {
 		dev_err(qup->dev, "clock frequency not supported %d\n",
@@ -1093,6 +1498,11 @@ fail_runtime:
 	pm_runtime_set_suspended(qup->dev);
 fail:
 	qup_i2c_disable_clocks(qup);
+fail_dma:
+	if (qup->btx.dma)
+		dma_release_channel(qup->btx.dma);
+	if (qup->brx.dma)
+		dma_release_channel(qup->brx.dma);
 	return ret;
 }
 
@@ -1100,6 +1510,18 @@ static int qup_i2c_remove(struct platform_device *pdev)
 {
 	struct qup_i2c_dev *qup = platform_get_drvdata(pdev);
 
+	if (qup->is_dma) {
+		dma_pool_free(qup->dpool, qup->start_tag.start,
+			      qup->start_tag.addr);
+		dma_pool_free(qup->dpool, qup->brx.tag.start,
+			      qup->brx.tag.addr);
+		dma_pool_free(qup->dpool, qup->btx.tag.start,
+			      qup->btx.tag.addr);
+		dma_pool_destroy(qup->dpool);
+		dma_release_channel(qup->btx.dma);
+		dma_release_channel(qup->brx.dma);
+	}
+
 	disable_irq(qup->irq);
 	qup_i2c_disable_clocks(qup);
 	i2c_del_adapter(&qup->adap);