From patchwork Sat Apr 11 07:09:03 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sricharan Ramabadhran
X-Patchwork-Id: 6200941
From: 
Sricharan R
To: devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-i2c@vger.kernel.org,
	iivanov@mm-sol.com, agross@codeaurora.org, galak@codeaurora.org,
	dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: sricharan@codeaurora.org
Subject: [PATCH V3 4/6] i2c: qup: Transfer every i2c_msg in i2c_msgs without stop
Date: Sat, 11 Apr 2015 12:39:03 +0530
Message-Id: <1428736145-18361-5-git-send-email-sricharan@codeaurora.org>
In-Reply-To: <1428736145-18361-1-git-send-email-sricharan@codeaurora.org>
References: <1428736145-18361-1-git-send-email-sricharan@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

The definition of struct i2c_msg says:

  "If this is the last message in a group, it is followed by a STOP.
   Otherwise it is followed by the next @i2c_msg transaction segment,
   beginning with a (repeated) START."

So the expectation is that there is no STOP in between individual
i2c_msg segments, only a repeated START. The QUP I2C hardware has no
way to suppress the STOP at the end of an individual transaction, so
the only way to implement the expected behaviour is to coalesce all
the i2c_msg segments in i2c_msgs into one transaction and transfer
them together. Add support for that.

This is required for clients such as touchscreens, which keep
incrementing a counter across individual transfers; a STOP in between
resets that counter, which is not wanted.
Signed-off-by: Sricharan R
---
 drivers/i2c/busses/i2c-qup.c | 200 ++++++++++++++++++++++++++-----------------
 1 file changed, 122 insertions(+), 78 deletions(-)

diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
index 4463ff8..1b71043 100644
--- a/drivers/i2c/busses/i2c-qup.c
+++ b/drivers/i2c/busses/i2c-qup.c
@@ -890,76 +890,94 @@ static int qup_i2c_req_dma(struct qup_i2c_dev *qup)
 	return 0;
 }
 
-static int bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg)
+static int bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg, int num)
 {
 	struct dma_async_tx_descriptor *txd, *rxd = NULL;
 	int ret = 0;
 	dma_cookie_t cookie_rx, cookie_tx;
-	u32 rx_nents = 0, tx_nents = 0, len = 0;
-	/* QUP I2C read/write limit for single command is 256bytes max*/
-	int blocks = (msg->len + QUP_READ_LIMIT) / QUP_READ_LIMIT;
-	int rem = msg->len % QUP_READ_LIMIT;
-	int tlen, i = 0, tx_len = 0;
-
-	if (msg->flags & I2C_M_RD) {
-		tx_nents = 1;
-		rx_nents = (blocks << 1) + 1;
-		sg_init_table(qup->brx.sg_rx, rx_nents);
-
-		while (i < blocks) {
-			/* transfer length set to '0' implies 256 bytes */
-			tlen = (i == (blocks - 1)) ? rem : 0;
-			len += get_start_tag(&qup->start_tag.start[len],
-					     msg, !i, (i == (blocks - 1)),
-					     tlen);
-
-			qup_sg_set_buf(&qup->brx.sg_rx[i << 1],
-				       &qup->brx.scratch_tag.start[0],
-				       &qup->brx.scratch_tag,
-				       2, qup, 0, 0);
-
-			qup_sg_set_buf(&qup->brx.sg_rx[(i << 1) + 1],
-				       &msg->buf[QUP_READ_LIMIT * i],
-				       NULL, tlen, qup, 1,
-				       DMA_FROM_DEVICE);
-
-			i++;
-		}
-
-		sg_init_one(qup->btx.sg_tx, &qup->start_tag.start[0], len);
-		qup_sg_set_buf(qup->btx.sg_tx, &qup->start_tag.start[0],
-			       &qup->start_tag, len, qup, 0, 0);
-		qup_sg_set_buf(&qup->brx.sg_rx[i << 1],
-			       &qup->brx.scratch_tag.start[1],
-			       &qup->brx.scratch_tag, 2,
-			       qup, 0, 0);
-	} else {
-		qup->btx.footer_tag.start[0] = QUP_BAM_FLUSH_STOP;
-		qup->btx.footer_tag.start[1] = QUP_BAM_FLUSH_STOP;
-
-		tx_nents = (blocks << 1) + 1;
-		sg_init_table(qup->btx.sg_tx, tx_nents);
-
-		while (i < blocks) {
-			tlen = (i == (blocks - 1)) ? rem : 0;
-			len = get_start_tag(&qup->start_tag.start[tx_len],
-					    msg, !i, (i == (blocks - 1)), tlen);
-
-			qup_sg_set_buf(&qup->btx.sg_tx[i << 1],
-				       &qup->start_tag.start[tx_len],
-				       &qup->start_tag,
-				       len, qup, 0, 0);
-
-			tx_len += len;
-			qup_sg_set_buf(&qup->btx.sg_tx[(i << 1) + 1],
-				       &msg->buf[QUP_READ_LIMIT * i], NULL,
-				       tlen, qup, 1, DMA_TO_DEVICE);
-			i++;
+	u32 rx_nents = 0, tx_nents = 0, len, blocks, rem, last;
+	u32 cur_rx_nents, cur_tx_nents;
+	u32 tlen, i, tx_len, tx_buf = 0, rx_buf = 0, off = 0;
+
+	while (num) {
+		blocks = (msg->len + QUP_READ_LIMIT) / QUP_READ_LIMIT;
+		rem = msg->len % QUP_READ_LIMIT;
+		i = 0, tx_len = 0, len = 0;
+
+		if (msg->flags & I2C_M_RD) {
+			cur_tx_nents = 1;
+			cur_rx_nents = (blocks * 2) + 1;
+			rx_nents += cur_rx_nents;
+			tx_nents += cur_tx_nents;
+
+			while (i < blocks) {
+				/* transfer length set to '0' implies 256
+				   bytes */
+				tlen = (i == (blocks - 1)) ? rem : 0;
+				last = (i == (blocks - 1)) && !(num - 1);
+				len += get_start_tag(&qup->start_tag.start[off
+						     + len],
+						     msg, !i, last, tlen);
+
+				qup_sg_set_buf(&qup->brx.sg_rx[rx_buf++],
+					       &qup->brx.scratch_tag.start[0],
+					       &qup->brx.scratch_tag,
+					       2, qup, 0, 0);
+
+				qup_sg_set_buf(&qup->brx.sg_rx[rx_buf++],
+					       &msg->buf[QUP_READ_LIMIT * i],
+					       NULL, tlen, qup,
+					       1, DMA_FROM_DEVICE);
+				i++;
+			}
+			qup_sg_set_buf(&qup->btx.sg_tx[tx_buf++],
+				       &qup->start_tag.start[off],
+				       &qup->start_tag, len, qup, 0, 0);
+			off += len;
+			qup_sg_set_buf(&qup->brx.sg_rx[rx_buf++],
+				       &qup->brx.scratch_tag.start[1],
+				       &qup->brx.scratch_tag, 2,
+				       qup, 0, 0);
+		} else {
+			cur_tx_nents = (blocks * 2);
+			tx_nents += cur_tx_nents;
+
+			while (i < blocks) {
+				tlen = (i == (blocks - 1)) ? rem : 0;
+				last = (i == (blocks - 1)) && !(num - 1);
+				len = get_start_tag(&qup->start_tag.start[off
+						    + tx_len],
+						    msg, !i, last, tlen);
+
+				qup_sg_set_buf(&qup->btx.sg_tx[tx_buf++],
+					       &qup->start_tag.start[off +
+					       tx_len],
+					       &qup->start_tag, len,
+					       qup, 0, 0);
+
+				tx_len += len;
+				qup_sg_set_buf(&qup->btx.sg_tx[tx_buf++],
+					       &msg->buf[QUP_READ_LIMIT * i],
+					       NULL, tlen, qup, 1,
+					       DMA_TO_DEVICE);
+				i++;
+			}
+			off += tx_len;
+
+			if (!(num - 1)) {
+				qup->btx.footer_tag.start[0] =
+							QUP_BAM_FLUSH_STOP;
+				qup->btx.footer_tag.start[1] =
+							QUP_BAM_FLUSH_STOP;
+				qup_sg_set_buf(&qup->btx.sg_tx[tx_buf++],
+					       &qup->btx.footer_tag.start[0],
+					       &qup->btx.footer_tag, 2,
+					       qup, 0, 0);
+				tx_nents += 1;
+			}
 		}
-		qup_sg_set_buf(&qup->btx.sg_tx[i << 1],
-			       &qup->btx.footer_tag.start[0],
-			       &qup->btx.footer_tag, 2,
-			       qup, 0, 0);
+		msg++;
+		num--;
 	}
 
 	txd = dmaengine_prep_slave_sg(qup->btx.dma_tx, qup->btx.sg_tx, tx_nents,
@@ -1006,10 +1024,20 @@ static int bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg)
 
 	if (ret || qup->bus_err || qup->qup_err) {
 		if (qup->bus_err & QUP_I2C_NACK_FLAG)
+			msg--;
 			dev_err(qup->dev, "NACK from %x\n", msg->addr);
 		ret = -EIO;
+
+		if (qup_i2c_change_state(qup, QUP_RUN_STATE)) {
+			dev_err(qup->dev, "change to run state timed out");
+			return ret;
+		}
+
+		writel(QUP_BAM_INPUT_EOT, qup->base + QUP_OUT_FIFO_BASE);
+		writel(QUP_BAM_FLUSH_STOP, qup->base + QUP_OUT_FIFO_BASE);
+		writel(QUP_BAM_FLUSH_STOP, qup->base + QUP_OUT_FIFO_BASE);
+		writel(QUP_BAM_FLUSH_STOP, qup->base + QUP_OUT_FIFO_BASE);
 		qup_i2c_flush(qup);
-		qup_i2c_change_state(qup, QUP_RUN_STATE);
 
 		/* wait for remaining interrupts to occur */
 		if (!wait_for_completion_timeout(&qup->xfer, HZ))
@@ -1022,7 +1050,7 @@ desc_err:
 	return ret;
 }
 
-static int qup_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg)
+static int qup_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
 {
 	struct qup_i2c_dev *qup = i2c_get_adapdata(adap);
 	int ret = 0;
@@ -1051,7 +1079,7 @@ static int qup_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg)
 	writel(qup->clk_ctl, qup->base + QUP_I2C_CLK_CTL);
 
 	qup->msg = msg;
-	ret = bam_do_xfer(qup, qup->msg);
+	ret = bam_do_xfer(qup, qup->msg, num);
 
 out:
 	disable_irq(qup->irq);
@@ -1064,7 +1092,7 @@ static int qup_i2c_xfer(struct i2c_adapter *adap,
 			int num)
 {
 	struct qup_i2c_dev *qup = i2c_get_adapdata(adap);
-	int ret, idx, last, len;
+	int ret, idx, last, use_dma = 0, len = 0;
 
 	ret = pm_runtime_get_sync(qup->dev);
 	if (ret < 0)
@@ -1083,12 +1111,27 @@ static int qup_i2c_xfer(struct i2c_adapter *adap,
 		writel(I2C_MINI_CORE | I2C_N_VAL, qup->base + QUP_CONFIG);
 	}
 
-	for (idx = 0; idx < num; idx++) {
-		if (msgs[idx].len == 0) {
-			ret = -EINVAL;
-			goto out;
+	if ((qup->is_dma)) {
+		/* All i2c_msgs should be transferred using either dma or cpu */
+		for (idx = 0; idx < num; idx++) {
+			if (msgs[idx].len == 0) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			if (!len)
+				len = ((&msgs[idx])->len) > qup->out_fifo_sz;
+
+			if ((!is_vmalloc_addr((&msgs[idx])->buf)) && len) {
+				use_dma = 1;
+			} else {
+				use_dma = 0;
+				break;
+			}
 		}
+	}
 
+	for (idx = 0; idx < num; idx++) {
 		if (qup_i2c_poll_state_i2c_master(qup)) {
 			ret = -EIO;
 			goto out;
@@ -1096,11 +1139,9 @@ static int qup_i2c_xfer(struct i2c_adapter *adap,
 
 		reinit_completion(&qup->xfer);
 
-		len = (&msgs[idx])->len;
-
-		if ((qup->is_dma) && (!is_vmalloc_addr((&msgs[idx])->buf)) &&
-		    (len > qup->out_fifo_sz)) {
-			ret = qup_bam_xfer(adap, &msgs[idx]);
+		if (use_dma) {
+			ret = qup_bam_xfer(adap, &msgs[idx], num);
+			idx = num;
 		} else {
 			last = (idx == (num - 1));
 
 			if (msgs[idx].flags & I2C_M_RD)
@@ -1215,6 +1256,8 @@ static int qup_i2c_probe(struct platform_device *pdev)
 			ret = -ENOMEM;
 			goto fail;
 		}
+		sg_init_table(qup->btx.sg_tx, blocks);
+
 		qup->brx.sg_rx = devm_kzalloc(&pdev->dev,
 					      sizeof(*qup->btx.sg_tx) * blocks,
 					      GFP_KERNEL);
@@ -1222,6 +1265,7 @@ static int qup_i2c_probe(struct platform_device *pdev)
 			ret = -ENOMEM;
 			goto fail;
 		}
+		sg_init_table(qup->brx.sg_rx, blocks);
 
 		size = sizeof(struct qup_i2c_tag) * (blocks + 3);
 		qup->dpool = dma_pool_create("qup_i2c-dma-pool", &pdev->dev,