From patchwork Wed Jan 11 15:18:05 2017
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 9510435
From: Andy Shevchenko
To: dmaengine@vger.kernel.org, Vinod Koul, Eugeniy Paltsev
Cc: Jarkko Nikula, Andy Shevchenko
Subject: [PATCH v3 1/8] dmaengine: dw: Fix data corruption in large device to memory transfers
Date: Wed, 11 Jan 2017 17:18:05 +0200
Message-Id: <20170111151812.1037-2-andriy.shevchenko@linux.intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170111151812.1037-1-andriy.shevchenko@linux.intel.com>
References: <20170111151812.1037-1-andriy.shevchenko@linux.intel.com>
X-Mailing-List: dmaengine@vger.kernel.org

From: Jarkko Nikula

When transferring more data than the maximum block size supported by the HW
multiplied by the source width, the transfer is split into smaller chunks.
Currently the code calculates the memory width, and thus the alignment,
before splitting, for both memory-to-device and device-to-memory transfers.

For memory-to-device transfers this works fine, since the alignment is
preserved through the splitting and the split blocks remain memory-width
aligned. However, in device-to-memory transfers the alignment breaks when
the maximum block size multiplied by the register width does not have the
same alignment as the buffer. This happens, for instance, when transferring
4100 bytes (32-bit aligned) from an 8-bit register on a DW DMA controller
whose maximum block size is 4095 elements. Attempting such a transfer
caused data corruption.

Fix this by calculating and setting the destination memory width after
splitting, using the alignment and length of the split block.
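
To make the arithmetic concrete, here is a minimal user-space sketch, not
driver code: the buffer address, the lsb() helper and the printed widths are
illustrative assumptions mirroring the 4100-byte example above, with
block_size, data_width, reg_width and dlen named after the driver variables.

#include <stdio.h>
#include <strings.h>	/* ffs() */

/* Rough user-space stand-in for the kernel's __ffs(): lowest set bit. */
static unsigned int lsb(unsigned int v)
{
	return ffs(v) - 1;
}

int main(void)
{
	unsigned int mem = 0x1000;	/* 32-bit aligned buffer address */
	unsigned int len = 4100;	/* bytes to read from the device */
	unsigned int block_size = 4095;	/* max elements per block */
	unsigned int reg_width = 0;	/* 8-bit device register */
	unsigned int data_width = 4;	/* memory-side bus width in bytes */

	/* Width derived once from the whole buffer (pre-patch behaviour). */
	printf("width from whole buffer: %u bytes\n",
	       1u << lsb(data_width | mem | len));

	/* Width derived per chunk, after splitting (post-patch behaviour). */
	while (len) {
		unsigned int dlen = (len >> reg_width) > block_size ?
				    block_size << reg_width : len;

		printf("chunk at %#x, %u bytes -> width %u bytes\n",
		       mem, dlen, 1u << lsb(data_width | mem | dlen));
		mem += dlen;
		len -= dlen;
	}
	return 0;
}

The whole buffer looks 32-bit aligned, but the first split chunk of 4095
bytes is only byte aligned; programming a 32-bit destination width for it is
what corrupted the data, while recomputing the width per chunk drops it to
8-bit for the odd-sized pieces.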
Signed-off-by: Jarkko Nikula
Signed-off-by: Andy Shevchenko
---
 drivers/dma/dw/core.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index e5adf5d1c34f..45bb608a1b7c 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -789,17 +789,13 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 
 			lli_write(desc, sar, mem);
 			lli_write(desc, dar, reg);
-			lli_write(desc, ctllo, ctllo | DWC_CTLL_SRC_WIDTH(mem_width));
 			if ((len >> mem_width) > dwc->block_size) {
 				dlen = dwc->block_size << mem_width;
-				mem += dlen;
-				len -= dlen;
 			} else {
 				dlen = len;
-				len = 0;
 			}
-
 			lli_write(desc, ctlhi, dlen >> mem_width);
+			lli_write(desc, ctllo, ctllo | DWC_CTLL_SRC_WIDTH(mem_width));
 			desc->len = dlen;
 
 			if (!first) {
@@ -809,6 +805,9 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 				list_add_tail(&desc->desc_node, &first->tx_list);
 			}
 			prev = desc;
+
+			mem += dlen;
+			len -= dlen;
 			total_len += dlen;
 
 			if (len)
@@ -833,8 +832,6 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 			mem = sg_dma_address(sg);
 			len = sg_dma_len(sg);
 
-			mem_width = __ffs(data_width | mem | len);
-
 slave_sg_fromdev_fill_desc:
 			desc = dwc_desc_get(dwc);
 			if (!desc)
@@ -842,16 +839,14 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 
 			lli_write(desc, sar, reg);
 			lli_write(desc, dar, mem);
-			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
 			if ((len >> reg_width) > dwc->block_size) {
 				dlen = dwc->block_size << reg_width;
-				mem += dlen;
-				len -= dlen;
 			} else {
 				dlen = len;
-				len = 0;
 			}
 			lli_write(desc, ctlhi, dlen >> reg_width);
+			mem_width = __ffs(data_width | mem | dlen);
+			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
 			desc->len = dlen;
 
 			if (!first) {
@@ -861,6 +856,9 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 				list_add_tail(&desc->desc_node, &first->tx_list);
 			}
 			prev = desc;
+
+			mem += dlen;
+			len -= dlen;
 			total_len += dlen;
 
 			if (len)