From patchwork Mon Feb 22 16:03:40 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 8379451
From: Andy Shevchenko
To: Viresh Kumar, Andy Shevchenko, Vinod Koul, linux-kernel@vger.kernel.org,
    dmaengine@vger.kernel.org, Rob Herring, Hans-Christian Egtvedt, Tejun Heo,
    Mark Brown, Greg Kroah-Hartman, Mark Rutland, Vineet Gupta
Cc: Mans Rullgard
Subject: [PATCH v2 05/15] dmaengine: dw: set LMS field in descriptors
Date: Mon, 22 Feb 2016 18:03:40 +0200
Message-Id: <1456157030-54677-6-git-send-email-andriy.shevchenko@linux.intel.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1456157030-54677-1-git-send-email-andriy.shevchenko@linux.intel.com>
References: <1456157030-54677-1-git-send-email-andriy.shevchenko@linux.intel.com>
X-Mailing-List: dmaengine@vger.kernel.org

From: Mans Rullgard

The LMS field indicates from which master the descriptor is to be read.
This patch assumes this is always the same as the memory-side master of a
peripheral transfer, which is true for all known systems.
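For reference, the low two bits of an LLP value select the AHB master used to
fetch the next descriptor (LMS), while the remaining bits hold the 4-byte
aligned address of that descriptor (LOC). Below is a minimal stand-alone
sketch of that encoding, using the DWC_LLP_LMS()/DWC_LLP_LOC() macros this
patch adds to regs.h; the descriptor address and master number are made-up
values for illustration only:

	#include <stdio.h>

	#define DWC_LLP_LMS(x)	((x) & 3)	/* list master select */
	#define DWC_LLP_LOC(x)	((x) & ~3)	/* next lli */

	int main(void)
	{
		unsigned int desc_phys = 0x10002040;	/* hypothetical descriptor address */
		unsigned int m_master = 1;		/* hypothetical memory-side master */

		/* What the driver now programs into LLP / lli.llp */
		unsigned int llp = desc_phys | DWC_LLP_LMS(m_master);

		/* What dwc_scan_descriptors() compares against when walking the list */
		printf("llp=%#x loc=%#x lms=%u\n",
		       llp, DWC_LLP_LOC(llp), DWC_LLP_LMS(llp));

		return 0;
	}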
Signed-off-by: Mans Rullgard
Signed-off-by: Andy Shevchenko
---
 drivers/dma/dw/core.c | 19 +++++++++----------
 drivers/dma/dw/regs.h |  4 ++++
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index 67e8618..90299fe 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -264,7 +264,7 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
 	dwc_initialize(dwc);
-	channel_writel(dwc, LLP, first->txd.phys);
+	channel_writel(dwc, LLP, first->txd.phys | DWC_LLP_LMS(dwc->m_master));
 	channel_writel(dwc, CTL_LO, DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
 	channel_writel(dwc, CTL_HI, 0);
@@ -430,7 +430,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
 		dwc->residue = desc->total_len;
 		/* Check first descriptors addr */
-		if (desc->txd.phys == llp) {
+		if (desc->txd.phys == DWC_LLP_LOC(llp)) {
 			spin_unlock_irqrestore(&dwc->lock, flags);
 			return;
 		}
@@ -755,7 +755,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 		if (!first) {
 			first = desc;
 		} else {
-			lli_write(prev, llp, desc->txd.phys);
+			lli_write(prev, llp, desc->txd.phys | DWC_LLP_LMS(dwc->m_master));
 			list_add_tail(&desc->desc_node, &first->tx_list);
 		}
 		prev = desc;
@@ -852,7 +852,7 @@ slave_sg_todev_fill_desc:
 		if (!first) {
 			first = desc;
 		} else {
-			lli_write(prev, llp, desc->txd.phys);
+			lli_write(prev, llp, desc->txd.phys | DWC_LLP_LMS(dwc->m_master));
 			list_add_tail(&desc->desc_node, &first->tx_list);
 		}
 		prev = desc;
@@ -907,7 +907,7 @@ slave_sg_fromdev_fill_desc:
 		if (!first) {
 			first = desc;
 		} else {
-			lli_write(prev, llp, desc->txd.phys);
+			lli_write(prev, llp, desc->txd.phys | DWC_LLP_LMS(dwc->m_master));
 			list_add_tail(&desc->desc_node, &first->tx_list);
 		}
 		prev = desc;
@@ -1432,13 +1432,13 @@ struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
 		cdesc->desc[i] = desc;
 		if (last)
-			lli_write(last, llp, desc->txd.phys);
+			lli_write(last, llp, desc->txd.phys | DWC_LLP_LMS(dwc->m_master));
 		last = desc;
 	}
 	/* Let's make a cyclic list */
-	lli_write(last, llp, cdesc->desc[0]->txd.phys);
+	lli_write(last, llp, cdesc->desc[0]->txd.phys | DWC_LLP_LMS(dwc->m_master));
 	dev_dbg(chan2dev(&dwc->chan),
 			"cyclic prepared buf %pad len %zu period %zu periods %d\n",
@@ -1640,9 +1640,8 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
 			dwc->block_size = pdata->block_size;
 			/* Check if channel supports multi block transfer */
-			channel_writel(dwc, LLP, 0xfffffffc);
-			dwc->nollp =
-				(channel_readl(dwc, LLP) & 0xfffffffc) == 0;
+			channel_writel(dwc, LLP, DWC_LLP_LOC(0xffffffff));
+			dwc->nollp = DWC_LLP_LOC(channel_readl(dwc, LLP)) == 0;
 			channel_writel(dwc, LLP, 0);
 		}
 	}
diff --git a/drivers/dma/dw/regs.h b/drivers/dma/dw/regs.h
index 6571100..59d6cec 100644
--- a/drivers/dma/dw/regs.h
+++ b/drivers/dma/dw/regs.h
@@ -143,6 +143,10 @@ enum dw_dma_msize {
 	DW_DMA_MSIZE_256,
 };
+/* Bitfields in LLP */
+#define DWC_LLP_LMS(x)	((x) & 3)	/* list master select */
+#define DWC_LLP_LOC(x)	((x) & ~3)	/* next lli */
+
 /* Bitfields in CTL_LO */
 #define DWC_CTLL_INT_EN		(1 << 0)	/* irqs enabled? */
 #define DWC_CTLL_DST_WIDTH(n)	((n)<<1)	/* bytes per element */