From patchwork Tue Dec 15 17:30:10 2020
X-Patchwork-Submitter: Gustavo Pimentel
X-Patchwork-Id: 11975413
From: Gustavo Pimentel
To: Gustavo Pimentel, Vinod Koul, Dan Williams
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/15] dmaengine: dw-edma: Add writeq() and readq() for 64 bits architectures
Date: Tue, 15 Dec 2020 18:30:10 +0100
Message-Id: <9e8442071029d7be60b38d4c1661766a6761a4e7.1608053262.git.gustavo.pimentel@synopsys.com>
X-Mailing-List: dmaengine@vger.kernel.org

Add writeq() and readq() support for 64-bit architectures. With these two
functions, 64-bit eDMA registers can be written or read in a single access
instead of two consecutive 32-bit operations. This also enables a PCI
transaction optimization: one 64-bit message is generated instead of two
32-bit messages.
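To make the 32-bit vs. 64-bit access paths concrete, here is a minimal sketch
of the pattern the patch follows for a register exposed both as one u64 and as
two u32 halves. The names demo_regs and demo_set_llp are illustrative only and
are not part of the driver; just the union layout and the writeq()/writel()
split under CONFIG_64BIT mirror what the diff below does.

#include <linux/kernel.h>
#include <linux/io.h>

struct demo_regs {
	union {
		u64 reg;		/* 64-bit view of the register pair */
		struct {
			u32 lsb;	/* lower 32 bits */
			u32 msb;	/* upper 32 bits */
		};
	} llp;
} __packed;

static void demo_set_llp(struct demo_regs __iomem *regs, u64 paddr)
{
#ifdef CONFIG_64BIT
	/* One 64-bit MMIO write: the address pair is updated in a single access. */
	writeq(paddr, &regs->llp.reg);
#else
	/* Fall back to two consecutive 32-bit MMIO writes on 32-bit kernels. */
	writel(lower_32_bits(paddr), &regs->llp.lsb);
	writel(upper_32_bits(paddr), &regs->llp.msb);
#endif
}

In the driver itself this pattern shows up through the SET_LL_64()/SET_LL_32()
and SET_CH_64()/SET_CH_32() macro pairs introduced by the diff below.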
Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-v0-core.c | 254 +++++++++++++++++++++++-------- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 48 +++--- drivers/dma/dw-edma/dw-edma-v0-regs.h | 149 +++++++++++++----- 3 files changed, 326 insertions(+), 125 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 692de47..7888eda 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -28,29 +28,69 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) return dw->rg_region.vaddr; } -#define SET(dw, name, value) \ +#define SET_32(dw, name, value) \ writel(value, &(__dw_regs(dw)->name)) -#define GET(dw, name) \ +#define GET_32(dw, name) \ readl(&(__dw_regs(dw)->name)) -#define SET_RW(dw, dir, name, value) \ +#define SET_RW_32(dw, dir, name, value) \ do { \ if ((dir) == EDMA_DIR_WRITE) \ - SET(dw, wr_##name, value); \ + SET_32(dw, wr_##name, value); \ else \ - SET(dw, rd_##name, value); \ + SET_32(dw, rd_##name, value); \ } while (0) -#define GET_RW(dw, dir, name) \ +#define GET_RW_32(dw, dir, name) \ ((dir) == EDMA_DIR_WRITE \ - ? GET(dw, wr_##name) \ - : GET(dw, rd_##name)) + ? GET_32(dw, wr_##name) \ + : GET_32(dw, rd_##name)) -#define SET_BOTH(dw, name, value) \ +#define SET_BOTH_32(dw, name, value) \ do { \ - SET(dw, wr_##name, value); \ - SET(dw, rd_##name, value); \ + SET_32(dw, wr_##name, value); \ + SET_32(dw, rd_##name, value); \ + } while (0) + +#ifdef CONFIG_64BIT + +#define SET_64(dw, name, value) \ + writeq(value, &(__dw_regs(dw)->name)) + +#define GET_64(dw, name) \ + readq(&(__dw_regs(dw)->name)) + +#define SET_RW_64(dw, dir, name, value) \ + do { \ + if ((dir) == EDMA_DIR_WRITE) \ + SET_64(dw, wr_##name, value); \ + else \ + SET_64(dw, rd_##name, value); \ + } while (0) + +#define GET_RW_64(dw, dir, name) \ + ((dir) == EDMA_DIR_WRITE \ + ? 
GET_64(dw, wr_##name) \ + : GET_64(dw, rd_##name)) + +#define SET_BOTH_64(dw, name, value) \ + do { \ + SET_64(dw, wr_##name, value); \ + SET_64(dw, rd_##name, value); \ + } while (0) + +#endif /* CONFIG_64BIT */ + +#define SET_COMPAT(dw, name, value) \ + writel(value, &(__dw_regs(dw)->type.unroll.name)) + +#define SET_RW_COMPAT(dw, dir, name, value) \ + do { \ + if ((dir) == EDMA_DIR_WRITE) \ + SET_COMPAT(dw, wr_##name, value); \ + else \ + SET_COMPAT(dw, rd_##name, value); \ } while (0) static inline struct dw_edma_v0_ch_regs __iomem * @@ -115,21 +155,86 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, return value; } -#define SET_CH(dw, dir, ch, name, value) \ +#define SET_CH_32(dw, dir, ch, name, value) \ writel_ch(dw, dir, ch, value, &(__dw_ch_regs(dw, dir, ch)->name)) -#define GET_CH(dw, dir, ch, name) \ +#define GET_CH_32(dw, dir, ch, name) \ readl_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name)) -#define SET_LL(ll, value) \ +#define SET_LL_32(ll, value) \ writel(value, ll) +#ifdef CONFIG_64BIT + +static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, + u64 value, void __iomem *addr) +{ + if (dw->mf == EDMA_MF_EDMA_LEGACY) { + u32 viewport_sel; + unsigned long flags; + + raw_spin_lock_irqsave(&dw->lock, flags); + + viewport_sel = FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch); + if (dir == EDMA_DIR_READ) + viewport_sel |= BIT(31); + + writel(viewport_sel, + &(__dw_regs(dw)->type.legacy.viewport_sel)); + writeq(value, addr); + + raw_spin_unlock_irqrestore(&dw->lock, flags); + } else { + writeq(value, addr); + } +} + +static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, + const void __iomem *addr) +{ + u32 value; + + if (dw->mf == EDMA_MF_EDMA_LEGACY) { + u32 viewport_sel; + unsigned long flags; + + raw_spin_lock_irqsave(&dw->lock, flags); + + viewport_sel = FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch); + if (dir == EDMA_DIR_READ) + viewport_sel |= BIT(31); + + writel(viewport_sel, + &(__dw_regs(dw)->type.legacy.viewport_sel)); + value = readq(addr); + + raw_spin_unlock_irqrestore(&dw->lock, flags); + } else { + value = readq(addr); + } + + return value; +} + +#define SET_CH_64(dw, dir, ch, name, value) \ + writeq_ch(dw, dir, ch, value, &(__dw_ch_regs(dw, dir, ch)->name)) + +#define GET_CH_64(dw, dir, ch, name) \ + readq_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name)) + +#define SET_LL_64(ll, value) \ + writeq(value, ll) + +#endif /* CONFIG_64BIT */ + /* eDMA management callbacks */ void dw_edma_v0_core_off(struct dw_edma *dw) { - SET_BOTH(dw, int_mask, EDMA_V0_DONE_INT_MASK | EDMA_V0_ABORT_INT_MASK); - SET_BOTH(dw, int_clear, EDMA_V0_DONE_INT_MASK | EDMA_V0_ABORT_INT_MASK); - SET_BOTH(dw, engine_en, 0); + SET_BOTH_32(dw, int_mask, + EDMA_V0_DONE_INT_MASK | EDMA_V0_ABORT_INT_MASK); + SET_BOTH_32(dw, int_clear, + EDMA_V0_DONE_INT_MASK | EDMA_V0_ABORT_INT_MASK); + SET_BOTH_32(dw, engine_en, 0); } u16 dw_edma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir) @@ -137,9 +242,11 @@ u16 dw_edma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir) u32 num_ch; if (dir == EDMA_DIR_WRITE) - num_ch = FIELD_GET(EDMA_V0_WRITE_CH_COUNT_MASK, GET(dw, ctrl)); + num_ch = FIELD_GET(EDMA_V0_WRITE_CH_COUNT_MASK, + GET_32(dw, ctrl)); else - num_ch = FIELD_GET(EDMA_V0_READ_CH_COUNT_MASK, GET(dw, ctrl)); + num_ch = FIELD_GET(EDMA_V0_READ_CH_COUNT_MASK, + GET_32(dw, ctrl)); if (num_ch > EDMA_V0_MAX_NR_CH) num_ch = EDMA_V0_MAX_NR_CH; @@ -153,7 +260,7 @@ enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan) u32 
tmp; tmp = FIELD_GET(EDMA_V0_CH_STATUS_MASK, - GET_CH(dw, chan->dir, chan->id, ch_control1)); + GET_CH_32(dw, chan->dir, chan->id, ch_control1)); if (tmp == 1) return DMA_IN_PROGRESS; @@ -167,26 +274,28 @@ void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan) { struct dw_edma *dw = chan->chip->dw; - SET_RW(dw, chan->dir, int_clear, - FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id))); + SET_RW_32(dw, chan->dir, int_clear, + FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id))); } void dw_edma_v0_core_clear_abort_int(struct dw_edma_chan *chan) { struct dw_edma *dw = chan->chip->dw; - SET_RW(dw, chan->dir, int_clear, - FIELD_PREP(EDMA_V0_ABORT_INT_MASK, BIT(chan->id))); + SET_RW_32(dw, chan->dir, int_clear, + FIELD_PREP(EDMA_V0_ABORT_INT_MASK, BIT(chan->id))); } u32 dw_edma_v0_core_status_done_int(struct dw_edma *dw, enum dw_edma_dir dir) { - return FIELD_GET(EDMA_V0_DONE_INT_MASK, GET_RW(dw, dir, int_status)); + return FIELD_GET(EDMA_V0_DONE_INT_MASK, + GET_RW_32(dw, dir, int_status)); } u32 dw_edma_v0_core_status_abort_int(struct dw_edma *dw, enum dw_edma_dir dir) { - return FIELD_GET(EDMA_V0_ABORT_INT_MASK, GET_RW(dw, dir, int_status)); + return FIELD_GET(EDMA_V0_ABORT_INT_MASK, + GET_RW_32(dw, dir, int_status)); } static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) @@ -209,15 +318,23 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) control |= (DW_EDMA_V0_LIE | DW_EDMA_V0_RIE); /* Channel control */ - SET_LL(&lli[i].control, control); + SET_LL_32(&lli[i].control, control); /* Transfer size */ - SET_LL(&lli[i].transfer_size, child->sz); - /* SAR - low, high */ - SET_LL(&lli[i].sar_low, lower_32_bits(child->sar)); - SET_LL(&lli[i].sar_high, upper_32_bits(child->sar)); - /* DAR - low, high */ - SET_LL(&lli[i].dar_low, lower_32_bits(child->dar)); - SET_LL(&lli[i].dar_high, upper_32_bits(child->dar)); + SET_LL_32(&lli[i].transfer_size, child->sz); + /* SAR */ + #ifdef CONFIG_64BIT + SET_LL_64(&lli[i].sar.reg, child->sar); + #else /* CONFIG_64BIT */ + SET_LL_32(&lli[i].sar.lsb, lower_32_bits(child->sar)); + SET_LL_32(&lli[i].sar.msb, upper_32_bits(child->sar)); + #endif /* CONFIG_64BIT */ + /* DAR */ + #ifdef CONFIG_64BIT + SET_LL_64(&lli[i].dar.reg, child->dar); + #else /* CONFIG_64BIT */ + SET_LL_32(&lli[i].dar.lsb, lower_32_bits(child->dar)); + SET_LL_32(&lli[i].dar.msb, upper_32_bits(child->dar)); + #endif /* CONFIG_64BIT */ i++; } @@ -227,10 +344,14 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) control |= DW_EDMA_V0_CB; /* Channel control */ - SET_LL(&llp->control, control); - /* Linked list - low, high */ - SET_LL(&llp->llp_low, lower_32_bits(chunk->ll_region.paddr)); - SET_LL(&llp->llp_high, upper_32_bits(chunk->ll_region.paddr)); + SET_LL_32(&llp->control, control); + /* Linked list */ + #ifdef CONFIG_64BIT + SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr); + #else /* CONFIG_64BIT */ + SET_LL_32(&llp->llp.lsb, lower_32_bits(chunk->ll_region.paddr)); + SET_LL_32(&llp->llp.msb, upper_32_bits(chunk->ll_region.paddr)); + #endif /* CONFIG_64BIT */ } void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) @@ -243,28 +364,33 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) if (first) { /* Enable engine */ - SET_RW(dw, chan->dir, engine_en, BIT(0)); + SET_RW_32(dw, chan->dir, engine_en, BIT(0)); /* Interrupt unmask - done, abort */ - tmp = GET_RW(dw, chan->dir, int_mask); + tmp = GET_RW_32(dw, chan->dir, int_mask); tmp &= ~FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id)); tmp &= 
~FIELD_PREP(EDMA_V0_ABORT_INT_MASK, BIT(chan->id)); - SET_RW(dw, chan->dir, int_mask, tmp); + SET_RW_32(dw, chan->dir, int_mask, tmp); /* Linked list error */ - tmp = GET_RW(dw, chan->dir, linked_list_err_en); + tmp = GET_RW_32(dw, chan->dir, linked_list_err_en); tmp |= FIELD_PREP(EDMA_V0_LINKED_LIST_ERR_MASK, BIT(chan->id)); - SET_RW(dw, chan->dir, linked_list_err_en, tmp); + SET_RW_32(dw, chan->dir, linked_list_err_en, tmp); /* Channel control */ - SET_CH(dw, chan->dir, chan->id, ch_control1, - (DW_EDMA_V0_CCS | DW_EDMA_V0_LLE)); - /* Linked list - low, high */ - SET_CH(dw, chan->dir, chan->id, llp_low, - lower_32_bits(chunk->ll_region.paddr)); - SET_CH(dw, chan->dir, chan->id, llp_high, - upper_32_bits(chunk->ll_region.paddr)); + SET_CH_32(dw, chan->dir, chan->id, ch_control1, + (DW_EDMA_V0_CCS | DW_EDMA_V0_LLE)); + /* Linked list */ + #ifdef CONFIG_64BIT + SET_CH_64(dw, chan->dir, chan->id, llp.reg, + chunk->ll_region.paddr); + #else /* CONFIG_64BIT */ + SET_CH_32(dw, chan->dir, chan->id, llp.lsb, + lower_32_bits(chunk->ll_region.paddr)); + SET_CH_32(dw, chan->dir, chan->id, llp.msb, + upper_32_bits(chunk->ll_region.paddr)); + #endif /* CONFIG_64BIT */ } /* Doorbell */ - SET_RW(dw, chan->dir, doorbell, - FIELD_PREP(EDMA_V0_DOORBELL_CH_MASK, chan->id)); + SET_RW_32(dw, chan->dir, doorbell, + FIELD_PREP(EDMA_V0_DOORBELL_CH_MASK, chan->id)); } int dw_edma_v0_core_device_config(struct dw_edma_chan *chan) @@ -273,31 +399,31 @@ int dw_edma_v0_core_device_config(struct dw_edma_chan *chan) u32 tmp = 0; /* MSI done addr - low, high */ - SET_RW(dw, chan->dir, done_imwr_low, chan->msi.address_lo); - SET_RW(dw, chan->dir, done_imwr_high, chan->msi.address_hi); + SET_RW_32(dw, chan->dir, done_imwr.lsb, chan->msi.address_lo); + SET_RW_32(dw, chan->dir, done_imwr.msb, chan->msi.address_hi); /* MSI abort addr - low, high */ - SET_RW(dw, chan->dir, abort_imwr_low, chan->msi.address_lo); - SET_RW(dw, chan->dir, abort_imwr_high, chan->msi.address_hi); + SET_RW_32(dw, chan->dir, abort_imwr.lsb, chan->msi.address_lo); + SET_RW_32(dw, chan->dir, abort_imwr.msb, chan->msi.address_hi); /* MSI data - low, high */ switch (chan->id) { case 0: case 1: - tmp = GET_RW(dw, chan->dir, ch01_imwr_data); + tmp = GET_RW_32(dw, chan->dir, ch01_imwr_data); break; case 2: case 3: - tmp = GET_RW(dw, chan->dir, ch23_imwr_data); + tmp = GET_RW_32(dw, chan->dir, ch23_imwr_data); break; case 4: case 5: - tmp = GET_RW(dw, chan->dir, ch45_imwr_data); + tmp = GET_RW_32(dw, chan->dir, ch45_imwr_data); break; case 6: case 7: - tmp = GET_RW(dw, chan->dir, ch67_imwr_data); + tmp = GET_RW_32(dw, chan->dir, ch67_imwr_data); break; } @@ -316,22 +442,22 @@ int dw_edma_v0_core_device_config(struct dw_edma_chan *chan) switch (chan->id) { case 0: case 1: - SET_RW(dw, chan->dir, ch01_imwr_data, tmp); + SET_RW_32(dw, chan->dir, ch01_imwr_data, tmp); break; case 2: case 3: - SET_RW(dw, chan->dir, ch23_imwr_data, tmp); + SET_RW_32(dw, chan->dir, ch23_imwr_data, tmp); break; case 4: case 5: - SET_RW(dw, chan->dir, ch45_imwr_data, tmp); + SET_RW_32(dw, chan->dir, ch45_imwr_data, tmp); break; case 6: case 7: - SET_RW(dw, chan->dir, ch67_imwr_data, tmp); + SET_RW_32(dw, chan->dir, ch67_imwr_data, tmp); break; } diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 6f62711..a5e2783 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -114,12 +114,12 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, REGISTER(ch_control1), 
REGISTER(ch_control2), REGISTER(transfer_size), - REGISTER(sar_low), - REGISTER(sar_high), - REGISTER(dar_low), - REGISTER(dar_high), - REGISTER(llp_low), - REGISTER(llp_high), + REGISTER(sar.lsb), + REGISTER(sar.msb), + REGISTER(dar.lsb), + REGISTER(dar.msb), + REGISTER(llp.lsb), + REGISTER(llp.msb), }; nr_entries = ARRAY_SIZE(debugfs_regs); @@ -132,17 +132,17 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) /* eDMA global registers */ WR_REGISTER(engine_en), WR_REGISTER(doorbell), - WR_REGISTER(ch_arb_weight_low), - WR_REGISTER(ch_arb_weight_high), + WR_REGISTER(ch_arb_weight.lsb), + WR_REGISTER(ch_arb_weight.msb), /* eDMA interrupts registers */ WR_REGISTER(int_status), WR_REGISTER(int_mask), WR_REGISTER(int_clear), WR_REGISTER(err_status), - WR_REGISTER(done_imwr_low), - WR_REGISTER(done_imwr_high), - WR_REGISTER(abort_imwr_low), - WR_REGISTER(abort_imwr_high), + WR_REGISTER(done_imwr.lsb), + WR_REGISTER(done_imwr.msb), + WR_REGISTER(abort_imwr.lsb), + WR_REGISTER(abort_imwr.msb), WR_REGISTER(ch01_imwr_data), WR_REGISTER(ch23_imwr_data), WR_REGISTER(ch45_imwr_data), @@ -152,8 +152,8 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) const struct debugfs_entries debugfs_unroll_regs[] = { /* eDMA channel context grouping */ WR_REGISTER_UNROLL(engine_chgroup), - WR_REGISTER_UNROLL(engine_hshake_cnt_low), - WR_REGISTER_UNROLL(engine_hshake_cnt_high), + WR_REGISTER_UNROLL(engine_hshake_cnt.lsb), + WR_REGISTER_UNROLL(engine_hshake_cnt.msb), WR_REGISTER_UNROLL(ch0_pwr_en), WR_REGISTER_UNROLL(ch1_pwr_en), WR_REGISTER_UNROLL(ch2_pwr_en), @@ -200,19 +200,19 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) /* eDMA global registers */ RD_REGISTER(engine_en), RD_REGISTER(doorbell), - RD_REGISTER(ch_arb_weight_low), - RD_REGISTER(ch_arb_weight_high), + RD_REGISTER(ch_arb_weight.lsb), + RD_REGISTER(ch_arb_weight.msb), /* eDMA interrupts registers */ RD_REGISTER(int_status), RD_REGISTER(int_mask), RD_REGISTER(int_clear), - RD_REGISTER(err_status_low), - RD_REGISTER(err_status_high), + RD_REGISTER(err_status.lsb), + RD_REGISTER(err_status.msb), RD_REGISTER(linked_list_err_en), - RD_REGISTER(done_imwr_low), - RD_REGISTER(done_imwr_high), - RD_REGISTER(abort_imwr_low), - RD_REGISTER(abort_imwr_high), + RD_REGISTER(done_imwr.lsb), + RD_REGISTER(done_imwr.msb), + RD_REGISTER(abort_imwr.lsb), + RD_REGISTER(abort_imwr.msb), RD_REGISTER(ch01_imwr_data), RD_REGISTER(ch23_imwr_data), RD_REGISTER(ch45_imwr_data), @@ -221,8 +221,8 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) const struct debugfs_entries debugfs_unroll_regs[] = { /* eDMA channel context grouping */ RD_REGISTER_UNROLL(engine_chgroup), - RD_REGISTER_UNROLL(engine_hshake_cnt_low), - RD_REGISTER_UNROLL(engine_hshake_cnt_high), + RD_REGISTER_UNROLL(engine_hshake_cnt.lsb), + RD_REGISTER_UNROLL(engine_hshake_cnt.msb), RD_REGISTER_UNROLL(ch0_pwr_en), RD_REGISTER_UNROLL(ch1_pwr_en), RD_REGISTER_UNROLL(ch2_pwr_en), diff --git a/drivers/dma/dw-edma/dw-edma-v0-regs.h b/drivers/dma/dw-edma/dw-edma-v0-regs.h index dfd70e2..d07151d 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-regs.h +++ b/drivers/dma/dw-edma/dw-edma-v0-regs.h @@ -28,30 +28,55 @@ struct dw_edma_v0_ch_regs { u32 ch_control1; /* 0x000 */ u32 ch_control2; /* 0x004 */ u32 transfer_size; /* 0x008 */ - u32 sar_low; /* 0x00c */ - u32 sar_high; /* 0x010 */ - u32 dar_low; /* 0x014 */ - u32 dar_high; /* 0x018 */ - u32 llp_low; /* 0x01c */ - u32 llp_high; /* 0x020 */ -}; + union { + u64 reg; /* 0x00c..0x010 */ + struct { + u32 lsb; /* 0x00c */ + u32 msb; /* 0x010 */ 
+ }; + } sar; + union { + u64 reg; /* 0x014..0x018 */ + struct { + u32 lsb; /* 0x014 */ + u32 msb; /* 0x018 */ + }; + } dar; + union { + u64 reg; /* 0x01c..0x020 */ + struct { + u32 lsb; /* 0x01c */ + u32 msb; /* 0x020 */ + }; + } llp; +} __packed; struct dw_edma_v0_ch { struct dw_edma_v0_ch_regs wr; /* 0x200 */ u32 padding_1[55]; /* [0x224..0x2fc] */ struct dw_edma_v0_ch_regs rd; /* 0x300 */ u32 padding_2[55]; /* [0x324..0x3fc] */ -}; +} __packed; struct dw_edma_v0_unroll { u32 padding_1; /* 0x0f8 */ u32 wr_engine_chgroup; /* 0x100 */ u32 rd_engine_chgroup; /* 0x104 */ - u32 wr_engine_hshake_cnt_low; /* 0x108 */ - u32 wr_engine_hshake_cnt_high; /* 0x10c */ + union { + u64 reg; /* 0x108..0x10c */ + struct { + u32 lsb; /* 0x108 */ + u32 msb; /* 0x10c */ + }; + } wr_engine_hshake_cnt; u32 padding_2[2]; /* [0x110..0x114] */ - u32 rd_engine_hshake_cnt_low; /* 0x118 */ - u32 rd_engine_hshake_cnt_high; /* 0x11c */ + union { + u64 reg; /* 0x120..0x124 */ + struct { + u32 lsb; /* 0x120 */ + u32 msb; /* 0x124 */ + }; + } rd_engine_hshake_cnt; u32 padding_3[2]; /* [0x120..0x124] */ u32 wr_ch0_pwr_en; /* 0x128 */ u32 wr_ch1_pwr_en; /* 0x12c */ @@ -72,12 +97,12 @@ struct dw_edma_v0_unroll { u32 rd_ch7_pwr_en; /* 0x184 */ u32 padding_5[30]; /* [0x188..0x1fc] */ struct dw_edma_v0_ch ch[EDMA_V0_MAX_NR_CH]; /* [0x200..0x1120] */ -}; +} __packed; struct dw_edma_v0_legacy { u32 viewport_sel; /* 0x0f8 */ struct dw_edma_v0_ch_regs ch; /* [0x100..0x120] */ -}; +} __packed; struct dw_edma_v0_regs { /* eDMA global registers */ @@ -87,14 +112,24 @@ struct dw_edma_v0_regs { u32 wr_engine_en; /* 0x00c */ u32 wr_doorbell; /* 0x010 */ u32 padding_2; /* 0x014 */ - u32 wr_ch_arb_weight_low; /* 0x018 */ - u32 wr_ch_arb_weight_high; /* 0x01c */ + union { + u64 reg; /* 0x018..0x01c */ + struct { + u32 lsb; /* 0x018 */ + u32 msb; /* 0x01c */ + }; + } wr_ch_arb_weight; u32 padding_3[3]; /* [0x020..0x028] */ u32 rd_engine_en; /* 0x02c */ u32 rd_doorbell; /* 0x030 */ u32 padding_4; /* 0x034 */ - u32 rd_ch_arb_weight_low; /* 0x038 */ - u32 rd_ch_arb_weight_high; /* 0x03c */ + union { + u64 reg; /* 0x038..0x03c */ + struct { + u32 lsb; /* 0x038 */ + u32 msb; /* 0x03c */ + }; + } rd_ch_arb_weight; u32 padding_5[3]; /* [0x040..0x048] */ /* eDMA interrupts registers */ u32 wr_int_status; /* 0x04c */ @@ -102,10 +137,20 @@ struct dw_edma_v0_regs { u32 wr_int_mask; /* 0x054 */ u32 wr_int_clear; /* 0x058 */ u32 wr_err_status; /* 0x05c */ - u32 wr_done_imwr_low; /* 0x060 */ - u32 wr_done_imwr_high; /* 0x064 */ - u32 wr_abort_imwr_low; /* 0x068 */ - u32 wr_abort_imwr_high; /* 0x06c */ + union { + u64 reg; /* 0x060..0x064 */ + struct { + u32 lsb; /* 0x060 */ + u32 msb; /* 0x064 */ + }; + } wr_done_imwr; + union { + u64 reg; /* 0x068..0x06c */ + struct { + u32 lsb; /* 0x068 */ + u32 msb; /* 0x06c */ + }; + } wr_abort_imwr; u32 wr_ch01_imwr_data; /* 0x070 */ u32 wr_ch23_imwr_data; /* 0x074 */ u32 wr_ch45_imwr_data; /* 0x078 */ @@ -118,15 +163,30 @@ struct dw_edma_v0_regs { u32 rd_int_mask; /* 0x0a8 */ u32 rd_int_clear; /* 0x0ac */ u32 padding_10; /* 0x0b0 */ - u32 rd_err_status_low; /* 0x0b4 */ - u32 rd_err_status_high; /* 0x0b8 */ + union { + u64 reg; /* 0x0b4..0x0b8 */ + struct { + u32 lsb; /* 0x0b4 */ + u32 msb; /* 0x0b8 */ + }; + } rd_err_status; u32 padding_11[2]; /* [0x0bc..0x0c0] */ u32 rd_linked_list_err_en; /* 0x0c4 */ u32 padding_12; /* 0x0c8 */ - u32 rd_done_imwr_low; /* 0x0cc */ - u32 rd_done_imwr_high; /* 0x0d0 */ - u32 rd_abort_imwr_low; /* 0x0d4 */ - u32 rd_abort_imwr_high; /* 0x0d8 */ + union { + u64 reg; /* 0x0cc..0x0d0 */ 
+ struct { + u32 lsb; /* 0x0cc */ + u32 msb; /* 0x0d0 */ + }; + } rd_done_imwr; + union { + u64 reg; /* 0x0d4..0x0d8 */ + struct { + u32 lsb; /* 0x0d4 */ + u32 msb; /* 0x0d8 */ + }; + } rd_abort_imwr; u32 rd_ch01_imwr_data; /* 0x0dc */ u32 rd_ch23_imwr_data; /* 0x0e0 */ u32 rd_ch45_imwr_data; /* 0x0e4 */ @@ -137,22 +197,37 @@ struct dw_edma_v0_regs { struct dw_edma_v0_legacy legacy; /* [0x0f8..0x120] */ struct dw_edma_v0_unroll unroll; /* [0x0f8..0x1120] */ } type; -}; +} __packed; struct dw_edma_v0_lli { u32 control; u32 transfer_size; - u32 sar_low; - u32 sar_high; - u32 dar_low; - u32 dar_high; -}; + union { + u64 reg; + struct { + u32 lsb; + u32 msb; + }; + } sar; + union { + u64 reg; + struct { + u32 lsb; + u32 msb; + }; + } dar; +} __packed; struct dw_edma_v0_llp { u32 control; u32 reserved; - u32 llp_low; - u32 llp_high; -}; + union { + u64 reg; + struct { + u32 lsb; + u32 msb; + }; + } llp; +} __packed; #endif /* _DW_EDMA_V0_REGS_H */ From patchwork Tue Dec 15 17:30:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975417 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65A77C2BB48 for ; Tue, 15 Dec 2020 17:41:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 39BED2255F for ; Tue, 15 Dec 2020 17:41:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731269AbgLORlZ (ORCPT ); Tue, 15 Dec 2020 12:41:25 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.87.133]:46014 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731040AbgLORbi (ORCPT ); Tue, 15 Dec 2020 12:31:38 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id E8118C04D0; Tue, 15 Dec 2020 17:30:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053437; bh=oNfEKarVynQpWQYDjVDdd5U8tWSCEBgnT+5HnoYP8SI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=JFwIZvF5DM7HV26zwl5nYjXj82/8PcbaQvPdgYzFl4l6BeDWcppxyhXy1npyVK3Y4 Cb3AcmpRAl/6gmW8+uhB8iaz4pb18X23iHOx9P4XOhovx83PGTXhQpW/UKQx7tSRQL BXh0GYG0NzJHucJ2IOUJb9PGLU846q3LuqvO3HgWfMyhf3ORn740FogzAWM5ZBZoFT 3r65aWlPplC7Q2eOkCTkYhBI1a7ZOMIK9rslxggons6YL6o/LDmBD6VTOmiFUjAbjW k1w5NeWNj+SNyw7KZmNruP8T8kSpF8i0+72po2D30MkfrxLcaC7dUD9vX/8PfEEl/v onU4nTfF4EB/w== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id 755ABA024A; Tue, 15 Dec 2020 17:30:35 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 02/15] dmaengine: dw-edma: Fix comments offset characters' 
alignment Date: Tue, 15 Dec 2020 18:30:11 +0100 Message-Id: <3525b67abb340d57d579f4d15edaf261855f96c4.1608053262.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Fix comments offset characters' alignment to follow the same structure of similar comments. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-v0-regs.h | 214 +++++++++++++++++----------------- 1 file changed, 107 insertions(+), 107 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-regs.h b/drivers/dma/dw-edma/dw-edma-v0-regs.h index d07151d..e175f7b 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-regs.h +++ b/drivers/dma/dw-edma/dw-edma-v0-regs.h @@ -25,177 +25,177 @@ #define EDMA_V0_CH_EVEN_MSI_DATA_MASK GENMASK(15, 0) struct dw_edma_v0_ch_regs { - u32 ch_control1; /* 0x000 */ - u32 ch_control2; /* 0x004 */ - u32 transfer_size; /* 0x008 */ + u32 ch_control1; /* 0x0000 */ + u32 ch_control2; /* 0x0004 */ + u32 transfer_size; /* 0x0008 */ union { - u64 reg; /* 0x00c..0x010 */ + u64 reg; /* 0x000c..0x0010 */ struct { - u32 lsb; /* 0x00c */ - u32 msb; /* 0x010 */ + u32 lsb; /* 0x000c */ + u32 msb; /* 0x0010 */ }; } sar; union { - u64 reg; /* 0x014..0x018 */ + u64 reg; /* 0x0014..0x0018 */ struct { - u32 lsb; /* 0x014 */ - u32 msb; /* 0x018 */ + u32 lsb; /* 0x0014 */ + u32 msb; /* 0x0018 */ }; } dar; union { - u64 reg; /* 0x01c..0x020 */ + u64 reg; /* 0x001c..0x0020 */ struct { - u32 lsb; /* 0x01c */ - u32 msb; /* 0x020 */ + u32 lsb; /* 0x001c */ + u32 msb; /* 0x0020 */ }; } llp; } __packed; struct dw_edma_v0_ch { - struct dw_edma_v0_ch_regs wr; /* 0x200 */ - u32 padding_1[55]; /* [0x224..0x2fc] */ - struct dw_edma_v0_ch_regs rd; /* 0x300 */ - u32 padding_2[55]; /* [0x324..0x3fc] */ + struct dw_edma_v0_ch_regs wr; /* 0x0200 */ + u32 padding_1[55]; /* 0x0224..0x02fc */ + struct dw_edma_v0_ch_regs rd; /* 0x0300 */ + u32 padding_2[55]; /* 0x0324..0x03fc */ } __packed; struct dw_edma_v0_unroll { - u32 padding_1; /* 0x0f8 */ - u32 wr_engine_chgroup; /* 0x100 */ - u32 rd_engine_chgroup; /* 0x104 */ + u32 padding_1; /* 0x00f8 */ + u32 wr_engine_chgroup; /* 0x0100 */ + u32 rd_engine_chgroup; /* 0x0104 */ union { - u64 reg; /* 0x108..0x10c */ + u64 reg; /* 0x0108..0x010c */ struct { - u32 lsb; /* 0x108 */ - u32 msb; /* 0x10c */ + u32 lsb; /* 0x0108 */ + u32 msb; /* 0x010c */ }; } wr_engine_hshake_cnt; - u32 padding_2[2]; /* [0x110..0x114] */ + u32 padding_2[2]; /* 0x0110..0x0114 */ union { - u64 reg; /* 0x120..0x124 */ + u64 reg; /* 0x0120..0x0124 */ struct { - u32 lsb; /* 0x120 */ - u32 msb; /* 0x124 */ + u32 lsb; /* 0x0120 */ + u32 msb; /* 0x0124 */ }; } rd_engine_hshake_cnt; - u32 padding_3[2]; /* [0x120..0x124] */ - u32 wr_ch0_pwr_en; /* 0x128 */ - u32 wr_ch1_pwr_en; /* 0x12c */ - u32 wr_ch2_pwr_en; /* 0x130 */ - u32 wr_ch3_pwr_en; /* 0x134 */ - u32 wr_ch4_pwr_en; /* 0x138 */ - u32 wr_ch5_pwr_en; /* 0x13c */ - u32 wr_ch6_pwr_en; /* 0x140 */ - u32 wr_ch7_pwr_en; /* 0x144 */ - u32 padding_4[8]; /* [0x148..0x164] */ - u32 rd_ch0_pwr_en; /* 0x168 */ - u32 rd_ch1_pwr_en; /* 0x16c */ - u32 rd_ch2_pwr_en; /* 0x170 */ - u32 rd_ch3_pwr_en; /* 0x174 */ - u32 rd_ch4_pwr_en; /* 0x178 */ - u32 rd_ch5_pwr_en; /* 0x18c */ - u32 rd_ch6_pwr_en; /* 0x180 */ - u32 rd_ch7_pwr_en; /* 0x184 */ - u32 padding_5[30]; /* [0x188..0x1fc] */ - struct dw_edma_v0_ch ch[EDMA_V0_MAX_NR_CH]; /* [0x200..0x1120] */ + u32 padding_3[2]; /* 0x0120..0x0124 */ + u32 wr_ch0_pwr_en; /* 0x0128 */ + u32 wr_ch1_pwr_en; /* 0x012c */ + 
u32 wr_ch2_pwr_en; /* 0x0130 */ + u32 wr_ch3_pwr_en; /* 0x0134 */ + u32 wr_ch4_pwr_en; /* 0x0138 */ + u32 wr_ch5_pwr_en; /* 0x013c */ + u32 wr_ch6_pwr_en; /* 0x0140 */ + u32 wr_ch7_pwr_en; /* 0x0144 */ + u32 padding_4[8]; /* 0x0148..0x0164 */ + u32 rd_ch0_pwr_en; /* 0x0168 */ + u32 rd_ch1_pwr_en; /* 0x016c */ + u32 rd_ch2_pwr_en; /* 0x0170 */ + u32 rd_ch3_pwr_en; /* 0x0174 */ + u32 rd_ch4_pwr_en; /* 0x0178 */ + u32 rd_ch5_pwr_en; /* 0x018c */ + u32 rd_ch6_pwr_en; /* 0x0180 */ + u32 rd_ch7_pwr_en; /* 0x0184 */ + u32 padding_5[30]; /* 0x0188..0x01fc */ + struct dw_edma_v0_ch ch[EDMA_V0_MAX_NR_CH]; /* 0x0200..0x1120 */ } __packed; struct dw_edma_v0_legacy { - u32 viewport_sel; /* 0x0f8 */ - struct dw_edma_v0_ch_regs ch; /* [0x100..0x120] */ + u32 viewport_sel; /* 0x00f8 */ + struct dw_edma_v0_ch_regs ch; /* 0x0100..0x0120 */ } __packed; struct dw_edma_v0_regs { /* eDMA global registers */ - u32 ctrl_data_arb_prior; /* 0x000 */ - u32 padding_1; /* 0x004 */ - u32 ctrl; /* 0x008 */ - u32 wr_engine_en; /* 0x00c */ - u32 wr_doorbell; /* 0x010 */ - u32 padding_2; /* 0x014 */ + u32 ctrl_data_arb_prior; /* 0x0000 */ + u32 padding_1; /* 0x0004 */ + u32 ctrl; /* 0x0008 */ + u32 wr_engine_en; /* 0x000c */ + u32 wr_doorbell; /* 0x0010 */ + u32 padding_2; /* 0x0014 */ union { - u64 reg; /* 0x018..0x01c */ + u64 reg; /* 0x0018..0x001c */ struct { - u32 lsb; /* 0x018 */ - u32 msb; /* 0x01c */ + u32 lsb; /* 0x0018 */ + u32 msb; /* 0x001c */ }; } wr_ch_arb_weight; - u32 padding_3[3]; /* [0x020..0x028] */ - u32 rd_engine_en; /* 0x02c */ - u32 rd_doorbell; /* 0x030 */ - u32 padding_4; /* 0x034 */ + u32 padding_3[3]; /* 0x0020..0x0028 */ + u32 rd_engine_en; /* 0x002c */ + u32 rd_doorbell; /* 0x0030 */ + u32 padding_4; /* 0x0034 */ union { - u64 reg; /* 0x038..0x03c */ + u64 reg; /* 0x0038..0x003c */ struct { - u32 lsb; /* 0x038 */ - u32 msb; /* 0x03c */ + u32 lsb; /* 0x0038 */ + u32 msb; /* 0x003c */ }; } rd_ch_arb_weight; - u32 padding_5[3]; /* [0x040..0x048] */ + u32 padding_5[3]; /* 0x0040..0x0048 */ /* eDMA interrupts registers */ - u32 wr_int_status; /* 0x04c */ - u32 padding_6; /* 0x050 */ - u32 wr_int_mask; /* 0x054 */ - u32 wr_int_clear; /* 0x058 */ - u32 wr_err_status; /* 0x05c */ + u32 wr_int_status; /* 0x004c */ + u32 padding_6; /* 0x0050 */ + u32 wr_int_mask; /* 0x0054 */ + u32 wr_int_clear; /* 0x0058 */ + u32 wr_err_status; /* 0x005c */ union { - u64 reg; /* 0x060..0x064 */ + u64 reg; /* 0x0060..0x0064 */ struct { - u32 lsb; /* 0x060 */ - u32 msb; /* 0x064 */ + u32 lsb; /* 0x0060 */ + u32 msb; /* 0x0064 */ }; } wr_done_imwr; union { - u64 reg; /* 0x068..0x06c */ + u64 reg; /* 0x0068..0x006c */ struct { - u32 lsb; /* 0x068 */ - u32 msb; /* 0x06c */ + u32 lsb; /* 0x0068 */ + u32 msb; /* 0x006c */ }; } wr_abort_imwr; - u32 wr_ch01_imwr_data; /* 0x070 */ - u32 wr_ch23_imwr_data; /* 0x074 */ - u32 wr_ch45_imwr_data; /* 0x078 */ - u32 wr_ch67_imwr_data; /* 0x07c */ - u32 padding_7[4]; /* [0x080..0x08c] */ - u32 wr_linked_list_err_en; /* 0x090 */ - u32 padding_8[3]; /* [0x094..0x09c] */ - u32 rd_int_status; /* 0x0a0 */ - u32 padding_9; /* 0x0a4 */ - u32 rd_int_mask; /* 0x0a8 */ - u32 rd_int_clear; /* 0x0ac */ - u32 padding_10; /* 0x0b0 */ + u32 wr_ch01_imwr_data; /* 0x0070 */ + u32 wr_ch23_imwr_data; /* 0x0074 */ + u32 wr_ch45_imwr_data; /* 0x0078 */ + u32 wr_ch67_imwr_data; /* 0x007c */ + u32 padding_7[4]; /* 0x0080..0x008c */ + u32 wr_linked_list_err_en; /* 0x0090 */ + u32 padding_8[3]; /* 0x0094..0x009c */ + u32 rd_int_status; /* 0x00a0 */ + u32 padding_9; /* 0x00a4 */ + u32 rd_int_mask; /* 0x00a8 */ + 
u32 rd_int_clear; /* 0x00ac */ + u32 padding_10; /* 0x00b0 */ union { - u64 reg; /* 0x0b4..0x0b8 */ + u64 reg; /* 0x00b4..0x00b8 */ struct { - u32 lsb; /* 0x0b4 */ - u32 msb; /* 0x0b8 */ + u32 lsb; /* 0x00b4 */ + u32 msb; /* 0x00b8 */ }; } rd_err_status; - u32 padding_11[2]; /* [0x0bc..0x0c0] */ - u32 rd_linked_list_err_en; /* 0x0c4 */ - u32 padding_12; /* 0x0c8 */ + u32 padding_11[2]; /* 0x00bc..0x00c0 */ + u32 rd_linked_list_err_en; /* 0x00c4 */ + u32 padding_12; /* 0x00c8 */ union { - u64 reg; /* 0x0cc..0x0d0 */ + u64 reg; /* 0x00cc..0x00d0 */ struct { - u32 lsb; /* 0x0cc */ - u32 msb; /* 0x0d0 */ + u32 lsb; /* 0x00cc */ + u32 msb; /* 0x00d0 */ }; } rd_done_imwr; union { - u64 reg; /* 0x0d4..0x0d8 */ + u64 reg; /* 0x00d4..0x00d8 */ struct { - u32 lsb; /* 0x0d4 */ - u32 msb; /* 0x0d8 */ + u32 lsb; /* 0x00d4 */ + u32 msb; /* 0x00d8 */ }; } rd_abort_imwr; - u32 rd_ch01_imwr_data; /* 0x0dc */ - u32 rd_ch23_imwr_data; /* 0x0e0 */ - u32 rd_ch45_imwr_data; /* 0x0e4 */ - u32 rd_ch67_imwr_data; /* 0x0e8 */ - u32 padding_13[4]; /* [0x0ec..0x0f8] */ + u32 rd_ch01_imwr_data; /* 0x00dc */ + u32 rd_ch23_imwr_data; /* 0x00e0 */ + u32 rd_ch45_imwr_data; /* 0x00e4 */ + u32 rd_ch67_imwr_data; /* 0x00e8 */ + u32 padding_13[4]; /* 0x00ec..0x00f8 */ /* eDMA channel context grouping */ union dw_edma_v0_type { - struct dw_edma_v0_legacy legacy; /* [0x0f8..0x120] */ - struct dw_edma_v0_unroll unroll; /* [0x0f8..0x1120] */ + struct dw_edma_v0_legacy legacy; /* 0x00f8..0x0120 */ + struct dw_edma_v0_unroll unroll; /* 0x00f8..0x1120 */ } type; } __packed; From patchwork Tue Dec 15 17:30:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975411 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0EAF8C2BBCA for ; Tue, 15 Dec 2020 17:41:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B714E22B3B for ; Tue, 15 Dec 2020 17:41:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731129AbgLORbm (ORCPT ); Tue, 15 Dec 2020 12:31:42 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.73.133]:54412 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731018AbgLORbj (ORCPT ); Tue, 15 Dec 2020 12:31:39 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id DCA5C4039E; Tue, 15 Dec 2020 17:30:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053438; bh=gCI3nNoPy8dCurab6Uc+GVpEmLeCzjb3I1ubHtDkaBI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=B/PQbS8d0T2EFydPxweAub7F8uE3dhmVwVlSQgUXw/yq665HvkYjzWaGFJ/jrTd9o vXhds3oQ14UPKx7D+K31XlUc3VEFaS1KMfYwp25muYXp54ukkPf+HmK7F6NFrIMdMj 
u/r9eSVK7lMzj0ZMrCjC8AoL1Pgptec1ypkhtsJcCYWHIbNJVt5oSWqtqKBNSKxP16 XCfTU6nk23tVbgdRKdJvzlQrv7gL8ty3rzrJX9Ekzep9dgJfLkQy9ENiGDeqQib/RK laVZJKdoeX9jqsYbV9CuC21BbDVBCMQY2jS1yBRdsji4mSsWcUwZPj1jNjS+5ukFds WPmCCeghcYcTQ== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id B6643A024B; Tue, 15 Dec 2020 17:30:36 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 03/15] dmaengine: dw-edma: Add support for the HDMA feature Date: Tue, 15 Dec 2020 18:30:12 +0100 Message-Id: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Add support for the HDMA feature. This new feature enables the current eDMA IP to use a deeper prefetch of the linked list, which reduces the algorithm execution latency observed when loading the elements of the list, causing more stable and higher data transfer. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.h | 10 ++++---- drivers/dma/dw-edma/dw-edma-pcie.c | 23 ++++++++--------- drivers/dma/dw-edma/dw-edma-v0-core.c | 42 +++++++++++++++++++++++++++++--- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 9 +++---- 4 files changed, 60 insertions(+), 24 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 31fc50d..3f9593e 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -21,9 +21,10 @@ enum dw_edma_dir { EDMA_DIR_READ }; -enum dw_edma_mode { - EDMA_MODE_LEGACY = 0, - EDMA_MODE_UNROLL +enum dw_edma_map_format { + EDMA_MF_EDMA_LEGACY = 0x0, + EDMA_MF_EDMA_UNROLL = 0x1, + EDMA_MF_HDMA_COMPAT = 0x5 }; enum dw_edma_request { @@ -123,8 +124,7 @@ struct dw_edma { struct dw_edma_irq *irq; int nr_irqs; - u32 version; - enum dw_edma_mode mode; + enum dw_edma_map_format mf; struct dw_edma_chan *chan; const struct dw_edma_core_ops *ops; diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index 1eafc60..c130549 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -30,8 +30,7 @@ struct dw_edma_pcie_data { off_t dt_off; size_t dt_sz; /* Other */ - u32 version; - enum dw_edma_mode mode; + enum dw_edma_map_format mf; u8 irqs; }; @@ -49,8 +48,7 @@ static const struct dw_edma_pcie_data snps_edda_data = { .dt_off = 0x00800000, /* 8 Mbytes */ .dt_sz = 0x03800000, /* 56 Mbytes */ /* Other */ - .version = 0, - .mode = EDMA_MODE_UNROLL, + .mf = EDMA_MF_EDMA_UNROLL, .irqs = 1, }; @@ -69,8 +67,8 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, const struct dw_edma_pcie_data *pdata = (void *)pid->driver_data; struct device *dev = &pdev->dev; struct dw_edma_chip *chip; - int err, nr_irqs; struct dw_edma *dw; + int err, nr_irqs; /* Enable PCI device */ err = pcim_enable_device(pdev); @@ -157,16 +155,19 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, dw->dt_region.paddr += pdata->dt_off; dw->dt_region.sz = pdata->dt_sz; - dw->version = pdata->version; - dw->mode = pdata->mode; + dw->mf = pdata->mf; dw->nr_irqs = nr_irqs; dw->ops = &dw_edma_pcie_core_ops; /* Debug info */ - pci_dbg(pdev, "Version:\t%u\n", dw->version); - - pci_dbg(pdev, "Mode:\t%s\n", - dw->mode == EDMA_MODE_LEGACY ? 
"Legacy" : "Unroll"); + if (dw->mf == EDMA_MF_EDMA_LEGACY) + pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", dw->mf); + else if (dw->mf == EDMA_MF_EDMA_UNROLL) + pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", dw->mf); + else if (dw->mf == EDMA_MF_HDMA_COMPAT) + pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", dw->mf); + else + pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", dw->mf); pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", pdata->rg_bar, pdata->rg_off, pdata->rg_sz, diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 7888eda..5b0541a 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -96,7 +96,7 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) static inline struct dw_edma_v0_ch_regs __iomem * __dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch) { - if (dw->mode == EDMA_MODE_LEGACY) + if (dw->mf == EDMA_MF_EDMA_LEGACY) return &(__dw_regs(dw)->type.legacy.ch); if (dir == EDMA_DIR_WRITE) @@ -108,7 +108,7 @@ __dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch) static inline void writel_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, u32 value, void __iomem *addr) { - if (dw->mode == EDMA_MODE_LEGACY) { + if (dw->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; unsigned long flags; @@ -133,7 +133,7 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, { u32 value; - if (dw->mode == EDMA_MODE_LEGACY) { + if (dw->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; unsigned long flags; @@ -365,6 +365,42 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) if (first) { /* Enable engine */ SET_RW_32(dw, chan->dir, engine_en, BIT(0)); + if (dw->mf == EDMA_MF_HDMA_COMPAT) { + switch (chan->id) { + case 0: + SET_RW_COMPAT(dw, chan->dir, ch0_pwr_en, + BIT(0)); + break; + case 1: + SET_RW_COMPAT(dw, chan->dir, ch1_pwr_en, + BIT(0)); + break; + case 2: + SET_RW_COMPAT(dw, chan->dir, ch2_pwr_en, + BIT(0)); + break; + case 3: + SET_RW_COMPAT(dw, chan->dir, ch3_pwr_en, + BIT(0)); + break; + case 4: + SET_RW_COMPAT(dw, chan->dir, ch4_pwr_en, + BIT(0)); + break; + case 5: + SET_RW_COMPAT(dw, chan->dir, ch5_pwr_en, + BIT(0)); + break; + case 6: + SET_RW_COMPAT(dw, chan->dir, ch6_pwr_en, + BIT(0)); + break; + case 7: + SET_RW_COMPAT(dw, chan->dir, ch7_pwr_en, + BIT(0)); + break; + } + } /* Interrupt unmask - done, abort */ tmp = GET_RW_32(dw, chan->dir, int_mask); tmp &= ~FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id)); diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index a5e2783..157dfc2 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -55,7 +55,7 @@ struct debugfs_entries { static int dw_edma_debugfs_u32_get(void *data, u64 *val) { void __iomem *reg = (void __force __iomem *)data; - if (dw->mode == EDMA_MODE_LEGACY && + if (dw->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)®s->type.legacy.ch) { void __iomem *ptr = ®s->type.legacy.ch; u32 viewport_sel = 0; @@ -174,7 +174,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); - if (dw->mode == EDMA_MODE_UNROLL) { + if (dw->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, regs_dir); @@ -243,7 +243,7 @@ static void 
dw_edma_debugfs_regs_rd(struct dentry *dir) nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); - if (dw->mode == EDMA_MODE_UNROLL) { + if (dw->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, regs_dir); @@ -297,8 +297,7 @@ void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) if (!base_dir) return; - debugfs_create_u32("version", 0444, base_dir, &dw->version); - debugfs_create_u32("mode", 0444, base_dir, &dw->mode); + debugfs_create_u32("mf", 0444, base_dir, &dw->mf); debugfs_create_u16("wr_ch_cnt", 0444, base_dir, &dw->wr_ch_cnt); debugfs_create_u16("rd_ch_cnt", 0444, base_dir, &dw->rd_ch_cnt); From patchwork Tue Dec 15 17:30:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975407 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D65BFC2BB48 for ; Tue, 15 Dec 2020 17:41:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9655122ADC for ; Tue, 15 Dec 2020 17:41:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731138AbgLORbm (ORCPT ); Tue, 15 Dec 2020 12:31:42 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.73.133]:54420 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731108AbgLORbk (ORCPT ); Tue, 15 Dec 2020 12:31:40 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id ABEF240442; Tue, 15 Dec 2020 17:30:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053439; bh=1ivAVxpeQx5tfOmUaqDB9DxxCl8IL2EkAl3Vm7KC/wc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=d8tZNL7H/LgC3jxg0ljNYwGn7GOmYBjAwM+/27ntmxz0E/0XFfMq3Q7zMllqIJZT4 LLcDPqHbC+3FKyBKcUyh3/WSSUC/XPdyzfh5FcXQAnrfl8uVK5s84i7iIV7QmUrYJu gapxrYH4m9j2SCgGP1oRfgIMyHzeo1sIB6zBV50/65pp0VXzy9Z2k0l5hY5p4mGmiq r0ycdiOWYqVvZbBE/TcW/YlVe7bFr/2hcklDDu+PccoiIi/zJEDlAminbLIHEaaYWs iUZ61wIkwl3HU1+ga4qzsuXmisEr11ZkMikdWr+NlTpv/eqsg+lptJf0ipQb0DFMiT +8emZ6Dhtfuiw== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id 89959A024B; Tue, 15 Dec 2020 17:30:38 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Dan Williams , Vinod Koul Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 05/15] dmaengine: dw-edma: Add PCIe VSEC data retrieval support Date: Tue, 15 Dec 2020 18:30:14 +0100 Message-Id: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org The latest eDMA 
IP development implements a Vendor-Specific Extended Capability that contains the eDMA BAR, offset, map format, and the number of read/write channels available. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 20 ++++--- drivers/dma/dw-edma/dw-edma-pcie.c | 114 ++++++++++++++++++++++++++++--------- 2 files changed, 99 insertions(+), 35 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index b971505..b65c32e1 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -863,15 +863,19 @@ int dw_edma_probe(struct dw_edma_chip *chip) raw_spin_lock_init(&dw->lock); - /* Find out how many write channels are supported by hardware */ - dw->wr_ch_cnt = dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE); - if (!dw->wr_ch_cnt) - return -EINVAL; + if (!dw->wr_ch_cnt) { + /* Find out how many write channels are supported by hardware */ + dw->wr_ch_cnt = dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE); + if (!dw->wr_ch_cnt) + return -EINVAL; + } - /* Find out how many read channels are supported by hardware */ - dw->rd_ch_cnt = dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ); - if (!dw->rd_ch_cnt) - return -EINVAL; + if (!dw->rd_ch_cnt) { + /* Find out how many read channels are supported by hardware */ + dw->rd_ch_cnt = dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ); + if (!dw->rd_ch_cnt) + return -EINVAL; + } dev_vdbg(dev, "Channels:\twrite=%d, read=%d\n", dw->wr_ch_cnt, dw->rd_ch_cnt); diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index c130549..2bd31b0 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -13,9 +13,16 @@ #include #include #include +#include #include "dw-edma-core.h" +#define DW_PCIE_VSEC_DMA_ID 0x6 +#define DW_PCIE_VSEC_DMA_BAR GENMASK(10, 8) +#define DW_PCIE_VSEC_DMA_MAP GENMASK(2, 0) +#define DW_PCIE_VSEC_DMA_RD_CH GENMASK(25, 16) +#define DW_PCIE_VSEC_DMA_WR_CH GENMASK(9, 0) + struct dw_edma_pcie_data { /* eDMA registers location */ enum pci_barno rg_bar; @@ -32,6 +39,8 @@ struct dw_edma_pcie_data { /* Other */ enum dw_edma_map_format mf; u8 irqs; + u16 rd_ch_cnt; + u16 wr_ch_cnt; }; static const struct dw_edma_pcie_data snps_edda_data = { @@ -50,6 +59,8 @@ static const struct dw_edma_pcie_data snps_edda_data = { /* Other */ .mf = EDMA_MF_EDMA_UNROLL, .irqs = 1, + .rd_ch_cnt = 0, + .wr_ch_cnt = 0, }; static int dw_edma_pcie_irq_vector(struct device *dev, unsigned int nr) @@ -61,10 +72,49 @@ static const struct dw_edma_core_ops dw_edma_pcie_core_ops = { .irq_vector = dw_edma_pcie_irq_vector, }; +static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev, + struct dw_edma_pcie_data *pdata) +{ + u32 val, map; + int vsec; + u64 off; + + vsec = pci_find_vsec_capability(pdev, DW_PCIE_VSEC_DMA_ID); + if (!vsec) + return; + + pci_read_config_dword(pdev, vsec + 0x4, &val); + if (PCI_VSEC_CAP_REV(val) != 0x00 || PCI_VSEC_CAP_LEN(val) != 0x18) + return; + + pci_dbg(pdev, "Detected PCIe Vendor-Specific Extended Capability DMA\n"); + pci_read_config_dword(pdev, vsec + 0x8, &val); + map = FIELD_GET(DW_PCIE_VSEC_DMA_MAP, val); + if (map != EDMA_MF_EDMA_LEGACY && + map != EDMA_MF_EDMA_UNROLL && + map != EDMA_MF_HDMA_COMPAT) + return; + + pdata->mf = map; + pdata->rg_bar = FIELD_GET(DW_PCIE_VSEC_DMA_BAR, val); + + pci_read_config_dword(pdev, vsec + 0xc, &val); + pdata->rd_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_RD_CH, val); + pdata->wr_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_WR_CH, val); + + pci_read_config_dword(pdev, vsec + 0x14, 
&val); + off = val; + pci_read_config_dword(pdev, vsec + 0x10, &val); + off <<= 32; + off |= val; + pdata->rg_off = off; +} + static int dw_edma_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *pid) { - const struct dw_edma_pcie_data *pdata = (void *)pid->driver_data; + struct dw_edma_pcie_data *pdata = (void *)pid->driver_data; + struct dw_edma_pcie_data vsec_data; struct device *dev = &pdev->dev; struct dw_edma_chip *chip; struct dw_edma *dw; @@ -77,10 +127,18 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return err; } + memcpy(&vsec_data, pdata, sizeof(struct dw_edma_pcie_data)); + + /* + * Tries to find if exists a PCIe Vendor-Specific Extended Capability + * for the DMA, if exists one, then reconfigures with the new data + */ + dw_edma_pcie_get_vsec_dma_data(pdev, &vsec_data); + /* Mapping PCI BAR regions */ - err = pcim_iomap_regions(pdev, BIT(pdata->rg_bar) | - BIT(pdata->ll_bar) | - BIT(pdata->dt_bar), + err = pcim_iomap_regions(pdev, BIT(vsec_data.rg_bar) | + BIT(vsec_data.ll_bar) | + BIT(vsec_data.dt_bar), pci_name(pdev)); if (err) { pci_err(pdev, "eDMA BAR I/O remapping failed\n"); @@ -123,7 +181,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return -ENOMEM; /* IRQs allocation */ - nr_irqs = pci_alloc_irq_vectors(pdev, 1, pdata->irqs, + nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data.irqs, PCI_IRQ_MSI | PCI_IRQ_MSIX); if (nr_irqs < 1) { pci_err(pdev, "fail to alloc IRQ vector (number of IRQs=%u)\n", @@ -137,27 +195,29 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, chip->id = pdev->devfn; chip->irq = pdev->irq; - dw->rg_region.vaddr = pcim_iomap_table(pdev)[pdata->rg_bar]; - dw->rg_region.vaddr += pdata->rg_off; - dw->rg_region.paddr = pdev->resource[pdata->rg_bar].start; - dw->rg_region.paddr += pdata->rg_off; - dw->rg_region.sz = pdata->rg_sz; - - dw->ll_region.vaddr = pcim_iomap_table(pdev)[pdata->ll_bar]; - dw->ll_region.vaddr += pdata->ll_off; - dw->ll_region.paddr = pdev->resource[pdata->ll_bar].start; - dw->ll_region.paddr += pdata->ll_off; - dw->ll_region.sz = pdata->ll_sz; - - dw->dt_region.vaddr = pcim_iomap_table(pdev)[pdata->dt_bar]; - dw->dt_region.vaddr += pdata->dt_off; - dw->dt_region.paddr = pdev->resource[pdata->dt_bar].start; - dw->dt_region.paddr += pdata->dt_off; - dw->dt_region.sz = pdata->dt_sz; - - dw->mf = pdata->mf; + dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg_bar]; + dw->rg_region.vaddr += vsec_data.rg_off; + dw->rg_region.paddr = pdev->resource[vsec_data.rg_bar].start; + dw->rg_region.paddr += vsec_data.rg_off; + dw->rg_region.sz = vsec_data.rg_sz; + + dw->ll_region.vaddr = pcim_iomap_table(pdev)[vsec_data.ll_bar]; + dw->ll_region.vaddr += vsec_data.ll_off; + dw->ll_region.paddr = pdev->resource[vsec_data.ll_bar].start; + dw->ll_region.paddr += vsec_data.ll_off; + dw->ll_region.sz = vsec_data.ll_sz; + + dw->dt_region.vaddr = pcim_iomap_table(pdev)[vsec_data.dt_bar]; + dw->dt_region.vaddr += vsec_data.dt_off; + dw->dt_region.paddr = pdev->resource[vsec_data.dt_bar].start; + dw->dt_region.paddr += vsec_data.dt_off; + dw->dt_region.sz = vsec_data.dt_sz; + + dw->mf = vsec_data.mf; dw->nr_irqs = nr_irqs; dw->ops = &dw_edma_pcie_core_ops; + dw->rd_ch_cnt = vsec_data.rd_ch_cnt; + dw->wr_ch_cnt = vsec_data.wr_ch_cnt; /* Debug info */ if (dw->mf == EDMA_MF_EDMA_LEGACY) @@ -170,15 +230,15 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", dw->mf); pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", - pdata->rg_bar, 
pdata->rg_off, pdata->rg_sz, + vsec_data.rg_bar, vsec_data.rg_off, vsec_data.rg_sz, dw->rg_region.vaddr, &dw->rg_region.paddr); pci_dbg(pdev, "L. List:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", - pdata->ll_bar, pdata->ll_off, pdata->ll_sz, + vsec_data.ll_bar, vsec_data.ll_off, vsec_data.ll_sz, dw->ll_region.vaddr, &dw->ll_region.paddr); pci_dbg(pdev, "Data:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", - pdata->dt_bar, pdata->dt_off, pdata->dt_sz, + vsec_data.dt_bar, vsec_data.dt_off, vsec_data.dt_sz, dw->dt_region.vaddr, &dw->dt_region.paddr); pci_dbg(pdev, "Nr. IRQs:\t%u\n", dw->nr_irqs); From patchwork Tue Dec 15 17:30:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975401 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89B14C2BB9A for ; Tue, 15 Dec 2020 17:40:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3FA4E2255F for ; Tue, 15 Dec 2020 17:40:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731149AbgLORbm (ORCPT ); Tue, 15 Dec 2020 12:31:42 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.73.133]:54424 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731123AbgLORbl (ORCPT ); Tue, 15 Dec 2020 12:31:41 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id 9AD7D40464; Tue, 15 Dec 2020 17:30:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053441; bh=ZwuAEc7PYGYf65aM+oFkF22mTyotdTO38xdBbdz9Hvs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=Tbe9ACe25I7ksxgO+UMMRhMYmCfmDBpsI59wuGrTsQZPaqhGs8Q6FSSQI6LOPeHIa 9djYvdCtGMIaYVFn4zNdrvn4FIoJFPdfEOLZ7l8YqmvgVm/qvD2Z7LNEGjMY2bA2AN 69lT9QJ8EqqWmv2rmmM7iCTipF6EfVEp0pKTQuWDag14Pio/FqJuCYFymeZeadYeSA tJfGBy97Iu8CLvOB7CakuWpV3UI1DQnYhYcRsMTaUVbwZAWHjCjd7uulVvSOBAi87l QJfXv8k6SIvi0E7o1rLBMsE0fFSuWFnm7h1wwSKuuDOWmWjTmufsyVudigZNpo51h9 pi4xGnCEgxyiw== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id 6D53EA005C; Tue, 15 Dec 2020 17:30:39 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 06/15] dmaengine: dw-edma: Add device_prep_interleave_dma() support Date: Tue, 15 Dec 2020 18:30:15 +0100 Message-Id: <9626b4344cc67dc22172935222ad8c466e53edf8.1608053262.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Add device_prep_interleave_dma() 
support to Synopsys DMA driver. This feature implements a similar data transfer mechanism to the scatter-gather implementation. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 85 ++++++++++++++++++++++++++++++-------- drivers/dma/dw-edma/dw-edma-core.h | 13 ++++-- 2 files changed, 78 insertions(+), 20 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index b65c32e1..0fe3835 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -329,7 +329,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) struct dw_edma_chunk *chunk; struct dw_edma_burst *burst; struct dw_edma_desc *desc; - u32 cnt; + u32 cnt = 0; int i; if (!chan->configured) @@ -352,12 +352,19 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) return NULL; } - if (xfer->cyclic) { + if (xfer->type == EDMA_XFER_CYCLIC) { if (!xfer->xfer.cyclic.len || !xfer->xfer.cyclic.cnt) return NULL; - } else { + } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { if (xfer->xfer.sg.len < 1) return NULL; + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { + if (!xfer->xfer.il->numf) + return NULL; + if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0) + return NULL; + } else { + return NULL; } desc = dw_edma_alloc_desc(chan); @@ -368,18 +375,28 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (unlikely(!chunk)) goto err_alloc; - src_addr = chan->config.src_addr; - dst_addr = chan->config.dst_addr; + if (xfer->type == EDMA_XFER_INTERLEAVED) { + src_addr = xfer->xfer.il->src_start; + dst_addr = xfer->xfer.il->dst_start; + } else { + src_addr = chan->config.src_addr; + dst_addr = chan->config.dst_addr; + } - if (xfer->cyclic) { + if (xfer->type == EDMA_XFER_CYCLIC) { cnt = xfer->xfer.cyclic.cnt; - } else { + } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { cnt = xfer->xfer.sg.len; sg = xfer->xfer.sg.sgl; + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { + if (xfer->xfer.il->numf > 0) + cnt = xfer->xfer.il->numf; + else + cnt = xfer->xfer.il->frame_size; } for (i = 0; i < cnt; i++) { - if (!xfer->cyclic && !sg) + if (xfer->type == EDMA_XFER_SCATTER_GATHER && !sg) break; if (chunk->bursts_alloc == chan->ll_max) { @@ -392,19 +409,21 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (unlikely(!burst)) goto err_alloc; - if (xfer->cyclic) + if (xfer->type == EDMA_XFER_CYCLIC) burst->sz = xfer->xfer.cyclic.len; - else + else if (xfer->type == EDMA_XFER_SCATTER_GATHER) burst->sz = sg_dma_len(sg); + else if (xfer->type == EDMA_XFER_INTERLEAVED) + burst->sz = xfer->xfer.il->sgl[i].size; chunk->ll_region.sz += burst->sz; desc->alloc_sz += burst->sz; if (chan->dir == EDMA_DIR_WRITE) { burst->sar = src_addr; - if (xfer->cyclic) { + if (xfer->type == EDMA_XFER_CYCLIC) { burst->dar = xfer->xfer.cyclic.paddr; - } else { + } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { burst->dar = dst_addr; /* Unlike the typical assumption by other * drivers/IPs the peripheral memory isn't @@ -416,9 +435,9 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) } } else { burst->dar = dst_addr; - if (xfer->cyclic) { + if (xfer->type == EDMA_XFER_CYCLIC) { burst->sar = xfer->xfer.cyclic.paddr; - } else { + } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { burst->sar = src_addr; /* Unlike the typical assumption by other * drivers/IPs the peripheral memory isn't @@ -430,10 +449,24 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) } } - if (!xfer->cyclic) { + if (xfer->type == EDMA_XFER_SCATTER_GATHER) { 
src_addr += sg_dma_len(sg); dst_addr += sg_dma_len(sg); sg = sg_next(sg); + } else if (xfer->type == EDMA_XFER_INTERLEAVED && + xfer->xfer.il->frame_size > 0) { + struct dma_interleaved_template *il = xfer->xfer.il; + struct data_chunk *dc = &il->sgl[i]; + + if (il->src_sgl) { + src_addr += burst->sz; + src_addr += dmaengine_get_src_icg(il, dc); + } + + if (il->dst_sgl) { + dst_addr += burst->sz; + dst_addr += dmaengine_get_dst_icg(il, dc); + } } } @@ -459,7 +492,7 @@ dw_edma_device_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl, xfer.xfer.sg.sgl = sgl; xfer.xfer.sg.len = len; xfer.flags = flags; - xfer.cyclic = false; + xfer.type = EDMA_XFER_SCATTER_GATHER; return dw_edma_device_transfer(&xfer); } @@ -478,7 +511,23 @@ dw_edma_device_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t paddr, xfer.xfer.cyclic.len = len; xfer.xfer.cyclic.cnt = count; xfer.flags = flags; - xfer.cyclic = true; + xfer.type = EDMA_XFER_CYCLIC; + + return dw_edma_device_transfer(&xfer); +} + +static struct dma_async_tx_descriptor * +dw_edma_device_prep_interleaved_dma(struct dma_chan *dchan, + struct dma_interleaved_template *ilt, + unsigned long flags) +{ + struct dw_edma_transfer xfer; + + xfer.dchan = dchan; + xfer.direction = ilt->dir; + xfer.xfer.il = ilt; + xfer.flags = flags; + xfer.type = EDMA_XFER_INTERLEAVED; return dw_edma_device_transfer(&xfer); } @@ -738,6 +787,7 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, dma_cap_set(DMA_SLAVE, dma->cap_mask); dma_cap_set(DMA_CYCLIC, dma->cap_mask); dma_cap_set(DMA_PRIVATE, dma->cap_mask); + dma_cap_set(DMA_INTERLEAVE, dma->cap_mask); dma->directions = BIT(write ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV); dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); @@ -756,6 +806,7 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, dma->device_tx_status = dw_edma_device_tx_status; dma->device_prep_slave_sg = dw_edma_device_prep_slave_sg; dma->device_prep_dma_cyclic = dw_edma_device_prep_dma_cyclic; + dma->device_prep_interleaved_dma = dw_edma_device_prep_interleaved_dma; dma_set_max_seg_size(dma->dev, U32_MAX); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 3f9593e..f72ebaa 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -39,6 +39,12 @@ enum dw_edma_status { EDMA_ST_BUSY }; +enum dw_edma_xfer_type { + EDMA_XFER_SCATTER_GATHER = 0, + EDMA_XFER_CYCLIC, + EDMA_XFER_INTERLEAVED +}; + struct dw_edma_chan; struct dw_edma_chunk; @@ -146,12 +152,13 @@ struct dw_edma_cyclic { struct dw_edma_transfer { struct dma_chan *dchan; union dw_edma_xfer { - struct dw_edma_sg sg; - struct dw_edma_cyclic cyclic; + struct dw_edma_sg sg; + struct dw_edma_cyclic cyclic; + struct dma_interleaved_template *il; } xfer; enum dma_transfer_direction direction; unsigned long flags; - bool cyclic; + enum dw_edma_xfer_type type; }; static inline From patchwork Tue Dec 15 17:30:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975409 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT 
From: Gustavo Pimentel To: Gustavo Pimentel , Dan Williams , Vinod Koul Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 07/15] dmaengine: dw-edma: Improve number of channels check Date: Tue, 15 Dec 2020 18:30:16 +0100 Message-Id: <2d184478b75daf7e829671a14350ae6da5454929.1608053262.git.gustavo.pimentel@synopsys.com>

Add extra checks to ensure that the driver does not try to use more DMA channels than are actually available in the hardware.
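For readers skimming the diff below, the new check amounts to clamping the pre-configured channel count against both the count reported by the hardware and the driver's compile-time maximum. A minimal sketch of that logic follows; the standalone helper and its name are illustrative only and are not part of the patch:

	#include <linux/kernel.h>	/* min_t() */

	#define EDMA_MAX_WR_CH	8	/* added to dw-edma-core.h by this patch */

	/* Clamp a pre-configured write channel count to what the hardware reports. */
	static u16 dw_edma_clamp_wr_ch_cnt(u16 configured, u16 hw_reported)
	{
		u16 cnt = min_t(u16, configured, hw_reported);

		return min_t(u16, cnt, EDMA_MAX_WR_CH);
	}

Note that a pre-configured count of zero now yields zero channels for that direction, which is why the probe only fails when both the write and the read counts end up at zero.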
Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 21 +++++++++------------ drivers/dma/dw-edma/dw-edma-core.h | 2 ++ 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 0fe3835..5495cf7 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -914,19 +914,16 @@ int dw_edma_probe(struct dw_edma_chip *chip) raw_spin_lock_init(&dw->lock); - if (!dw->wr_ch_cnt) { - /* Find out how many write channels are supported by hardware */ - dw->wr_ch_cnt = dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE); - if (!dw->wr_ch_cnt) - return -EINVAL; - } + dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt, + dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE)); + dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt, EDMA_MAX_WR_CH); - if (!dw->rd_ch_cnt) { - /* Find out how many read channels are supported by hardware */ - dw->rd_ch_cnt = dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ); - if (!dw->rd_ch_cnt) - return -EINVAL; - } + dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt, + dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ)); + dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt, EDMA_MAX_RD_CH); + + if (!dw->wr_ch_cnt && !dw->rd_ch_cnt) + return -EINVAL; dev_vdbg(dev, "Channels:\twrite=%d, read=%d\n", dw->wr_ch_cnt, dw->rd_ch_cnt); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index f72ebaa..650b1c7 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -15,6 +15,8 @@ #include "../virt-dma.h" #define EDMA_LL_SZ 24 +#define EDMA_MAX_WR_CH 8 +#define EDMA_MAX_RD_CH 8 enum dw_edma_dir { EDMA_DIR_WRITE = 0, From patchwork Tue Dec 15 17:30:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975405 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82A7FC4361B for ; Tue, 15 Dec 2020 17:41:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4860D22ADC for ; Tue, 15 Dec 2020 17:41:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730558AbgLORkB (ORCPT ); Tue, 15 Dec 2020 12:40:01 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.87.133]:46040 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731151AbgLORbn (ORCPT ); Tue, 15 Dec 2020 12:31:43 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id 91FA6C04D2; Tue, 15 Dec 2020 17:30:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053442; bh=VirXQZecPAlV36Ny+MWMBcT1Z6kF45ir/tilZlvCLqQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; 
b=lv5tBCI4fw0wYof+4i8E3ffjsZENiDEY+0qDTYV0aIjkkt0mH20HtlVDwusvoKRbp Bp1hPP4GUVVNxnUUo7vQQtZ6Gjdb6er0VpiEQSyV3KseSjBNiO+4lBsYw70IqO8kEs ng6u8gfPdpUnBbN4q1ri9Aw/mMAPjKH0Vy5VuUvhhmRNVD7CA5M3HBlPc2/ANU/+mr XcNiqZBJoRE3XlfvZRQvdRf4nFIPaU0RuTIAqSTkn9w0DlaMsoYsa8bTP7w7rfC2DE 25Jhk1dpUmecOg3CgpHWDMH5ZaK91joAocefQKxpEJFlCm7NhMrpxCf2iCPk+C8EvS o7nhHNJkEQPGA== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id 26D1AA024D; Tue, 15 Dec 2020 17:30:41 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Dan Williams , Vinod Koul Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 08/15] dmaengine: dw-edma: Reorder variables to keep consistency Date: Tue, 15 Dec 2020 18:30:17 +0100 Message-Id: <7126adcc8ffaa0a81caea7354a5eec4f6cd84e7c.1608053262.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org In the driver code structure, I tried to keep the code style consistency by writing the write channels instructions first, and then follow by the read channels instructions, mimicking the hardware implementation. However, this code style failed in some cases. This patch fixes that and no functional changes are expected. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-pcie.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index 2bd31b0..ebb2605 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -20,8 +20,8 @@ #define DW_PCIE_VSEC_DMA_ID 0x6 #define DW_PCIE_VSEC_DMA_BAR GENMASK(10, 8) #define DW_PCIE_VSEC_DMA_MAP GENMASK(2, 0) -#define DW_PCIE_VSEC_DMA_RD_CH GENMASK(25, 16) #define DW_PCIE_VSEC_DMA_WR_CH GENMASK(9, 0) +#define DW_PCIE_VSEC_DMA_RD_CH GENMASK(25, 16) struct dw_edma_pcie_data { /* eDMA registers location */ @@ -39,8 +39,8 @@ struct dw_edma_pcie_data { /* Other */ enum dw_edma_map_format mf; u8 irqs; - u16 rd_ch_cnt; u16 wr_ch_cnt; + u16 rd_ch_cnt; }; static const struct dw_edma_pcie_data snps_edda_data = { @@ -59,8 +59,8 @@ static const struct dw_edma_pcie_data snps_edda_data = { /* Other */ .mf = EDMA_MF_EDMA_UNROLL, .irqs = 1, - .rd_ch_cnt = 0, .wr_ch_cnt = 0, + .rd_ch_cnt = 0, }; static int dw_edma_pcie_irq_vector(struct device *dev, unsigned int nr) @@ -99,8 +99,8 @@ static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev, pdata->rg_bar = FIELD_GET(DW_PCIE_VSEC_DMA_BAR, val); pci_read_config_dword(pdev, vsec + 0xc, &val); - pdata->rd_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_RD_CH, val); pdata->wr_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_WR_CH, val); + pdata->rd_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_RD_CH, val); pci_read_config_dword(pdev, vsec + 0x14, &val); off = val; @@ -216,8 +216,8 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, dw->mf = vsec_data.mf; dw->nr_irqs = nr_irqs; dw->ops = &dw_edma_pcie_core_ops; - dw->rd_ch_cnt = vsec_data.rd_ch_cnt; dw->wr_ch_cnt = vsec_data.wr_ch_cnt; + dw->rd_ch_cnt = vsec_data.rd_ch_cnt; /* Debug info */ if (dw->mf == EDMA_MF_EDMA_LEGACY) From patchwork Tue Dec 15 17:30:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975403 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 09/15] dmaengine: dw-edma: Improve the linked list and data blocks definition Date: Tue, 15 Dec 2020 18:30:18 +0100 Message-Id:

In the previous implementation, the driver assumed that there were only two memory spaces, equally distributed among the write/read channels. This might not be the case on some other implementations; therefore, this patch changes that requirement so that each write/read channel has its own well-defined linked list and data space, which allows different sizes and locations.
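As a quick illustration of what the per-channel regions buy, the number of linked-list entries a channel can hold is now derived from that channel's own region size instead of from an evenly split shared area. A condensed sketch based on the hunks below, where j is the channel index within its direction and EDMA_LL_SZ (24 bytes) is the size of one linked-list element:

	if (write)
		chan->ll_max = (dw->ll_region_wr[j].sz / EDMA_LL_SZ) - 1;
	else
		chan->ll_max = (dw->ll_region_rd[j].sz / EDMA_LL_SZ) - 1;

For example, a 0x00200000 (2 Mbyte) linked-list region yields 0x200000 / 24 - 1 = 87380 usable entries, and regions of different sizes per channel are now equally acceptable.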
Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 51 +++++----- drivers/dma/dw-edma/dw-edma-core.h | 9 +- drivers/dma/dw-edma/dw-edma-pcie.c | 185 ++++++++++++++++++++++++++----------- 3 files changed, 160 insertions(+), 85 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 5495cf7..48887b5 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -81,8 +81,13 @@ static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc) * - Even chunks originate CB equal to 1 */ chunk->cb = !(desc->chunks_alloc % 2); - chunk->ll_region.paddr = dw->ll_region.paddr + chan->ll_off; - chunk->ll_region.vaddr = dw->ll_region.vaddr + chan->ll_off; + if (chan->dir == EDMA_DIR_WRITE) { + chunk->ll_region.paddr = dw->ll_region_wr[chan->id].paddr; + chunk->ll_region.vaddr = dw->ll_region_wr[chan->id].vaddr; + } else { + chunk->ll_region.paddr = dw->ll_region_rd[chan->id].paddr; + chunk->ll_region.vaddr = dw->ll_region_rd[chan->id].vaddr; + } if (desc->chunk) { /* Create and add new element into the linked list */ @@ -691,24 +696,13 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, struct device *dev = chip->dev; struct dw_edma *dw = chip->dw; struct dw_edma_chan *chan; - size_t ll_chunk, dt_chunk; struct dw_edma_irq *irq; struct dma_device *dma; - u32 i, j, cnt, ch_cnt; u32 alloc, off_alloc; + u32 i, j, cnt; int err = 0; u32 pos; - ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt; - ll_chunk = dw->ll_region.sz; - dt_chunk = dw->dt_region.sz; - - /* Calculate linked list chunk for each channel */ - ll_chunk /= roundup_pow_of_two(ch_cnt); - - /* Calculate linked list chunk for each channel */ - dt_chunk /= roundup_pow_of_two(ch_cnt); - if (write) { i = 0; cnt = dw->wr_ch_cnt; @@ -740,14 +734,14 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, chan->request = EDMA_REQ_NONE; chan->status = EDMA_ST_IDLE; - chan->ll_off = (ll_chunk * i); - chan->ll_max = (ll_chunk / EDMA_LL_SZ) - 1; - - chan->dt_off = (dt_chunk * i); + if (write) + chan->ll_max = (dw->ll_region_wr[j].sz / EDMA_LL_SZ); + else + chan->ll_max = (dw->ll_region_rd[j].sz / EDMA_LL_SZ); + chan->ll_max -= 1; - dev_vdbg(dev, "L. List:\tChannel %s[%u] off=0x%.8lx, max_cnt=%u\n", - write ? "write" : "read", j, - chan->ll_off, chan->ll_max); + dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n", + write ? "write" : "read", j, chan->ll_max); if (dw->nr_irqs == 1) pos = 0; @@ -772,12 +766,15 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, chan->vc.desc_free = vchan_free_desc; vchan_init(&chan->vc, dma); - dt_region->paddr = dw->dt_region.paddr + chan->dt_off; - dt_region->vaddr = dw->dt_region.vaddr + chan->dt_off; - dt_region->sz = dt_chunk; - - dev_vdbg(dev, "Data:\tChannel %s[%u] off=0x%.8lx\n", - write ? 
"write" : "read", j, chan->dt_off); + if (write) { + dt_region->paddr = dw->dt_region_wr[j].paddr; + dt_region->vaddr = dw->dt_region_wr[j].vaddr; + dt_region->sz = dw->dt_region_wr[j].sz; + } else { + dt_region->paddr = dw->dt_region_rd[j].paddr; + dt_region->vaddr = dw->dt_region_rd[j].vaddr; + dt_region->sz = dw->dt_region_rd[j].sz; + } dw_edma_v0_core_device_config(chan); } diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 650b1c7..cba5436 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -91,11 +91,8 @@ struct dw_edma_chan { int id; enum dw_edma_dir dir; - off_t ll_off; u32 ll_max; - off_t dt_off; - struct msi_msg msi; enum dw_edma_request request; @@ -126,8 +123,10 @@ struct dw_edma { u16 rd_ch_cnt; struct dw_edma_region rg_region; /* Registers */ - struct dw_edma_region ll_region; /* Linked list */ - struct dw_edma_region dt_region; /* Data */ + struct dw_edma_region ll_region_wr[EDMA_MAX_WR_CH]; + struct dw_edma_region ll_region_rd[EDMA_MAX_RD_CH]; + struct dw_edma_region dt_region_wr[EDMA_MAX_WR_CH]; + struct dw_edma_region dt_region_rd[EDMA_MAX_RD_CH]; struct dw_edma_irq *irq; int nr_irqs; diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index ebb2605..8c1b538 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -23,19 +23,28 @@ #define DW_PCIE_VSEC_DMA_WR_CH GENMASK(9, 0) #define DW_PCIE_VSEC_DMA_RD_CH GENMASK(25, 16) +#define DW_BLOCK(a, b, c) \ + { \ + .bar = a, \ + .off = b, \ + .sz = c, \ + }, + +struct dw_edma_block { + enum pci_barno bar; + off_t off; + size_t sz; +}; + struct dw_edma_pcie_data { /* eDMA registers location */ - enum pci_barno rg_bar; - off_t rg_off; - size_t rg_sz; + struct dw_edma_block rg; /* eDMA memory linked list location */ - enum pci_barno ll_bar; - off_t ll_off; - size_t ll_sz; + struct dw_edma_block ll_wr[EDMA_MAX_WR_CH]; + struct dw_edma_block ll_rd[EDMA_MAX_RD_CH]; /* eDMA memory data location */ - enum pci_barno dt_bar; - off_t dt_off; - size_t dt_sz; + struct dw_edma_block dt_wr[EDMA_MAX_WR_CH]; + struct dw_edma_block dt_rd[EDMA_MAX_RD_CH]; /* Other */ enum dw_edma_map_format mf; u8 irqs; @@ -45,22 +54,40 @@ struct dw_edma_pcie_data { static const struct dw_edma_pcie_data snps_edda_data = { /* eDMA registers location */ - .rg_bar = BAR_0, - .rg_off = 0x00001000, /* 4 Kbytes */ - .rg_sz = 0x00002000, /* 8 Kbytes */ + .rg.bar = BAR_0, + .rg.off = 0x00001000, /* 4 Kbytes */ + .rg.sz = 0x00002000, /* 8 Kbytes */ /* eDMA memory linked list location */ - .ll_bar = BAR_2, - .ll_off = 0x00000000, /* 0 Kbytes */ - .ll_sz = 0x00800000, /* 8 Mbytes */ + .ll_wr = { + /* Channel 0 - BAR 2, offset 0 Mbytes, size 2 Mbytes */ + DW_BLOCK(BAR_2, 0x00000000, 0x00200000) + /* Channel 1 - BAR 2, offset 2 Mbytes, size 2 Mbytes */ + DW_BLOCK(BAR_2, 0x00200000, 0x00200000) + }, + .ll_rd = { + /* Channel 0 - BAR 2, offset 4 Mbytes, size 2 Mbytes */ + DW_BLOCK(BAR_2, 0x00400000, 0x00200000) + /* Channel 1 - BAR 2, offset 6 Mbytes, size 2 Mbytes */ + DW_BLOCK(BAR_2, 0x00600000, 0x00200000) + }, /* eDMA memory data location */ - .dt_bar = BAR_2, - .dt_off = 0x00800000, /* 8 Mbytes */ - .dt_sz = 0x03800000, /* 56 Mbytes */ + .dt_wr = { + /* Channel 0 - BAR 2, offset 8 Mbytes, size 14 Mbytes */ + DW_BLOCK(BAR_2, 0x00800000, 0x00e00000) + /* Channel 1 - BAR 2, offset 22 Mbytes, size 14 Mbytes */ + DW_BLOCK(BAR_2, 0x01600000, 0x00e00000) + }, + .dt_rd = { + /* Channel 0 - BAR 2, offset 36 Mbytes, size 14 Mbytes */ + 
DW_BLOCK(BAR_2, 0x02400000, 0x00e00000) + /* Channel 1 - BAR 2, offset 50 Mbytes, size 14 Mbytes */ + DW_BLOCK(BAR_2, 0x03200000, 0x00e00000) + }, /* Other */ .mf = EDMA_MF_EDMA_UNROLL, .irqs = 1, - .wr_ch_cnt = 0, - .rd_ch_cnt = 0, + .wr_ch_cnt = 2, + .rd_ch_cnt = 2, }; static int dw_edma_pcie_irq_vector(struct device *dev, unsigned int nr) @@ -96,18 +123,20 @@ static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev, return; pdata->mf = map; - pdata->rg_bar = FIELD_GET(DW_PCIE_VSEC_DMA_BAR, val); + pdata->rg.bar = FIELD_GET(DW_PCIE_VSEC_DMA_BAR, val); pci_read_config_dword(pdev, vsec + 0xc, &val); - pdata->wr_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_WR_CH, val); - pdata->rd_ch_cnt = FIELD_GET(DW_PCIE_VSEC_DMA_RD_CH, val); + pdata->wr_ch_cnt = min_t(u16, pdata->wr_ch_cnt, + FIELD_GET(DW_PCIE_VSEC_DMA_WR_CH, val)); + pdata->rd_ch_cnt = min_t(u16, pdata->rd_ch_cnt, + FIELD_GET(DW_PCIE_VSEC_DMA_RD_CH, val)); pci_read_config_dword(pdev, vsec + 0x14, &val); off = val; pci_read_config_dword(pdev, vsec + 0x10, &val); off <<= 32; off |= val; - pdata->rg_off = off; + pdata->rg.off = off; } static int dw_edma_pcie_probe(struct pci_dev *pdev, @@ -119,6 +148,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, struct dw_edma_chip *chip; struct dw_edma *dw; int err, nr_irqs; + int i, mask; /* Enable PCI device */ err = pcim_enable_device(pdev); @@ -136,10 +166,16 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, dw_edma_pcie_get_vsec_dma_data(pdev, &vsec_data); /* Mapping PCI BAR regions */ - err = pcim_iomap_regions(pdev, BIT(vsec_data.rg_bar) | - BIT(vsec_data.ll_bar) | - BIT(vsec_data.dt_bar), - pci_name(pdev)); + mask = BIT(vsec_data.rg.bar); + for (i = 0; i < vsec_data.wr_ch_cnt; i++) { + mask |= BIT(vsec_data.ll_wr[i].bar); + mask |= BIT(vsec_data.dt_wr[i].bar); + } + for (i = 0; i < vsec_data.rd_ch_cnt; i++) { + mask |= BIT(vsec_data.ll_rd[i].bar); + mask |= BIT(vsec_data.dt_rd[i].bar); + } + err = pcim_iomap_regions(pdev, mask, pci_name(pdev)); if (err) { pci_err(pdev, "eDMA BAR I/O remapping failed\n"); return err; @@ -195,30 +231,56 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, chip->id = pdev->devfn; chip->irq = pdev->irq; - dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg_bar]; - dw->rg_region.vaddr += vsec_data.rg_off; - dw->rg_region.paddr = pdev->resource[vsec_data.rg_bar].start; - dw->rg_region.paddr += vsec_data.rg_off; - dw->rg_region.sz = vsec_data.rg_sz; - - dw->ll_region.vaddr = pcim_iomap_table(pdev)[vsec_data.ll_bar]; - dw->ll_region.vaddr += vsec_data.ll_off; - dw->ll_region.paddr = pdev->resource[vsec_data.ll_bar].start; - dw->ll_region.paddr += vsec_data.ll_off; - dw->ll_region.sz = vsec_data.ll_sz; - - dw->dt_region.vaddr = pcim_iomap_table(pdev)[vsec_data.dt_bar]; - dw->dt_region.vaddr += vsec_data.dt_off; - dw->dt_region.paddr = pdev->resource[vsec_data.dt_bar].start; - dw->dt_region.paddr += vsec_data.dt_off; - dw->dt_region.sz = vsec_data.dt_sz; - dw->mf = vsec_data.mf; dw->nr_irqs = nr_irqs; dw->ops = &dw_edma_pcie_core_ops; dw->wr_ch_cnt = vsec_data.wr_ch_cnt; dw->rd_ch_cnt = vsec_data.rd_ch_cnt; + dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar]; + dw->rg_region.vaddr += vsec_data.rg.off; + dw->rg_region.paddr = pdev->resource[vsec_data.rg.bar].start; + dw->rg_region.paddr += vsec_data.rg.off; + dw->rg_region.sz = vsec_data.rg.sz; + + for (i = 0; i < dw->wr_ch_cnt; i++) { + struct dw_edma_region *ll_region = &dw->ll_region_wr[i]; + struct dw_edma_region *dt_region = &dw->dt_region_wr[i]; + struct dw_edma_block *ll_block = 
&vsec_data.ll_wr[i]; + struct dw_edma_block *dt_block = &vsec_data.dt_wr[i]; + + ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar]; + ll_region->vaddr += ll_block->off; + ll_region->paddr = pdev->resource[ll_block->bar].start; + ll_region->paddr += ll_block->off; + ll_region->sz = ll_block->sz; + + dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar]; + dt_region->vaddr += dt_block->off; + dt_region->paddr = pdev->resource[dt_block->bar].start; + dt_region->paddr += dt_block->off; + dt_region->sz = dt_block->sz; + } + + for (i = 0; i < dw->rd_ch_cnt; i++) { + struct dw_edma_region *ll_region = &dw->ll_region_rd[i]; + struct dw_edma_region *dt_region = &dw->dt_region_rd[i]; + struct dw_edma_block *ll_block = &vsec_data.ll_rd[i]; + struct dw_edma_block *dt_block = &vsec_data.dt_rd[i]; + + ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar]; + ll_region->vaddr += ll_block->off; + ll_region->paddr = pdev->resource[ll_block->bar].start; + ll_region->paddr += ll_block->off; + ll_region->sz = ll_block->sz; + + dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar]; + dt_region->vaddr += dt_block->off; + dt_region->paddr = pdev->resource[dt_block->bar].start; + dt_region->paddr += dt_block->off; + dt_region->sz = dt_block->sz; + } + /* Debug info */ if (dw->mf == EDMA_MF_EDMA_LEGACY) pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", dw->mf); @@ -230,16 +292,33 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", dw->mf); pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", - vsec_data.rg_bar, vsec_data.rg_off, vsec_data.rg_sz, + vsec_data.rg.bar, vsec_data.rg.off, vsec_data.rg.sz, dw->rg_region.vaddr, &dw->rg_region.paddr); - pci_dbg(pdev, "L. List:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", - vsec_data.ll_bar, vsec_data.ll_off, vsec_data.ll_sz, - dw->ll_region.vaddr, &dw->ll_region.paddr); - pci_dbg(pdev, "Data:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", - vsec_data.dt_bar, vsec_data.dt_off, vsec_data.dt_sz, - dw->dt_region.vaddr, &dw->dt_region.paddr); + for (i = 0; i < dw->wr_ch_cnt; i++) { + pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", + i, vsec_data.ll_wr[i].bar, + vsec_data.ll_wr[i].off, dw->ll_region_wr[i].sz, + dw->ll_region_wr[i].vaddr, &dw->ll_region_wr[i].paddr); + + pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", + i, vsec_data.dt_wr[i].bar, + vsec_data.dt_wr[i].off, dw->dt_region_wr[i].sz, + dw->dt_region_wr[i].vaddr, &dw->dt_region_wr[i].paddr); + } + + for (i = 0; i < dw->rd_ch_cnt; i++) { + pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", + i, vsec_data.ll_rd[i].bar, + vsec_data.ll_rd[i].off, dw->ll_region_rd[i].sz, + dw->ll_region_rd[i].vaddr, &dw->ll_region_rd[i].paddr); + + pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", + i, vsec_data.dt_rd[i].bar, + vsec_data.dt_rd[i].off, dw->dt_region_rd[i].sz, + dw->dt_region_rd[i].vaddr, &dw->dt_region_rd[i].paddr); + } pci_dbg(pdev, "Nr. 
IRQs:\t%u\n", dw->nr_irqs); From patchwork Tue Dec 15 17:30:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975395 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92402C2BB48 for ; Tue, 15 Dec 2020 17:36:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 38EDF2255F for ; Tue, 15 Dec 2020 17:36:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731018AbgLORbr (ORCPT ); Tue, 15 Dec 2020 12:31:47 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.87.133]:46048 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731163AbgLORbp (ORCPT ); Tue, 15 Dec 2020 12:31:45 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id 5F5C5C04D4; Tue, 15 Dec 2020 17:30:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053444; bh=2zvlnfO8mEL0+jPkJwBkzuRWJMiljFKkus2w4yewnww=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=CjhLUsSFguuRH+xVBd7BDiykucwKI0FLFQXzcCstuV7dhoMBgkHV4HBRVOv5stFio NKovK4magjehLAphUl0KGxsggOlSlJoEWc/1yJ6hlQfdknVjWEGhTt5UntYX9cn2N7 skgcT/6K22SutV4ia6pHx60ArbpAqKbrBy/yd5IUJtttdMSHCyCtBGGhv9MChqsYPN k7wKQPi84ZUoAH8d309QFaIRWFHsfGTkVDR10ORFaABwKH5QyGNBN23v8Ui5IKAEaM D710WuUs/KLAuP6yMrCuLI1sOoYGNXFRIyd1QbmBh/+45eAd1k4YLg+sJ9NAfdsyJn 62wRcbwODu1jw== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id E96E4A024A; Tue, 15 Dec 2020 17:30:42 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Dan Williams , Vinod Koul Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 10/15] dmaengine: dw-edma: Change linked list and data blocks offset and sizes Date: Tue, 15 Dec 2020 18:30:19 +0100 Message-Id: <0c547e7ce575e7059659a99be1ad64aed4614633.1608053262.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Changes the linked list and data blocks offset and sizes to follow the recommendation given by the hardware team for the IPK solution. Although the previous data blocks offset and sizes are still valid and functional, using them that might present some issues related to the IPK solution, since this solution is based on FPGA and might be subjected to timmings constrains. 
Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-pcie.c | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index 8c1b538..1fa43e3 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -59,29 +59,29 @@ static const struct dw_edma_pcie_data snps_edda_data = { .rg.sz = 0x00002000, /* 8 Kbytes */ /* eDMA memory linked list location */ .ll_wr = { - /* Channel 0 - BAR 2, offset 0 Mbytes, size 2 Mbytes */ - DW_BLOCK(BAR_2, 0x00000000, 0x00200000) - /* Channel 1 - BAR 2, offset 2 Mbytes, size 2 Mbytes */ - DW_BLOCK(BAR_2, 0x00200000, 0x00200000) + /* Channel 0 - BAR 2, offset 0 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00000000, 0x00000800) + /* Channel 1 - BAR 2, offset 2 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00200000, 0x00000800) }, .ll_rd = { - /* Channel 0 - BAR 2, offset 4 Mbytes, size 2 Mbytes */ - DW_BLOCK(BAR_2, 0x00400000, 0x00200000) - /* Channel 1 - BAR 2, offset 6 Mbytes, size 2 Mbytes */ - DW_BLOCK(BAR_2, 0x00600000, 0x00200000) + /* Channel 0 - BAR 2, offset 4 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00400000, 0x00000800) + /* Channel 1 - BAR 2, offset 6 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00600000, 0x00000800) }, /* eDMA memory data location */ .dt_wr = { - /* Channel 0 - BAR 2, offset 8 Mbytes, size 14 Mbytes */ - DW_BLOCK(BAR_2, 0x00800000, 0x00e00000) - /* Channel 1 - BAR 2, offset 22 Mbytes, size 14 Mbytes */ - DW_BLOCK(BAR_2, 0x01600000, 0x00e00000) + /* Channel 0 - BAR 2, offset 8 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00800000, 0x00000800) + /* Channel 1 - BAR 2, offset 9 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00900000, 0x00000800) }, .dt_rd = { - /* Channel 0 - BAR 2, offset 36 Mbytes, size 14 Mbytes */ - DW_BLOCK(BAR_2, 0x02400000, 0x00e00000) - /* Channel 1 - BAR 2, offset 50 Mbytes, size 14 Mbytes */ - DW_BLOCK(BAR_2, 0x03200000, 0x00e00000) + /* Channel 0 - BAR 2, offset 10 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00a00000, 0x00000800) + /* Channel 1 - BAR 2, offset 11 Mbytes, size 2 Kbytes */ + DW_BLOCK(BAR_2, 0x00b00000, 0x00000800) }, /* Other */ .mf = EDMA_MF_EDMA_UNROLL, From patchwork Tue Dec 15 17:30:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A866C4361B for ; Tue, 15 Dec 2020 17:36:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5165022ADC for ; Tue, 15 Dec 2020 17:36:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731302AbgLORbx (ORCPT ); Tue, 15 Dec 2020 12:31:53 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.73.133]:54438 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731232AbgLORbq (ORCPT ); Tue, 15 Dec 2020 12:31:46 -0500 
From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 11/15] dmaengine: dw-edma: Move struct dentry variable from static definition into dw_edma struct Date: Tue, 15 Dec 2020 18:30:20 +0100 Message-Id: <37b4deae93f8415a4172b90824121f51a756de59.1608053262.git.gustavo.pimentel@synopsys.com>

Move the struct dentry variable from a static definition (dw-edma-v0-debugfs.c) into the dw_edma struct (dw-edma-core.h). The variable was also renamed from base_dir to debugfs.
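In effect, the debugfs teardown now finds the directory through the device that owns it instead of through file-scope state; a condensed sketch of the resulting teardown path, mirroring the hunks below:

	void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip)
	{
		struct dw_edma *dw = chip->dw;

		if (!dw)
			return;

		/* The dentry lives in struct dw_edma, so remove it per device. */
		debugfs_remove_recursive(dw->debugfs);
		dw->debugfs = NULL;
	}

Storing the dentry per struct dw_edma presumably also keeps two probed eDMA instances from clobbering the single shared base_dir pointer that existed before.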
Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 2 +- drivers/dma/dw-edma/dw-edma-core.h | 3 +++ drivers/dma/dw-edma/dw-edma-v0-core.c | 4 ++-- drivers/dma/dw-edma/dw-edma-v0-core.h | 2 +- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 22 +++++++++++++--------- drivers/dma/dw-edma/dw-edma-v0-debugfs.h | 4 ++-- 6 files changed, 22 insertions(+), 15 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 48887b5..8d8292e 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -1003,7 +1003,7 @@ int dw_edma_remove(struct dw_edma_chip *chip) dma_async_device_unregister(&dw->rd_edma); /* Turn debugfs off */ - dw_edma_v0_core_debugfs_off(); + dw_edma_v0_core_debugfs_off(chip); return 0; } diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index cba5436..60316d4 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -137,6 +137,9 @@ struct dw_edma { const struct dw_edma_core_ops *ops; raw_spinlock_t lock; /* Only for legacy */ +#ifdef CONFIG_DEBUG_FS + struct dentry *debugfs; +#endif /* CONFIG_DEBUG_FS */ }; struct dw_edma_sg { diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 5b0541a..329fc2e 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -506,7 +506,7 @@ void dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip) dw_edma_v0_debugfs_on(chip); } -void dw_edma_v0_core_debugfs_off(void) +void dw_edma_v0_core_debugfs_off(struct dw_edma_chip *chip) { - dw_edma_v0_debugfs_off(); + dw_edma_v0_debugfs_off(chip); } diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.h b/drivers/dma/dw-edma/dw-edma-v0-core.h index abae152..2afa626 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.h +++ b/drivers/dma/dw-edma/dw-edma-v0-core.h @@ -23,6 +23,6 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first); int dw_edma_v0_core_device_config(struct dw_edma_chan *chan); /* eDMA debug fs callbacks */ void dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip); -void dw_edma_v0_core_debugfs_off(void); +void dw_edma_v0_core_debugfs_off(struct dw_edma_chip *chip); #endif /* _DW_EDMA_V0_CORE_H */ diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 157dfc2..4b3bcff 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -38,7 +38,6 @@ #define CHANNEL_STR "channel" #define REGISTERS_STR "registers" -static struct dentry *base_dir; static struct dw_edma *dw; static struct dw_edma_v0_regs __iomem *regs; @@ -272,7 +271,7 @@ static void dw_edma_debugfs_regs(void) struct dentry *regs_dir; int nr_entries; - regs_dir = debugfs_create_dir(REGISTERS_STR, base_dir); + regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs); if (!regs_dir) return; @@ -293,18 +292,23 @@ void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) if (!regs) return; - base_dir = debugfs_create_dir(dw->name, NULL); - if (!base_dir) + dw->debugfs = debugfs_create_dir(dw->name, NULL); + if (!dw->debugfs) return; - debugfs_create_u32("mf", 0444, base_dir, &dw->mf); - debugfs_create_u16("wr_ch_cnt", 0444, base_dir, &dw->wr_ch_cnt); - debugfs_create_u16("rd_ch_cnt", 0444, base_dir, &dw->rd_ch_cnt); + debugfs_create_u32("mf", 0444, dw->debugfs, &dw->mf); + debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); + debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt); 
dw_edma_debugfs_regs(); } -void dw_edma_v0_debugfs_off(void) +void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip) { - debugfs_remove_recursive(base_dir); + dw = chip->dw; + if (!dw) + return; + + debugfs_remove_recursive(dw->debugfs); + dw->debugfs = NULL; } diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h index 5450a0a..d0ff25a 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h @@ -13,13 +13,13 @@ #ifdef CONFIG_DEBUG_FS void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip); -void dw_edma_v0_debugfs_off(void); +void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip); #else static inline void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) { } -static inline void dw_edma_v0_debugfs_off(void) +static inline void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip) { } #endif /* CONFIG_DEBUG_FS */ From patchwork Tue Dec 15 17:30:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975399 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 75C5EC4361B for ; Tue, 15 Dec 2020 17:37:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3D9C72255F for ; Tue, 15 Dec 2020 17:37:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729457AbgLORgc (ORCPT ); Tue, 15 Dec 2020 12:36:32 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.73.133]:54442 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731289AbgLORbq (ORCPT ); Tue, 15 Dec 2020 12:31:46 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id 742E240476; Tue, 15 Dec 2020 17:30:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053446; bh=71t8kLn/h4uRT+rZrsVOIM9WjcoyjC97XsyNd4xo0/8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=HuUgsMsuLEO9vN/oLpFFd64Zo/Lspv/eMNDAnfOaiTxniSA8gQET3IunQiD4MI47z imXKsIWSBvrbZ4Vt1wpy79u9po+5cOCXW1rLSU2qAZycUTpifBGBKFoxei498HQ1MH ycCoMaDCwjOPo+KLoH78MOxhv8zvW8tAUFVnvdV8lqYE6VAOlU2poqbEOVSkEWNuDF mIVEGkGnO4JrvH1OIRvTFZ9EQOMBfhgsVOmYiA9e9SMoFvBbugdVwuy30ZStXVnHF9 h4vxbO0HxxCep37eabbglneohB+2GovV76N/p547o6431q5bEhGVpt+PZLzd44viFW cTqIAY4ejaj7g== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id 51D56A024A; Tue, 15 Dec 2020 17:30:45 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Dan Williams , Vinod Koul Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/15] dmaengine: dw-edma: Fix crash on loading/unloading driver Date: Tue, 15 Dec 2020 18:30:21 +0100 Message-Id: 
<615cabee7443a8f6fd054e6bf18c75985e81131c.1608053262.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org When the driver is compiled as a module and loaded if we try to unload it, the Kernel shows a crash log. This Kernel crash is due to the dma_async_device_unregister() call done after deleting the channels, this patch fixes this issue. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 8d8292e..f7a1930 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -986,22 +986,21 @@ int dw_edma_remove(struct dw_edma_chip *chip) /* Power management */ pm_runtime_disable(dev); + /* Deregister eDMA device */ + dma_async_device_unregister(&dw->wr_edma); list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels, vc.chan.device_node) { - list_del(&chan->vc.chan.device_node); tasklet_kill(&chan->vc.task); + list_del(&chan->vc.chan.device_node); } + dma_async_device_unregister(&dw->rd_edma); list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels, vc.chan.device_node) { - list_del(&chan->vc.chan.device_node); tasklet_kill(&chan->vc.task); + list_del(&chan->vc.chan.device_node); } - /* Deregister eDMA device */ - dma_async_device_unregister(&dw->wr_edma); - dma_async_device_unregister(&dw->rd_edma); - /* Turn debugfs off */ dw_edma_v0_core_debugfs_off(chip); From patchwork Tue Dec 15 17:30:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE040C2BBCA for ; Tue, 15 Dec 2020 17:32:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 98B6222B4B for ; Tue, 15 Dec 2020 17:32:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731317AbgLORcF (ORCPT ); Tue, 15 Dec 2020 12:32:05 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.87.133]:46054 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731294AbgLORbs (ORCPT ); Tue, 15 Dec 2020 12:31:48 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id 83BD0C04D7; Tue, 15 Dec 2020 17:30:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1608053447; bh=yp46md2f1aLH/kML4cPAiVttSwNoyzCG7t+Fe4rx+gc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=BT9Nm9/wdBndulKVY0glbXtCF06MvMwdoGS/OZXWAbLQtyLaoriJOEyZcHpfZDqUl MoDYyRa/Q5rMdFyfubpcxc0FvX0ytp7eGT5AaGVqzyUywrUs/S0jkTGZZOy75qY6v2 
QaS/zWtX9uCl8x1WKX1/iEGdrZOAuvX0TVikL2+QkXlzwqQvqqMYCAWd93iklSda70 FzsX5HlR0qVvH6H2wU1QERUUYi63F02y2mj94hryzb7whJSIyvfyG7KAKAoWrrfuBT hJy5lM3gTp4sWhoweAPSkaygNRaWIuWXmk2Aw/c0mfZ8Y0fj8BmhO7HCBnqR1d9IHh IEhWxnmGy2A3w== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id 19361A024D; Tue, 15 Dec 2020 17:30:46 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 13/15] dmaengine: dw-edma: Change DMA abreviation from lower into upper case Date: Tue, 15 Dec 2020 18:30:22 +0100 Message-Id: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org To keep code consistent, some comments with dma keyword written in lower case are now in upper case. Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index f7a1930..a299eed 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -341,15 +341,15 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) return NULL; switch (chan->config.direction) { - case DMA_DEV_TO_MEM: /* local dma */ + case DMA_DEV_TO_MEM: /* local DMA */ if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_READ) break; return NULL; - case DMA_MEM_TO_DEV: /* local dma */ + case DMA_MEM_TO_DEV: /* local DMA */ if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_WRITE) break; return NULL; - default: /* remote dma */ + default: /* remote DMA */ if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_READ) break; if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_WRITE) From patchwork Tue Dec 15 17:30:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AED9CC2BB9A for ; Tue, 15 Dec 2020 17:32:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7899822B43 for ; Tue, 15 Dec 2020 17:32:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731313AbgLORcE (ORCPT ); Tue, 15 Dec 2020 12:32:04 -0500 Received: from smtprelay-out1.synopsys.com ([149.117.87.133]:46062 "EHLO smtprelay-out1.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731296AbgLORbt (ORCPT ); Tue, 15 Dec 2020 12:31:49 -0500 Received: from mailhost.synopsys.com (mdc-mailhost1.synopsys.com [10.225.0.209]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by smtprelay-out1.synopsys.com (Postfix) with ESMTPS id 3BEC8C04D9; Tue, 15 Dec 2020 17:30:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; 
t=1608053448; bh=WnHPY+sRUSSJJQq7Jc0VtKP64lY4iaChTFLji7S/MT0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=Tw7puXCYNl+bk6YvaewpFS6pQVFi1Qw5ZiCtKQA4QrNAEQzF/f55ioOwaT5QkU132 3zj/KzuxeQY3C5y44F4k6FUqu4vuCjZCNqeAxJucM82PqiK+FRxJ4Tz11p/qgY6DFd bKpet08cZL71sYA+umBoFQQz0dIk2tQqiLLhi++Mp4WiDUEoF5+GOy8SV5uioDx45m zGhIcDfbJc/kG/0Mjsv/oTO7PbwORB2zmQF6jbQngl9dlnkm+rEMY3nBzMCo5bq7zk FRPPtSCflOjxHU9unimmhY74ctJN0cKKQP9RWTSKd/macPZikj3HvFVTKTw46e89bB OFMnZOqu7XBVw== Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by mailhost.synopsys.com (Postfix) with ESMTP id CFE07A005C; Tue, 15 Dec 2020 17:30:46 +0000 (UTC) X-SNPS-Relay: synopsys.com From: Gustavo Pimentel To: Gustavo Pimentel , Vinod Koul , Dan Williams Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 14/15] dmaengine: dw-edma: Revert fix scatter-gather address calculation Date: Tue, 15 Dec 2020 18:30:23 +0100 Message-Id: <4ef952dc04b2831f82b63ec8becd7b04a93a84ca.1608053262.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Reverting the applied patch because it caused a regression on ARC700 platform (32 bits). Signed-off-by: Gustavo Pimentel --- drivers/dma/dw-edma/dw-edma-core.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index a299eed..c198451 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -429,7 +429,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (xfer->type == EDMA_XFER_CYCLIC) { burst->dar = xfer->xfer.cyclic.paddr; } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { - burst->dar = dst_addr; + src_addr += sg_dma_len(sg); + burst->dar = sg_dma_address(sg); /* Unlike the typical assumption by other * drivers/IPs the peripheral memory isn't * a FIFO memory, in this case, it's a @@ -443,7 +444,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (xfer->type == EDMA_XFER_CYCLIC) { burst->sar = xfer->xfer.cyclic.paddr; } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { - burst->sar = src_addr; + dst_addr += sg_dma_len(sg); + burst->sar = sg_dma_address(sg); /* Unlike the typical assumption by other * drivers/IPs the peripheral memory isn't * a FIFO memory, in this case, it's a @@ -455,8 +457,6 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) } if (xfer->type == EDMA_XFER_SCATTER_GATHER) { - src_addr += sg_dma_len(sg); - dst_addr += sg_dma_len(sg); sg = sg_next(sg); } else if (xfer->type == EDMA_XFER_INTERLEAVED && xfer->xfer.il->frame_size > 0) { From patchwork Tue Dec 15 17:30:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 11975385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96806C2BB48 for ; Tue, 15 Dec 2020 17:32:20 +0000 (UTC) Received: 
From patchwork Tue Dec 15 17:30:24 2020
X-Patchwork-Submitter: Gustavo Pimentel
X-Patchwork-Id: 11975385
From: Gustavo Pimentel
To: Gustavo Pimentel, Vinod Koul, Dan Williams
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 15/15] dmaengine: dw-edma: Add pcim_iomap_table return checker
Date: Tue, 15 Dec 2020 18:30:24 +0100
Message-Id: <6df68db09ac19b2be8559e848497714ee763d5bd.1608053262.git.gustavo.pimentel@synopsys.com>

Check the addresses obtained from pcim_iomap_table() before using them, so
that an unmapped BAR does not lead to a NULL pointer dereference.

Detected by CoverityScan, CID 16555 ("Dereference null return")

Signed-off-by: Gustavo Pimentel
---
 drivers/dma/dw-edma/dw-edma-pcie.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c
index 1fa43e3..b759796 100644
--- a/drivers/dma/dw-edma/dw-edma-pcie.c
+++ b/drivers/dma/dw-edma/dw-edma-pcie.c
@@ -238,6 +238,9 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 	dw->rd_ch_cnt = vsec_data.rd_ch_cnt;
 
 	dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar];
+	if (!dw->rg_region.vaddr)
+		return -ENOMEM;
+
 	dw->rg_region.vaddr += vsec_data.rg.off;
 	dw->rg_region.paddr = pdev->resource[vsec_data.rg.bar].start;
 	dw->rg_region.paddr += vsec_data.rg.off;
@@ -250,12 +253,18 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 		struct dw_edma_block *dt_block = &vsec_data.dt_wr[i];
 
 		ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar];
+		if (!ll_region->vaddr)
+			return -ENOMEM;
+
 		ll_region->vaddr += ll_block->off;
 		ll_region->paddr = pdev->resource[ll_block->bar].start;
 		ll_region->paddr += ll_block->off;
 		ll_region->sz = ll_block->sz;
 
 		dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar];
+		if (!dt_region->vaddr)
+			return -ENOMEM;
+
 		dt_region->vaddr += dt_block->off;
 		dt_region->paddr = pdev->resource[dt_block->bar].start;
 		dt_region->paddr += dt_block->off;
@@ -269,12 +278,18 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 		struct dw_edma_block *dt_block = &vsec_data.dt_rd[i];
 
 		ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar];
+		if (!ll_region->vaddr)
+			return -ENOMEM;
+
 		ll_region->vaddr += ll_block->off;
 		ll_region->paddr = pdev->resource[ll_block->bar].start;
 		ll_region->paddr += ll_block->off;
 		ll_region->sz = ll_block->sz;
 
 		dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar];
+		if (!dt_region->vaddr)
+			return -ENOMEM;
+
 		dt_region->vaddr += dt_block->off;
 		dt_region->paddr = pdev->resource[dt_block->bar].start;
 		dt_region->paddr += dt_block->off;
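The same three-line check is repeated for each BAR mapping in the hunks
above. A condensed sketch of the idea is shown below; it uses the real
pcim_iomap_table() API, but the helper name dw_edma_pcie_get_bar_addr() is
invented purely for illustration and does not exist in the driver.

#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/types.h>

/*
 * Illustrative helper, not part of the driver: fetch the iomapped address
 * of a BAR from pcim_iomap_table() and bail out with -ENOMEM, exactly as
 * the patch above does inline, when that BAR has no mapping.
 */
static int dw_edma_pcie_get_bar_addr(struct pci_dev *pdev, unsigned int bar,
				     u64 off, void __iomem **addr)
{
	void __iomem *base = pcim_iomap_table(pdev)[bar];

	if (!base)
		return -ENOMEM;

	*addr = base + off;
	return 0;
}

In the driver the check stays inline: once for the register BAR and once per
linked-list and data BAR in the write and read channel loops, which accounts
for the 15 inserted lines.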