From patchwork Wed Mar 9 21:11:57 2022
X-Patchwork-Submitter: Frank Li
X-Patchwork-Id: 12775618
From: Frank Li
To: gustavo.pimentel@synopsys.com, hongxing.zhu@nxp.com, l.stach@pengutronix.de, linux-imx@nxp.com, linux-pci@vger.kernel.org, dmaengine@vger.kernel.org, fancer.lancer@gmail.com, lznuaa@gmail.com
Cc: vkoul@kernel.org,
    lorenzo.pieralisi@arm.com, robh@kernel.org, kw@linux.com, bhelgaas@google.com, shawnguo@kernel.org, manivannan.sadhasivam@linaro.org
Subject: [PATCH v4 1/8] dmaengine: dw-edma: Detach the private data and chip info structures
Date: Wed, 9 Mar 2022 15:11:57 -0600
Message-Id: <20220309211204.26050-2-Frank.Li@nxp.com>
In-Reply-To: <20220309211204.26050-1-Frank.Li@nxp.com>
References: <20220309211204.26050-1-Frank.Li@nxp.com>
X-Mailing-List: linux-pci@vger.kernel.org

"struct dw_edma_chip" contains an internal structure "struct dw_edma" that is used by the eDMA core internally. This structure should not be touched by the eDMA controller drivers themselves. But currently, eDMA controller drivers such as "dw-edma-pcie" allocate and populate this internal structure before passing it on to the eDMA core, which then further populates and uses it. This is wrong! Hence, move all the "struct dw_edma" specifics from the controller drivers to the eDMA core.
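With this change a controller driver only fills in the public "struct dw_edma_chip" and lets dw_edma_probe() allocate the internal "struct dw_edma". Roughly, the dw-edma-pcie probe path below reduces to the following sketch (error handling and region setup trimmed):

	chip->dev = dev;
	chip->id = pdev->devfn;
	chip->mf = vsec_data.mf;
	chip->nr_irqs = nr_irqs;
	chip->ops = &dw_edma_pcie_core_ops;
	chip->wr_ch_cnt = vsec_data.wr_ch_cnt;
	chip->rd_ch_cnt = vsec_data.rd_ch_cnt;
	chip->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar];
	/* ... fill chip->ll_region_wr/rd[] and chip->dt_region_wr/rd[] ... */
	err = dw_edma_probe(chip);	/* the core allocates struct dw_edma itself */
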
Signed-off-by: Frank Li --- Change from v3 to v4 - Accept most suggestions of Serge Semin Change from v2 to v3 - none Change from v1 to v2 - rework commit message - remove duplicate field in struct dw_edma drivers/dma/dw-edma/dw-edma-core.c | 81 +++++++++++++---------- drivers/dma/dw-edma/dw-edma-core.h | 32 +-------- drivers/dma/dw-edma/dw-edma-pcie.c | 83 ++++++++++-------------- drivers/dma/dw-edma/dw-edma-v0-core.c | 24 +++---- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 10 +-- include/linux/dma/edma.h | 44 +++++++++++++ 6 files changed, 144 insertions(+), 130 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 53289927dd0d6..1abf41d49f75b 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -65,7 +65,7 @@ static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk) static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc) { struct dw_edma_chan *chan = desc->chan; - struct dw_edma *dw = chan->chip->dw; + struct dw_edma_chip *chip = chan->dw->chip; struct dw_edma_chunk *chunk; chunk = kzalloc(sizeof(*chunk), GFP_NOWAIT); @@ -82,11 +82,11 @@ static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc) */ chunk->cb = !(desc->chunks_alloc % 2); if (chan->dir == EDMA_DIR_WRITE) { - chunk->ll_region.paddr = dw->ll_region_wr[chan->id].paddr; - chunk->ll_region.vaddr = dw->ll_region_wr[chan->id].vaddr; + chunk->ll_region.paddr = chip->ll_region_wr[chan->id].paddr; + chunk->ll_region.vaddr = chip->ll_region_wr[chan->id].vaddr; } else { - chunk->ll_region.paddr = dw->ll_region_rd[chan->id].paddr; - chunk->ll_region.vaddr = dw->ll_region_rd[chan->id].vaddr; + chunk->ll_region.paddr = chip->ll_region_rd[chan->id].paddr; + chunk->ll_region.vaddr = chip->ll_region_rd[chan->id].vaddr; } if (desc->chunk) { @@ -664,7 +664,7 @@ static int dw_edma_alloc_chan_resources(struct dma_chan *dchan) if (chan->status != EDMA_ST_IDLE) return -EBUSY; - pm_runtime_get(chan->chip->dev); + pm_runtime_get(chan->dw->chip->dev); return 0; } @@ -686,7 +686,7 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan) cpu_relax(); } - pm_runtime_put(chan->chip->dev); + pm_runtime_put(chan->dw->chip->dev); } static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, @@ -718,7 +718,7 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, } INIT_LIST_HEAD(&dma->channels); - for (j = 0; (alloc || dw->nr_irqs == 1) && j < cnt; j++, i++) { + for (j = 0; (alloc || chip->nr_irqs == 1) && j < cnt; j++, i++) { chan = &dw->chan[i]; dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL); @@ -727,7 +727,7 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, chan->vc.chan.private = dt_region; - chan->chip = chip; + chan->dw = dw; chan->id = j; chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ; chan->configured = false; @@ -735,15 +735,15 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, chan->status = EDMA_ST_IDLE; if (write) - chan->ll_max = (dw->ll_region_wr[j].sz / EDMA_LL_SZ); + chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ); else - chan->ll_max = (dw->ll_region_rd[j].sz / EDMA_LL_SZ); + chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ); chan->ll_max -= 1; dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n", write ? 
"write" : "read", j, chan->ll_max); - if (dw->nr_irqs == 1) + if (chip->nr_irqs == 1) pos = 0; else pos = off_alloc + (j % alloc); @@ -767,13 +767,13 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write, vchan_init(&chan->vc, dma); if (write) { - dt_region->paddr = dw->dt_region_wr[j].paddr; - dt_region->vaddr = dw->dt_region_wr[j].vaddr; - dt_region->sz = dw->dt_region_wr[j].sz; + dt_region->paddr = chip->dt_region_wr[j].paddr; + dt_region->vaddr = chip->dt_region_wr[j].vaddr; + dt_region->sz = chip->dt_region_wr[j].sz; } else { - dt_region->paddr = dw->dt_region_rd[j].paddr; - dt_region->vaddr = dw->dt_region_rd[j].vaddr; - dt_region->sz = dw->dt_region_rd[j].sz; + dt_region->paddr = chip->dt_region_rd[j].paddr; + dt_region->vaddr = chip->dt_region_rd[j].vaddr; + dt_region->sz = chip->dt_region_rd[j].sz; } dw_edma_v0_core_device_config(chan); @@ -840,16 +840,16 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip, ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt; - if (dw->nr_irqs < 1) + if (chip->nr_irqs < 1) return -EINVAL; - if (dw->nr_irqs == 1) { + if (chip->nr_irqs == 1) { /* Common IRQ shared among all channels */ - irq = dw->ops->irq_vector(dev, 0); + irq = chip->ops->irq_vector(dev, 0); err = request_irq(irq, dw_edma_interrupt_common, IRQF_SHARED, dw->name, &dw->irq[0]); if (err) { - dw->nr_irqs = 0; + chip->nr_irqs = 0; return err; } @@ -857,7 +857,7 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip, get_cached_msi_msg(irq, &dw->irq[0].msi); } else { /* Distribute IRQs equally among all channels */ - int tmp = dw->nr_irqs; + int tmp = chip->nr_irqs; while (tmp && (*wr_alloc + *rd_alloc) < ch_cnt) { dw_edma_dec_irq_alloc(&tmp, wr_alloc, dw->wr_ch_cnt); @@ -868,7 +868,7 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip, dw_edma_add_irq_mask(&rd_mask, *rd_alloc, dw->rd_ch_cnt); for (i = 0; i < (*wr_alloc + *rd_alloc); i++) { - irq = dw->ops->irq_vector(dev, i); + irq = chip->ops->irq_vector(dev, i); err = request_irq(irq, i < *wr_alloc ? 
dw_edma_interrupt_write : @@ -876,7 +876,7 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip, IRQF_SHARED, dw->name, &dw->irq[i]); if (err) { - dw->nr_irqs = i; + chip->nr_irqs = i; return err; } @@ -884,7 +884,7 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip, get_cached_msi_msg(irq, &dw->irq[i].msi); } - dw->nr_irqs = i; + chip->nr_irqs = i; } return err; @@ -905,17 +905,24 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (!dev) return -EINVAL; - dw = chip->dw; - if (!dw || !dw->irq || !dw->ops || !dw->ops->irq_vector) + dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL); + if (!dw) + return -ENOMEM; + + chip->dw = dw; + dw->chip = chip; + + if (!chip->nr_irqs || !chip->ops) return -EINVAL; raw_spin_lock_init(&dw->lock); - dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt, + + dw->wr_ch_cnt = min_t(u16, chip->wr_ch_cnt, dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE)); dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt, EDMA_MAX_WR_CH); - dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt, + dw->rd_ch_cnt = min_t(u16, chip->rd_ch_cnt, dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ)); dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt, EDMA_MAX_RD_CH); @@ -936,6 +943,10 @@ int dw_edma_probe(struct dw_edma_chip *chip) /* Disable eDMA, only to establish the ideal initial conditions */ dw_edma_v0_core_off(dw); + dw->irq = devm_kcalloc(dev, chip->nr_irqs, sizeof(*dw->irq), GFP_KERNEL); + if (!dw->irq) + return -ENOMEM; + /* Request IRQs */ err = dw_edma_irq_request(chip, &wr_alloc, &rd_alloc); if (err) @@ -960,10 +971,10 @@ int dw_edma_probe(struct dw_edma_chip *chip) return 0; err_irq_free: - for (i = (dw->nr_irqs - 1); i >= 0; i--) - free_irq(dw->ops->irq_vector(dev, i), &dw->irq[i]); + for (i = (chip->nr_irqs - 1); i >= 0; i--) + free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]); - dw->nr_irqs = 0; + chip->nr_irqs = 0; return err; } @@ -980,8 +991,8 @@ int dw_edma_remove(struct dw_edma_chip *chip) dw_edma_v0_core_off(dw); /* Free irqs */ - for (i = (dw->nr_irqs - 1); i >= 0; i--) - free_irq(dw->ops->irq_vector(dev, i), &dw->irq[i]); + for (i = (chip->nr_irqs - 1); i >= 0; i--) + free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]); /* Power management */ pm_runtime_disable(dev); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 60316d408c3e0..e254c2fc3d9cf 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -15,20 +15,12 @@ #include "../virt-dma.h" #define EDMA_LL_SZ 24 -#define EDMA_MAX_WR_CH 8 -#define EDMA_MAX_RD_CH 8 enum dw_edma_dir { EDMA_DIR_WRITE = 0, EDMA_DIR_READ }; -enum dw_edma_map_format { - EDMA_MF_EDMA_LEGACY = 0x0, - EDMA_MF_EDMA_UNROLL = 0x1, - EDMA_MF_HDMA_COMPAT = 0x5 -}; - enum dw_edma_request { EDMA_REQ_NONE = 0, EDMA_REQ_STOP, @@ -57,12 +49,6 @@ struct dw_edma_burst { u32 sz; }; -struct dw_edma_region { - phys_addr_t paddr; - void __iomem *vaddr; - size_t sz; -}; - struct dw_edma_chunk { struct list_head list; struct dw_edma_chan *chan; @@ -87,7 +73,7 @@ struct dw_edma_desc { struct dw_edma_chan { struct virt_dma_chan vc; - struct dw_edma_chip *chip; + struct dw_edma *dw; int id; enum dw_edma_dir dir; @@ -109,10 +95,6 @@ struct dw_edma_irq { struct dw_edma *dw; }; -struct dw_edma_core_ops { - int (*irq_vector)(struct device *dev, unsigned int nr); -}; - struct dw_edma { char name[20]; @@ -122,21 +104,13 @@ struct dw_edma { struct dma_device rd_edma; u16 rd_ch_cnt; - struct dw_edma_region rg_region; /* Registers */ - struct dw_edma_region ll_region_wr[EDMA_MAX_WR_CH]; - struct dw_edma_region 
ll_region_rd[EDMA_MAX_RD_CH]; - struct dw_edma_region dt_region_wr[EDMA_MAX_WR_CH]; - struct dw_edma_region dt_region_rd[EDMA_MAX_RD_CH]; - struct dw_edma_irq *irq; - int nr_irqs; - - enum dw_edma_map_format mf; struct dw_edma_chan *chan; - const struct dw_edma_core_ops *ops; raw_spinlock_t lock; /* Only for legacy */ + + struct dw_edma_chip *chip; #ifdef CONFIG_DEBUG_FS struct dentry *debugfs; #endif /* CONFIG_DEBUG_FS */ diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index 44f6e09bdb531..2c1c5fa4e9f28 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -148,7 +148,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, struct dw_edma_pcie_data vsec_data; struct device *dev = &pdev->dev; struct dw_edma_chip *chip; - struct dw_edma *dw; int err, nr_irqs; int i, mask; @@ -214,10 +213,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, if (!chip) return -ENOMEM; - dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL); - if (!dw) - return -ENOMEM; - /* IRQs allocation */ nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data.irqs, PCI_IRQ_MSI | PCI_IRQ_MSIX); @@ -228,29 +223,23 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, } /* Data structure initialization */ - chip->dw = dw; chip->dev = dev; chip->id = pdev->devfn; - chip->irq = pdev->irq; - dw->mf = vsec_data.mf; - dw->nr_irqs = nr_irqs; - dw->ops = &dw_edma_pcie_core_ops; - dw->wr_ch_cnt = vsec_data.wr_ch_cnt; - dw->rd_ch_cnt = vsec_data.rd_ch_cnt; + chip->mf = vsec_data.mf; + chip->nr_irqs = nr_irqs; + chip->ops = &dw_edma_pcie_core_ops; - dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar]; - if (!dw->rg_region.vaddr) - return -ENOMEM; + chip->wr_ch_cnt = vsec_data.wr_ch_cnt; + chip->rd_ch_cnt = vsec_data.rd_ch_cnt; - dw->rg_region.vaddr += vsec_data.rg.off; - dw->rg_region.paddr = pdev->resource[vsec_data.rg.bar].start; - dw->rg_region.paddr += vsec_data.rg.off; - dw->rg_region.sz = vsec_data.rg.sz; + chip->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar]; + if (!chip->rg_region.vaddr) + return -ENOMEM; - for (i = 0; i < dw->wr_ch_cnt; i++) { - struct dw_edma_region *ll_region = &dw->ll_region_wr[i]; - struct dw_edma_region *dt_region = &dw->dt_region_wr[i]; + for (i = 0; i < chip->wr_ch_cnt; i++) { + struct dw_edma_region *ll_region = &chip->ll_region_wr[i]; + struct dw_edma_region *dt_region = &chip->dt_region_wr[i]; struct dw_edma_block *ll_block = &vsec_data.ll_wr[i]; struct dw_edma_block *dt_block = &vsec_data.dt_wr[i]; @@ -273,9 +262,9 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, dt_region->sz = dt_block->sz; } - for (i = 0; i < dw->rd_ch_cnt; i++) { - struct dw_edma_region *ll_region = &dw->ll_region_rd[i]; - struct dw_edma_region *dt_region = &dw->dt_region_rd[i]; + for (i = 0; i < chip->rd_ch_cnt; i++) { + struct dw_edma_region *ll_region = &chip->ll_region_rd[i]; + struct dw_edma_region *dt_region = &chip->dt_region_rd[i]; struct dw_edma_block *ll_block = &vsec_data.ll_rd[i]; struct dw_edma_block *dt_block = &vsec_data.dt_rd[i]; @@ -299,45 +288,45 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, } /* Debug info */ - if (dw->mf == EDMA_MF_EDMA_LEGACY) - pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", dw->mf); - else if (dw->mf == EDMA_MF_EDMA_UNROLL) - pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", dw->mf); - else if (dw->mf == EDMA_MF_HDMA_COMPAT) - pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", dw->mf); + if (chip->mf == EDMA_MF_EDMA_LEGACY) + pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", 
chip->mf); + else if (chip->mf == EDMA_MF_EDMA_UNROLL) + pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", chip->mf); + else if (chip->mf == EDMA_MF_HDMA_COMPAT) + pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", chip->mf); else - pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", dw->mf); + pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", chip->mf); - pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", + pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p)\n", vsec_data.rg.bar, vsec_data.rg.off, vsec_data.rg.sz, - dw->rg_region.vaddr, &dw->rg_region.paddr); + chip->rg_region.vaddr); - for (i = 0; i < dw->wr_ch_cnt; i++) { + for (i = 0; i < chip->wr_ch_cnt; i++) { pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.ll_wr[i].bar, - vsec_data.ll_wr[i].off, dw->ll_region_wr[i].sz, - dw->ll_region_wr[i].vaddr, &dw->ll_region_wr[i].paddr); + vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz, + chip->ll_region_wr[i].vaddr, &chip->ll_region_wr[i].paddr); pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.dt_wr[i].bar, - vsec_data.dt_wr[i].off, dw->dt_region_wr[i].sz, - dw->dt_region_wr[i].vaddr, &dw->dt_region_wr[i].paddr); + vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz, + chip->dt_region_wr[i].vaddr, &chip->dt_region_wr[i].paddr); } - for (i = 0; i < dw->rd_ch_cnt; i++) { + for (i = 0; i < chip->rd_ch_cnt; i++) { pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.ll_rd[i].bar, - vsec_data.ll_rd[i].off, dw->ll_region_rd[i].sz, - dw->ll_region_rd[i].vaddr, &dw->ll_region_rd[i].paddr); + vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz, + chip->ll_region_rd[i].vaddr, &chip->ll_region_rd[i].paddr); pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.dt_rd[i].bar, - vsec_data.dt_rd[i].off, dw->dt_region_rd[i].sz, - dw->dt_region_rd[i].vaddr, &dw->dt_region_rd[i].paddr); + vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz, + chip->dt_region_rd[i].vaddr, &chip->dt_region_rd[i].paddr); } - pci_dbg(pdev, "Nr. IRQs:\t%u\n", dw->nr_irqs); + pci_dbg(pdev, "Nr. 
IRQs:\t%u\n", chip->nr_irqs); /* Validating if PCI interrupts were enabled */ if (!pci_dev_msi_enabled(pdev)) { @@ -345,10 +334,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, return -EPERM; } - dw->irq = devm_kcalloc(dev, nr_irqs, sizeof(*dw->irq), GFP_KERNEL); - if (!dw->irq) - return -ENOMEM; - /* Starting eDMA driver */ err = dw_edma_probe(chip); if (err) { diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 329fc2e57b703..e507e076fad16 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -25,7 +25,7 @@ enum dw_edma_control { static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) { - return dw->rg_region.vaddr; + return dw->chip->rg_region.vaddr; } #define SET_32(dw, name, value) \ @@ -96,7 +96,7 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) static inline struct dw_edma_v0_ch_regs __iomem * __dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch) { - if (dw->mf == EDMA_MF_EDMA_LEGACY) + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) return &(__dw_regs(dw)->type.legacy.ch); if (dir == EDMA_DIR_WRITE) @@ -108,7 +108,7 @@ __dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch) static inline void writel_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, u32 value, void __iomem *addr) { - if (dw->mf == EDMA_MF_EDMA_LEGACY) { + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; unsigned long flags; @@ -133,7 +133,7 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, { u32 value; - if (dw->mf == EDMA_MF_EDMA_LEGACY) { + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; unsigned long flags; @@ -169,7 +169,7 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, u64 value, void __iomem *addr) { - if (dw->mf == EDMA_MF_EDMA_LEGACY) { + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; unsigned long flags; @@ -194,7 +194,7 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, { u32 value; - if (dw->mf == EDMA_MF_EDMA_LEGACY) { + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; unsigned long flags; @@ -256,7 +256,7 @@ u16 dw_edma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir) enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan) { - struct dw_edma *dw = chan->chip->dw; + struct dw_edma *dw = chan->dw; u32 tmp; tmp = FIELD_GET(EDMA_V0_CH_STATUS_MASK, @@ -272,7 +272,7 @@ enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan) void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan) { - struct dw_edma *dw = chan->chip->dw; + struct dw_edma *dw = chan->dw; SET_RW_32(dw, chan->dir, int_clear, FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id))); @@ -280,7 +280,7 @@ void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan) void dw_edma_v0_core_clear_abort_int(struct dw_edma_chan *chan) { - struct dw_edma *dw = chan->chip->dw; + struct dw_edma *dw = chan->dw; SET_RW_32(dw, chan->dir, int_clear, FIELD_PREP(EDMA_V0_ABORT_INT_MASK, BIT(chan->id))); @@ -357,7 +357,7 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) { struct dw_edma_chan *chan = chunk->chan; - struct dw_edma *dw = chan->chip->dw; + struct dw_edma *dw = chan->dw; u32 tmp; dw_edma_v0_core_write_chunk(chunk); @@ -365,7 +365,7 @@ void 
dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) if (first) { /* Enable engine */ SET_RW_32(dw, chan->dir, engine_en, BIT(0)); - if (dw->mf == EDMA_MF_HDMA_COMPAT) { + if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { switch (chan->id) { case 0: SET_RW_COMPAT(dw, chan->dir, ch0_pwr_en, @@ -431,7 +431,7 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) int dw_edma_v0_core_device_config(struct dw_edma_chan *chan) { - struct dw_edma *dw = chan->chip->dw; + struct dw_edma *dw = chan->dw; u32 tmp = 0; /* MSI done addr - low, high */ diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 4b3bcffd15ef1..edb7e137cb35a 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -54,7 +54,7 @@ struct debugfs_entries { static int dw_edma_debugfs_u32_get(void *data, u64 *val) { void __iomem *reg = (void __force __iomem *)data; - if (dw->mf == EDMA_MF_EDMA_LEGACY && + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)®s->type.legacy.ch) { void __iomem *ptr = ®s->type.legacy.ch; u32 viewport_sel = 0; @@ -173,7 +173,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); - if (dw->mf == EDMA_MF_HDMA_COMPAT) { + if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, regs_dir); @@ -242,7 +242,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); - if (dw->mf == EDMA_MF_HDMA_COMPAT) { + if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, regs_dir); @@ -288,7 +288,7 @@ void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) if (!dw) return; - regs = dw->rg_region.vaddr; + regs = dw->chip->rg_region.vaddr; if (!regs) return; @@ -296,7 +296,7 @@ void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) if (!dw->debugfs) return; - debugfs_create_u32("mf", 0444, dw->debugfs, &dw->mf); + debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt); diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h index cab6e18773dad..a9bee4aeb2eee 100644 --- a/include/linux/dma/edma.h +++ b/include/linux/dma/edma.h @@ -12,19 +12,63 @@ #include #include +#define EDMA_MAX_WR_CH 8 +#define EDMA_MAX_RD_CH 8 + struct dw_edma; +struct dw_edma_region { + phys_addr_t paddr; + void __iomem *vaddr; + size_t sz; +}; + +struct dw_edma_core_ops { + int (*irq_vector)(struct device *dev, unsigned int nr); +}; + +enum dw_edma_map_format { + EDMA_MF_EDMA_LEGACY = 0x0, + EDMA_MF_EDMA_UNROLL = 0x1, + EDMA_MF_HDMA_COMPAT = 0x5 +}; + /** * struct dw_edma_chip - representation of DesignWare eDMA controller hardware * @dev: struct device of the eDMA controller * @id: instance ID * @irq: irq line + * @nr_irqs: total dma irq number + * @ops DMA channel to IRQ number mapping + * @wr_ch_cnt DMA write channel number + * @rd_ch_cnt DMA read channel number + * @rg_region DMA register region + * @ll_region_wr DMA descriptor link list memory for write channel + * @ll_region_rd DMA descriptor link list memory for read channel + * @mf DMA register map format * @dw: struct dw_edma that is 
filed by dw_edma_probe() */ struct dw_edma_chip { struct device *dev; int id; int irq; + int nr_irqs; + const struct dw_edma_core_ops *ops; + + struct dw_edma_region rg_region; + + u16 wr_ch_cnt; + u16 rd_ch_cnt; + /* link list address */ + struct dw_edma_region ll_region_wr[EDMA_MAX_WR_CH]; + struct dw_edma_region ll_region_rd[EDMA_MAX_RD_CH]; + + /* data region */ + struct dw_edma_region dt_region_wr[EDMA_MAX_WR_CH]; + struct dw_edma_region dt_region_rd[EDMA_MAX_RD_CH]; + + enum dw_edma_map_format mf; + struct dw_edma *dw; };
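
After the split, the eDMA core reaches the controller-provided data through the back-pointer that dw_edma_probe() sets up; a simplified sketch of the new core-side access pattern, taken from the hunks above:

	/* dw_edma_probe(): the core allocates struct dw_edma and links it to the chip */
	dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
	chip->dw = dw;
	dw->chip = chip;

	/* channel and register helpers then go through the chip instead of dw */
	pm_runtime_get(chan->dw->chip->dev);
	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY)
		return &(__dw_regs(dw)->type.legacy.ch);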