From patchwork Wed Jan 24 12:43:18 2024
X-Patchwork-Submitter: Siddharth Vadapalli
X-Patchwork-Id: 13529189
From: Siddharth Vadapalli
To: ,
CC: , , , , ,
Subject: [PATCH v4 3/4] dmaengine: ti: k3-udma-glue: Add function to request TX chan for thread ID
Date: Wed, 24 Jan 2024 18:13:18 +0530
Message-ID: <20240124124319.820002-4-s-vadapalli@ti.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240124124319.820002-1-s-vadapalli@ti.com>
References: <20240124124319.820002-1-s-vadapalli@ti.com>

The existing function k3_udma_glue_request_tx_chn() supports requesting
a TX DMA channel by its name. Add a new function to request a TX DMA
channel for a given thread ID, named
k3_udma_glue_request_tx_chn_for_thread_id(). Also, export it for use by
drivers which are probed by alternate methods (non device-tree) but
still wish to make use of the existing DMA APIs. Such drivers could be
informed about the thread ID corresponding to the TX DMA channel via
RPMsg, for example.

Since the new function k3_udma_glue_request_tx_chn_for_thread_id()
reuses most of the code in k3_udma_glue_request_tx_chn(), move the
shared code into a new function named
k3_udma_glue_request_tx_chn_common().

Signed-off-by: Siddharth Vadapalli
Acked-by: Peter Ujfalusi
---
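Usage sketch (illustrative only, not part of this patch): a driver
probed without its own device-tree node could request the channel as
below, once it has learned the PSI-L thread ID, e.g. over RPMsg. The
function my_drv_setup_tx() and the cfg values are hypothetical; the
glue-layer calls and the k3_udma_glue_tx_channel_cfg fields are the
existing/new API.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/of.h>
#include <linux/dma/k3-udma-glue.h>
#include <linux/soc/ti/k3-ringacc.h>

/*
 * Hypothetical caller: dev is the requesting device, udmax_np is the
 * DMA controller's device_node, thread_id was received out-of-band
 * (e.g. via RPMsg).
 */
static int my_drv_setup_tx(struct device *dev, struct device_node *udmax_np,
			   u32 thread_id)
{
	struct k3_udma_glue_tx_channel_cfg cfg = { };
	struct k3_udma_glue_tx_channel *tx_chn;

	/* Illustrative ring/swdata configuration */
	cfg.swdata_size = 16;
	cfg.tx_cfg.size = 64;
	cfg.tx_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8;
	cfg.tx_cfg.mode = K3_RINGACC_RING_MODE_RING;
	cfg.txcq_cfg = cfg.tx_cfg;

	/* Request the TX channel by thread ID instead of by name */
	tx_chn = k3_udma_glue_request_tx_chn_for_thread_id(dev, &cfg,
							   udmax_np, thread_id);
	if (IS_ERR(tx_chn))
		return PTR_ERR(tx_chn);

	/* ... use tx_chn; call k3_udma_glue_release_tx_chn() on teardown */
	return 0;
}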
 drivers/dma/ti/k3-udma-glue.c    | 102 +++++++++++++++++++++++--------
 include/linux/dma/k3-udma-glue.h |   5 ++
 2 files changed, 81 insertions(+), 26 deletions(-)

diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index eff1ae3d3efe..a475bbea35ee 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -274,29 +274,13 @@ static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
 	return tisci_rm->tisci_udmap_ops->tx_ch_cfg(tisci_rm->tisci, &req);
 }
 
-struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
-		const char *name, struct k3_udma_glue_tx_channel_cfg *cfg)
+static int
+k3_udma_glue_request_tx_chn_common(struct device *dev,
+				   struct k3_udma_glue_tx_channel *tx_chn,
+				   struct k3_udma_glue_tx_channel_cfg *cfg)
 {
-	struct k3_udma_glue_tx_channel *tx_chn;
 	int ret;
 
-	tx_chn = devm_kzalloc(dev, sizeof(*tx_chn), GFP_KERNEL);
-	if (!tx_chn)
-		return ERR_PTR(-ENOMEM);
-
-	tx_chn->common.dev = dev;
-	tx_chn->common.swdata_size = cfg->swdata_size;
-	tx_chn->tx_pause_on_err = cfg->tx_pause_on_err;
-	tx_chn->tx_filt_einfo = cfg->tx_filt_einfo;
-	tx_chn->tx_filt_pswords = cfg->tx_filt_pswords;
-	tx_chn->tx_supr_tdpkt = cfg->tx_supr_tdpkt;
-
-	/* parse of udmap channel */
-	ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
-					&tx_chn->common, true);
-	if (ret)
-		goto err;
-
 	tx_chn->common.hdesc_size = cppi5_hdesc_calc_size(tx_chn->common.epib,
 						tx_chn->common.psdata_size,
 						tx_chn->common.swdata_size);
@@ -312,7 +296,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 	if (IS_ERR(tx_chn->udma_tchanx)) {
 		ret = PTR_ERR(tx_chn->udma_tchanx);
 		dev_err(dev, "UDMAX tchanx get err %d\n", ret);
-		goto err;
+		return ret;
 	}
 
 	tx_chn->udma_tchan_id = xudma_tchan_get_id(tx_chn->udma_tchanx);
@@ -325,7 +309,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 		dev_err(dev, "Channel Device registration failed %d\n", ret);
 		put_device(&tx_chn->common.chan_dev);
 		tx_chn->common.chan_dev.parent = NULL;
-		goto err;
+		return ret;
 	}
 
 	if (xudma_is_pktdma(tx_chn->common.udmax)) {
@@ -349,7 +333,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 					     &tx_chn->ringtxcq);
 	if (ret) {
 		dev_err(dev, "Failed to get TX/TXCQ rings %d\n", ret);
-		goto err;
+		return ret;
 	}
 
 	/* Set the dma_dev for the rings to be configured */
@@ -365,13 +349,13 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 	ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringtx %d\n", ret);
-		goto err;
+		return ret;
 	}
 
 	ret = k3_ringacc_ring_cfg(tx_chn->ringtxcq, &cfg->txcq_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringtx %d\n", ret);
-		goto err;
+		return ret;
 	}
 
 	/* request and cfg psi-l */
@@ -382,11 +366,42 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 	ret = k3_udma_glue_cfg_tx_chn(tx_chn);
 	if (ret) {
 		dev_err(dev, "Failed to cfg tchan %d\n", ret);
-		goto err;
+		return ret;
 	}
 
 	k3_udma_glue_dump_tx_chn(tx_chn);
 
+	return 0;
+}
+
+struct k3_udma_glue_tx_channel *
+k3_udma_glue_request_tx_chn(struct device *dev, const char *name,
+			    struct k3_udma_glue_tx_channel_cfg *cfg)
+{
+	struct k3_udma_glue_tx_channel *tx_chn;
+	int ret;
+
+	tx_chn = devm_kzalloc(dev, sizeof(*tx_chn), GFP_KERNEL);
+	if (!tx_chn)
+		return ERR_PTR(-ENOMEM);
+
+	tx_chn->common.dev = dev;
+	tx_chn->common.swdata_size = cfg->swdata_size;
+	tx_chn->tx_pause_on_err = cfg->tx_pause_on_err;
+	tx_chn->tx_filt_einfo = cfg->tx_filt_einfo;
+	tx_chn->tx_filt_pswords = cfg->tx_filt_pswords;
+	tx_chn->tx_supr_tdpkt = cfg->tx_supr_tdpkt;
+
+	/* parse of udmap channel */
+	ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
+					&tx_chn->common, true);
+	if (ret)
+		goto err;
+
+	ret = k3_udma_glue_request_tx_chn_common(dev, tx_chn, cfg);
+	if (ret)
+		goto err;
+
 	return tx_chn;
 
 err:
@@ -395,6 +410,41 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_request_tx_chn);
 
+struct k3_udma_glue_tx_channel *
+k3_udma_glue_request_tx_chn_for_thread_id(struct device *dev,
+					  struct k3_udma_glue_tx_channel_cfg *cfg,
+					  struct device_node *udmax_np, u32 thread_id)
+{
+	struct k3_udma_glue_tx_channel *tx_chn;
+	int ret;
+
+	tx_chn = devm_kzalloc(dev, sizeof(*tx_chn), GFP_KERNEL);
+	if (!tx_chn)
+		return ERR_PTR(-ENOMEM);
+
+	tx_chn->common.dev = dev;
+	tx_chn->common.swdata_size = cfg->swdata_size;
+	tx_chn->tx_pause_on_err = cfg->tx_pause_on_err;
+	tx_chn->tx_filt_einfo = cfg->tx_filt_einfo;
+	tx_chn->tx_filt_pswords = cfg->tx_filt_pswords;
+	tx_chn->tx_supr_tdpkt = cfg->tx_supr_tdpkt;
+
+	ret = of_k3_udma_glue_parse_chn_by_id(udmax_np, &tx_chn->common, true, thread_id);
+	if (ret)
+		goto err;
+
+	ret = k3_udma_glue_request_tx_chn_common(dev, tx_chn, cfg);
+	if (ret)
+		goto err;
+
+	return tx_chn;
+
+err:
+	k3_udma_glue_release_tx_chn(tx_chn);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_request_tx_chn_for_thread_id);
+
 void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
 {
 	if (tx_chn->psil_paired) {
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index e443be4d3b4b..c81386ceb1c1 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -26,6 +26,11 @@ struct k3_udma_glue_tx_channel;
 struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 		const char *name, struct k3_udma_glue_tx_channel_cfg *cfg);
 
+struct k3_udma_glue_tx_channel *
+k3_udma_glue_request_tx_chn_for_thread_id(struct device *dev,
+					  struct k3_udma_glue_tx_channel_cfg *cfg,
+					  struct device_node *udmax_np, u32 thread_id);
+
 void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn);
 int k3_udma_glue_push_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
 			     struct cppi5_host_desc_t *desc_tx,