From patchwork Sat May 18 12:42:16 2024
From: Siddharth Vadapalli
Subject: [RFC PATCH net-next 10/28] net: ethernet: ti: cpsw-proxy-client: add helper to init RX DMA Channels
Date: Sat, 18 May 2024 18:12:16 +0530
Message-ID: <20240518124234.2671651-11-s-vadapalli@ti.com>
In-Reply-To: <20240518124234.2671651-1-s-vadapalli@ti.com>
References: <20240518124234.2671651-1-s-vadapalli@ti.com>

Add the "init_rx_chans()" function to initialize the RX DMA Channels.
With the knowledge of the PSI-L Thread ID for the RX Channel, along with
the details of the RX Flow Base and RX Flow Offset, the RX DMA Flow on
the RX Channel can be set up using the DMA APIs.

Signed-off-by: Siddharth Vadapalli
---
 drivers/net/ethernet/ti/cpsw-proxy-client.c | 128 ++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw-proxy-client.c b/drivers/net/ethernet/ti/cpsw-proxy-client.c
index efb44ff04b6a..16e8e585adce 100644
--- a/drivers/net/ethernet/ti/cpsw-proxy-client.c
+++ b/drivers/net/ethernet/ti/cpsw-proxy-client.c
@@ -20,6 +20,8 @@
 #define SW_DATA_SIZE	16
 
 #define MAX_TX_DESC	500
+#define MAX_RX_DESC	500
+#define MAX_RX_FLOWS	1
 
 #define CHAN_NAME_LEN	128
 
@@ -46,10 +48,16 @@ struct cpsw_proxy_req_params {
 
 struct rx_dma_chan {
 	struct virtual_port		*vport;
+	struct device			*dev;
+	struct k3_cppi_desc_pool	*desc_pool;
+	struct k3_udma_glue_rx_channel	*rx_chan;
 	u32				rel_chan_idx;
 	u32				flow_base;
 	u32				flow_offset;
 	u32				thread_id;
+	u32				num_descs;
+	unsigned int			irq;
+	char				rx_chan_name[CHAN_NAME_LEN];
 	bool				in_use;
 };
 
@@ -96,6 +104,7 @@ struct cpsw_proxy_priv {
 	u32				num_mac_ports;
 	u32				num_virt_ports;
 	u32				num_active_tx_chans;
+	u32				num_active_rx_chans;
 };
 
 static int cpsw_proxy_client_cb(struct rpmsg_device *rpdev, void *data,
@@ -720,6 +729,125 @@ static int init_tx_chans(struct cpsw_proxy_priv *proxy_priv)
 	return ret;
 }
 
+static void free_rx_chns(void *data)
+{
+	struct cpsw_proxy_priv *proxy_priv = data;
+	struct rx_dma_chan *rx_chn;
+	struct virtual_port *vport;
+	u32 i, j;
+
+	for (i = 0; i < proxy_priv->num_virt_ports; i++) {
+		vport = &proxy_priv->virt_ports[i];
+
+		for (j = 0; j < vport->num_rx_chan; j++) {
+			rx_chn = &vport->rx_chans[j];
+
+			if (!IS_ERR_OR_NULL(rx_chn->desc_pool))
+				k3_cppi_desc_pool_destroy(rx_chn->desc_pool);
+
+			if (!IS_ERR_OR_NULL(rx_chn->rx_chan))
+				k3_udma_glue_release_rx_chn(rx_chn->rx_chan);
+		}
+	}
+}
+
+static int init_rx_chans(struct cpsw_proxy_priv *proxy_priv)
+{
+	struct k3_udma_glue_rx_channel_cfg rx_cfg = {0};
+	struct device *dev = proxy_priv->dev;
+	u32 hdesc_size, rx_chn_num, i, j;
+	u32 max_desc_num = MAX_RX_DESC;
+	char rx_chn_name[CHAN_NAME_LEN];
+	struct rx_dma_chan *rx_chn;
+	struct virtual_port *vport;
+	struct k3_ring_cfg rxring_cfg = {
+		.elm_size = K3_RINGACC_RING_ELSIZE_8,
+		.mode = K3_RINGACC_RING_MODE_MESSAGE,
+		.flags = 0,
+	};
+	struct k3_ring_cfg fdqring_cfg = {
+		.elm_size = K3_RINGACC_RING_ELSIZE_8,
+		.mode = K3_RINGACC_RING_MODE_MESSAGE,
+		.flags = 0,
+	};
+	struct k3_udma_glue_rx_flow_cfg rx_flow_cfg = {
+		.rx_cfg = rxring_cfg,
+		.rxfdq_cfg = fdqring_cfg,
+		.ring_rxq_id = K3_RINGACC_RING_ID_ANY,
+		.ring_rxfdq0_id = K3_RINGACC_RING_ID_ANY,
+		.src_tag_lo_sel = K3_UDMA_GLUE_SRC_TAG_LO_USE_REMOTE_SRC_TAG,
+	};
+	int ret = 0, ret1;
+
+	hdesc_size = cppi5_hdesc_calc_size(true, PS_DATA_SIZE, SW_DATA_SIZE);
+
+	rx_cfg.swdata_size = SW_DATA_SIZE;
+	rx_cfg.flow_id_num = MAX_RX_FLOWS;
+	rx_cfg.remote = true;
+
+	for (i = 0; i < proxy_priv->num_virt_ports; i++) {
+		vport = &proxy_priv->virt_ports[i];
+
+		for (j = 0; j < vport->num_rx_chan; j++) {
+			rx_chn = &vport->rx_chans[j];
+
+			rx_chn_num = proxy_priv->num_active_rx_chans++;
+			snprintf(rx_chn_name, sizeof(rx_chn_name), "rx%u-virt-port-%u", rx_chn_num,
+				 vport->port_id);
+			strscpy(rx_chn->rx_chan_name, rx_chn_name, sizeof(rx_chn->rx_chan_name));
+
+			rx_cfg.flow_id_base = rx_chn->flow_base + rx_chn->flow_offset;
+
+			/* init all flows */
+			rx_chn->dev = dev;
+			rx_chn->num_descs = max_desc_num;
+			rx_chn->desc_pool = k3_cppi_desc_pool_create_name(dev,
+									  rx_chn->num_descs,
+									  hdesc_size,
+									  rx_chn_name);
+			if (IS_ERR(rx_chn->desc_pool)) {
+				ret = PTR_ERR(rx_chn->desc_pool);
+				dev_err(dev, "Failed to create rx pool %d\n", ret);
+				goto err;
+			}
+
+			rx_chn->rx_chan =
+			k3_udma_glue_request_remote_rx_chn_for_thread_id(dev, &rx_cfg,
+									 proxy_priv->dma_node,
+									 rx_chn->thread_id);
+			if (IS_ERR(rx_chn->rx_chan)) {
+				ret = PTR_ERR(rx_chn->rx_chan);
+				dev_err(dev, "Failed to request rx dma channel %d\n", ret);
+				goto err;
+			}
+
+			rx_flow_cfg.rx_cfg.size = max_desc_num;
+			rx_flow_cfg.rxfdq_cfg.size = max_desc_num;
+			ret = k3_udma_glue_rx_flow_init(rx_chn->rx_chan,
+							0, &rx_flow_cfg);
+			if (ret) {
+				dev_err(dev, "Failed to init rx flow %d\n", ret);
+				goto err;
+			}
+
+			rx_chn->irq = k3_udma_glue_rx_get_irq(rx_chn->rx_chan, 0);
+			if (rx_chn->irq <= 0) {
+				ret = -ENXIO;
+				dev_err(dev, "Failed to get rx dma irq %d\n", rx_chn->irq);
+			}
+		}
+	}
+
+err:
+	ret1 = devm_add_action(dev, free_rx_chns, proxy_priv);
+	if (ret1) {
+		dev_err(dev, "failed to add free_rx_chns action %d", ret1);
+		return ret1;
+	}
+
+	return ret;
+}
+
 static int cpsw_proxy_client_probe(struct rpmsg_device *rpdev)
 {
 	struct cpsw_proxy_priv *proxy_priv;
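
For context, below is a minimal usage sketch (not part of this patch) of how
the new helper could be wired into the probe path alongside the existing
init_tx_chans(). The wrapper name cpsw_proxy_setup_dma() is hypothetical, and
it assumes each channel's thread_id, flow_base and flow_offset have already
been populated from the firmware's response messages earlier in the series.

/*
 * Hypothetical sketch, not part of this patch: init_rx_chans() is expected
 * to run only after the virtual ports and their RX channel parameters
 * (thread_id, flow_base, flow_offset) have been filled in.
 */
static int cpsw_proxy_setup_dma(struct cpsw_proxy_priv *proxy_priv)
{
	int ret;

	/* TX channels were added earlier in this series */
	ret = init_tx_chans(proxy_priv);
	if (ret)
		return ret;

	/* RX flows are then set up on the remote RX channels */
	return init_rx_chans(proxy_priv);
}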