From patchwork Fri Jan 17 14:06:33 2025
X-Patchwork-Id: 13943371
From: Roger Quadros
Date: Fri, 17 Jan 2025 16:06:33 +0200
Subject: [PATCH net-next v2 1/3] net: ethernet: ti: am65-cpsw: ensure proper channel cleanup in error path
Message-Id: <20250117-am65-cpsw-streamline-v2-1-91a29c97e569@kernel.org>
References: <20250117-am65-cpsw-streamline-v2-0-91a29c97e569@kernel.org>
In-Reply-To: <20250117-am65-cpsw-streamline-v2-0-91a29c97e569@kernel.org>
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Russell King , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: Siddharth Vadapalli , srk@ti.com, danishanwar@ti.com, "Russell King (Oracle)" , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros X-Mailer: b4 0.14.1 X-Developer-Signature: v=1; a=openpgp-sha256; l=7071; i=rogerq@kernel.org; h=from:subject:message-id; bh=tTyOuIX6wz/sOdwJQexkXBdPNLWWPCx3wYLRGpjeAwc=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBnimQAz6aKBdB9EoWTO0GfJHJMp1Ru+cvzG2dQU cCoZr4mrFmJAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCZ4pkAAAKCRDSWmvTvnYw kynpD/91z8xDFBCNWtnmW7Qbemstwey/6hMWzsFMjGokU0dVKyDj7B0O+VYE/qiu9Tg43QKyuws LvdztPzxSn3VG3jJ+p8Gwthxuuob4a0tkOZO25plbsufhLhd7jC97duQzcyLtmi8wM72Luu+a7H P+nklFAaSPG+U3AEQj6kJJg+D71fEysglWlZXxW5PiBpQjGSK8qAcx7IO2ynFTsyzH1V4c2lZAK zrqnTe+41QQ1QZ4udY6pXIET4RqKhh7tCphO/rZRQ2s2q8Al2To4KAaLLNWCmC/gUy5k9i2kWB4 E4ngN4SybVfG0RCzUKyx0lSHdNc3OIOlTuLqKrVf+LpP07h8xbf8sykRcFy3Sr6Zgr2JYiqFJeC MUenyACBQSn7sjYYxjSVKJ+tL+gTZuE7JiXa/lyFj5QxayWxOV2arIwjiH75Yq0y9aztbOBe3oG y8xB0e7SWGoWuRbqMP0QWwxhq4/PI3860dKo2NhMWK/ds5oBBov7P6AKpcmX4cfumGu+B4ARP2D 0KOn3kkQ9JSKwORlRoSR4e/Nk8ri5NbhYcHHASnrnDcC/wIaUDOJp5HD2HcpWVgS9DztR3GS30w fS7TTQ1v9aO7FXhI2/Org3L9vcB3WyAgA67TEJyv3niAA9Q1gkoI1oWTumMBWmRODXbrPrrHaey 6KwzNtcNzM9sjLg== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 X-Patchwork-Delegate: kuba@kernel.org We are missing netif_napi_del() and am65_cpsw_nuss_free_tx/rx_chns() in error path when am65_cpsw_nuss_init_tx/rx_chns() is used anywhere other than at probe(). i.e. am65_cpsw_nuss_update_tx_rx_chns and am65_cpsw_nuss_resume() As reported, in am65_cpsw_nuss_update_tx_rx_chns(), if am65_cpsw_nuss_init_tx_chns() partially fails then devm_add_action(dev, am65_cpsw_nuss_free_tx_chns,..) is added but the cleanup via am65_cpsw_nuss_free_tx_chns() will not run. Same issue exists for am65_cpsw_nuss_init_tx/rx_chns() failures in am65_cpsw_nuss_resume() as well. This would otherwise require more instances of devm_add/remove_action and is clearly more of a distraction than any benefit. So, drop devm_add/remove_action for am65_cpsw_nuss_free_tx/rx_chns() and call am65_cpsw_nuss_free_tx/rx_chns() and netif_napi_del() where required. 
Reported-by: Siddharth Vadapalli
Closes: https://lore.kernel.org/all/m4rhkzcr7dlylxr54udyt6lal5s2q4krrvmyay6gzgzhcu4q2c@r34snfumzqxy/
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 67 +++++++++++++++++++++-----------
 1 file changed, 44 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index dcb6662b473d..f5ae63999516 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -2242,8 +2242,6 @@ static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
 	struct device *dev = common->dev;
 	int i;
 
-	devm_remove_action(dev, am65_cpsw_nuss_free_tx_chns, common);
-
 	common->tx_ch_rate_msk = 0;
 	for (i = 0; i < common->tx_ch_num; i++) {
 		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
@@ -2265,8 +2263,6 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
 	for (i = 0; i < common->tx_ch_num; i++) {
 		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
 
-		netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
-				  am65_cpsw_nuss_tx_poll);
 		hrtimer_init(&tx_chn->tx_hrtimer, CLOCK_MONOTONIC,
 			     HRTIMER_MODE_REL_PINNED);
 		tx_chn->tx_hrtimer.function = &am65_cpsw_nuss_tx_timer_callback;
@@ -2279,9 +2275,21 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
 				tx_chn->id, tx_chn->irq, ret);
 			goto err;
 		}
+
+		netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
+				  am65_cpsw_nuss_tx_poll);
 	}
 
+	return 0;
+
 err:
+	for (--i; i >= 0; i--) {
+		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+
+		netif_napi_del(&tx_chn->napi_tx);
+		devm_free_irq(dev, tx_chn->irq, tx_chn);
+	}
+
 	return ret;
 }
@@ -2362,12 +2370,10 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
 		goto err;
 	}
 
+	return 0;
+
 err:
-	i = devm_add_action(dev, am65_cpsw_nuss_free_tx_chns, common);
-	if (i) {
-		dev_err(dev, "Failed to add free_tx_chns action %d\n", i);
-		return i;
-	}
+	am65_cpsw_nuss_free_tx_chns(common);
 
 	return ret;
 }
@@ -2395,7 +2401,6 @@ static void am65_cpsw_nuss_remove_rx_chns(struct am65_cpsw_common *common)
 	rx_chn = &common->rx_chns;
 	flows = rx_chn->flows;
-	devm_remove_action(dev, am65_cpsw_nuss_free_rx_chns, common);
 
 	for (i = 0; i < common->rx_ch_num_flows; i++) {
 		if (!(flows[i].irq < 0))
@@ -2494,7 +2499,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
 						i, &rx_flow_cfg);
 		if (ret) {
 			dev_err(dev, "Failed to init rx flow%d %d\n", i, ret);
-			goto err;
+			goto err_flow;
 		}
 		if (!i)
 			fdqring_id =
@@ -2506,14 +2511,12 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
 			dev_err(dev, "Failed to get rx dma irq %d\n",
 				flow->irq);
 			ret = flow->irq;
-			goto err;
+			goto err_flow;
 		}
 
 		snprintf(flow->name, sizeof(flow->name), "%s-rx%d",
 			 dev_name(dev), i);
-		netif_napi_add(common->dma_ndev, &flow->napi_rx,
-			       am65_cpsw_nuss_rx_poll);
 		hrtimer_init(&flow->rx_hrtimer, CLOCK_MONOTONIC,
 			     HRTIMER_MODE_REL_PINNED);
 		flow->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback;
@@ -2526,20 +2529,28 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
 			dev_err(dev, "failure requesting rx %d irq %u, %d\n",
 				i, flow->irq, ret);
 			flow->irq = -EINVAL;
-			goto err;
+			goto err_flow;
 		}
+
+		netif_napi_add(common->dma_ndev, &flow->napi_rx,
+			       am65_cpsw_nuss_rx_poll);
 	}
 
 	/* setup classifier to route priorities to flows */
 	cpsw_ale_classifier_setup_default(common->ale, common->rx_ch_num_flows);
 
-err:
-	i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common);
-	if (i) {
-		dev_err(dev, "Failed to add free_rx_chns action %d\n", i);
-		return i;
+	return 0;
+
+err_flow:
+	for (--i; i >= 0; i--) {
+		flow = &rx_chn->flows[i];
+		netif_napi_del(&flow->napi_rx);
+		devm_free_irq(dev, flow->irq, flow);
 	}
 
+err:
+	am65_cpsw_nuss_free_rx_chns(common);
+
 	return ret;
 }
@@ -3349,7 +3360,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
 		return ret;
 
 	ret = am65_cpsw_nuss_init_rx_chns(common);
 	if (ret)
-		return ret;
+		goto err_remove_tx;
 
 	/* The DMA Channels are not guaranteed to be in a clean state.
 	 * Reset and disable them to ensure that they are back to the
@@ -3370,7 +3381,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
 
 	ret = am65_cpsw_nuss_register_devlink(common);
 	if (ret)
-		return ret;
+		goto err_remove_rx;
 
 	for (i = 0; i < common->port_num; i++) {
 		port = &common->ports[i];
@@ -3401,6 +3412,10 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
 err_cleanup_ndev:
 	am65_cpsw_nuss_cleanup_ndev(common);
 	am65_cpsw_unregister_devlink(common);
+err_remove_rx:
+	am65_cpsw_nuss_remove_rx_chns(common);
+err_remove_tx:
+	am65_cpsw_nuss_remove_tx_chns(common);
 
 	return ret;
 }
@@ -3420,6 +3435,8 @@ int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
 		return ret;
 
 	ret = am65_cpsw_nuss_init_rx_chns(common);
+	if (ret)
+		am65_cpsw_nuss_remove_tx_chns(common);
 
 	return ret;
 }
@@ -3678,6 +3695,8 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
 	 */
 	am65_cpsw_nuss_cleanup_ndev(common);
 	am65_cpsw_unregister_devlink(common);
+	am65_cpsw_nuss_remove_rx_chns(common);
+	am65_cpsw_nuss_remove_tx_chns(common);
 	am65_cpsw_nuss_phylink_cleanup(common);
 	am65_cpts_release(common->cpts);
 	am65_cpsw_disable_serdes_phy(common);
@@ -3739,8 +3758,10 @@ static int am65_cpsw_nuss_resume(struct device *dev)
 	if (ret)
 		return ret;
 	ret = am65_cpsw_nuss_init_rx_chns(common);
-	if (ret)
+	if (ret) {
+		am65_cpsw_nuss_remove_tx_chns(common);
 		return ret;
+	}
 
 	/* If RX IRQ was disabled before suspend, keep it disabled */
 	for (i = 0; i < common->rx_ch_num_flows; i++) {

From patchwork Fri Jan 17 14:06:34 2025
X-Patchwork-Id: 13943372
From: Roger Quadros
Date: Fri, 17 Jan 2025 16:06:34 +0200
Subject: [PATCH net-next v2 2/3] net: ethernet: ti: am65-cpsw: streamline RX queue creation and cleanup
Message-Id: <20250117-am65-cpsw-streamline-v2-2-91a29c97e569@kernel.org>
References: <20250117-am65-cpsw-streamline-v2-0-91a29c97e569@kernel.org>
In-Reply-To: <20250117-am65-cpsw-streamline-v2-0-91a29c97e569@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend
Cc: Siddharth Vadapalli, srk@ti.com, danishanwar@ti.com,
    "Russell King (Oracle)", netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros

Introduce am65_cpsw_create_rxqs() and am65_cpsw_destroy_rxqs() and use
them. Creating one RX queue (page pool, XDP rxq registration,
descriptor fill, NAPI enable) and destroying it now each live in a
single per-queue helper, so the error path unwinds exactly the queues
that were set up.
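The shape this converges on is a create-all/unwind-in-reverse loop. A
hedged sketch of that shape (struct ctx and create_one()/destroy_one()
are hypothetical stand-ins for the per-queue helpers, not driver code):

struct ctx { int nr_queues; };

static int create_one(struct ctx *c, int id);	/* fully init queue 'id' */
static void destroy_one(struct ctx *c, int id);	/* fully tear it down  */

static int create_all(struct ctx *c)
{
	int id, ret;

	for (id = 0; id < c->nr_queues; id++) {
		ret = create_one(c, id);
		if (ret)
			goto err;
	}

	return 0;

err:
	/* unwind only the queues already created, newest first */
	for (--id; id >= 0; id--)
		destroy_one(c, id);

	return ret;
}

Because create_one() either fully succeeds or cleans up its own partial
state (am65_cpsw_create_rxq() calls am65_cpsw_destroy_rxq() on its own
failure), the caller's unwind loop never sees a half-built queue.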
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 243 +++++++++++++++----------------
 1 file changed, 119 insertions(+), 124 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index f5ae63999516..5c87002eeee9 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -497,35 +497,61 @@ static void am65_cpsw_init_host_port_switch(struct am65_cpsw_common *common);
 static void am65_cpsw_init_host_port_emac(struct am65_cpsw_common *common);
 static void am65_cpsw_init_port_switch_ale(struct am65_cpsw_port *port);
 static void am65_cpsw_init_port_emac_ale(struct am65_cpsw_port *port);
+static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow,
+				      struct page *page,
+				      bool allow_direct);
+static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma);
 
-static void am65_cpsw_destroy_xdp_rxqs(struct am65_cpsw_common *common)
+static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_rx_flow *flow;
 	struct xdp_rxq_info *rxq;
-	int id, port;
+	int port;
 
-	for (id = 0; id < common->rx_ch_num_flows; id++) {
-		flow = &rx_chn->flows[id];
+	flow = &rx_chn->flows[id];
+	napi_disable(&flow->napi_rx);
+	hrtimer_cancel(&flow->rx_hrtimer);
+	k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, id, rx_chn,
+				  am65_cpsw_nuss_rx_cleanup, !!id);
 
-		for (port = 0; port < common->port_num; port++) {
-			if (!common->ports[port].ndev)
-				continue;
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
 
-			rxq = &common->ports[port].xdp_rxq[id];
+		rxq = &common->ports[port].xdp_rxq[id];
 
-			if (xdp_rxq_info_is_reg(rxq))
-				xdp_rxq_info_unreg(rxq);
-		}
+		if (xdp_rxq_info_is_reg(rxq))
+			xdp_rxq_info_unreg(rxq);
+	}
 
-		if (flow->page_pool) {
-			page_pool_destroy(flow->page_pool);
-			flow->page_pool = NULL;
-		}
+	if (flow->page_pool) {
+		page_pool_destroy(flow->page_pool);
+		flow->page_pool = NULL;
 	}
 }
 
-static int am65_cpsw_create_xdp_rxqs(struct am65_cpsw_common *common)
+static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	int id;
+
+	reinit_completion(&common->tdown_complete);
+	k3_udma_glue_tdown_rx_chn(rx_chn->rx_chn, true);
+
+	if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
+		id = wait_for_completion_timeout(&common->tdown_complete,
+						 msecs_to_jiffies(1000));
+		if (!id)
+			dev_err(common->dev, "rx teardown timeout\n");
+	}
+
+	for (id = common->rx_ch_num_flows - 1; id >= 0; id--)
+		am65_cpsw_destroy_rxq(common, id);
+
+	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
+}
+
+static int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct page_pool_params pp_params = {
@@ -540,45 +566,92 @@ static int am65_cpsw_create_xdp_rxqs(struct am65_cpsw_common *common)
 	struct am65_cpsw_rx_flow *flow;
 	struct xdp_rxq_info *rxq;
 	struct page_pool *pool;
-	int id, port, ret;
+	struct page *page;
+	int port, ret, i;
 
-	for (id = 0; id < common->rx_ch_num_flows; id++) {
-		flow = &rx_chn->flows[id];
-		pp_params.napi = &flow->napi_rx;
-		pool = page_pool_create(&pp_params);
-		if (IS_ERR(pool)) {
-			ret = PTR_ERR(pool);
+	flow = &rx_chn->flows[id];
+	pp_params.napi = &flow->napi_rx;
+	pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool)) {
+		ret = PTR_ERR(pool);
+		return ret;
+	}
+
+	flow->page_pool = pool;
+
+	/* using same page pool is allowed as no running rx handlers
+	 * simultaneously for both ndevs
+	 */
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
+
+		rxq = &common->ports[port].xdp_rxq[id];
+		ret = xdp_rxq_info_reg(rxq, common->ports[port].ndev,
+				       id, flow->napi_rx.napi_id);
+		if (ret)
+			goto err;
+
+		ret = xdp_rxq_info_reg_mem_model(rxq,
+						 MEM_TYPE_PAGE_POOL,
+						 pool);
+		if (ret)
+			goto err;
+	}
+
+	for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
+		page = page_pool_dev_alloc_pages(flow->page_pool);
+		if (!page) {
+			dev_err(common->dev, "cannot allocate page in flow %d\n",
+				id);
+			ret = -ENOMEM;
 			goto err;
 		}
-		flow->page_pool = pool;
+		ret = am65_cpsw_nuss_rx_push(common, page, id);
+		if (ret < 0) {
+			dev_err(common->dev,
+				"cannot submit page to rx channel flow %d, error %d\n",
+				id, ret);
+			am65_cpsw_put_page(flow, page, false);
+			goto err;
+		}
+	}
 
-		/* using same page pool is allowed as no running rx handlers
-		 * simultaneously for both ndevs
-		 */
-		for (port = 0; port < common->port_num; port++) {
-			if (!common->ports[port].ndev)
-				continue;
+	napi_enable(&flow->napi_rx);
+	return 0;
 
-			rxq = &common->ports[port].xdp_rxq[id];
+err:
+	am65_cpsw_destroy_rxq(common, id);
+	return ret;
+}
 
-			ret = xdp_rxq_info_reg(rxq, common->ports[port].ndev,
-					       id, flow->napi_rx.napi_id);
-			if (ret)
-				goto err;
+static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common)
+{
+	int id, ret;
 
-			ret = xdp_rxq_info_reg_mem_model(rxq,
-							 MEM_TYPE_PAGE_POOL,
-							 pool);
-			if (ret)
-				goto err;
+	for (id = 0; id < common->rx_ch_num_flows; id++) {
+		ret = am65_cpsw_create_rxq(common, id);
+		if (ret) {
+			dev_err(common->dev, "couldn't create rxq %d: %d\n",
+				id, ret);
+			goto err;
 		}
 	}
 
+	ret = k3_udma_glue_enable_rx_chn(common->rx_chns.rx_chn);
+	if (ret) {
+		dev_err(common->dev, "couldn't enable rx chn: %d\n", ret);
+		goto err;
+	}
+
 	return 0;
 
 err:
-	am65_cpsw_destroy_xdp_rxqs(common);
+	for (--id; id >= 0; id--)
+		am65_cpsw_destroy_rxq(common, id);
+
 	return ret;
 }
@@ -642,7 +715,6 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 	k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
 	dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
-
 	am65_cpsw_put_page(&rx_chn->flows[flow_id], page, false);
 }
@@ -717,12 +789,9 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
 static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 {
 	struct am65_cpsw_host *host_p = am65_common_get_host(common);
-	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
-	int port_idx, i, ret, tx, flow_idx;
-	struct am65_cpsw_rx_flow *flow;
+	int port_idx, ret, tx;
 	u32 val, port_mask;
-	struct page *page;
 
 	if (common->usage_count)
 		return 0;
@@ -782,47 +851,9 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 	am65_cpsw_qos_tx_p0_rate_init(common);
 
-	ret = am65_cpsw_create_xdp_rxqs(common);
-	if (ret) {
-		dev_err(common->dev, "Failed to create XDP rx queues\n");
+	ret = am65_cpsw_create_rxqs(common);
+	if (ret)
 		return ret;
-	}
-
-	for (flow_idx = 0; flow_idx < common->rx_ch_num_flows; flow_idx++) {
-		flow = &rx_chn->flows[flow_idx];
-		for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
-			page = page_pool_dev_alloc_pages(flow->page_pool);
-			if (!page) {
-				dev_err(common->dev, "cannot allocate page in flow %d\n",
-					flow_idx);
-				ret = -ENOMEM;
-				goto fail_rx;
-			}
-
-			ret = am65_cpsw_nuss_rx_push(common, page, flow_idx);
-			if (ret < 0) {
-				dev_err(common->dev,
-					"cannot submit page to rx channel flow %d, error %d\n",
-					flow_idx, ret);
-				am65_cpsw_put_page(flow, page, false);
-				goto fail_rx;
-			}
-		}
-	}
-
-	ret = k3_udma_glue_enable_rx_chn(rx_chn->rx_chn);
-	if (ret) {
-		dev_err(common->dev, "couldn't enable rx chn: %d\n", ret);
-		goto fail_rx;
-	}
-
-	for (i = 0; i < common->rx_ch_num_flows; i++) {
-		napi_enable(&rx_chn->flows[i].napi_rx);
-		if (rx_chn->flows[i].irq_disabled) {
-			rx_chn->flows[i].irq_disabled = false;
-			enable_irq(rx_chn->flows[i].irq);
-		}
-	}
 
 	for (tx = 0; tx < common->tx_ch_num; tx++) {
 		ret = k3_udma_glue_enable_tx_chn(tx_chn[tx].tx_chn);
@@ -845,30 +876,13 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 		tx--;
 	}
 
-	for (flow_idx = 0; i < common->rx_ch_num_flows; flow_idx++) {
-		flow = &rx_chn->flows[flow_idx];
-		if (!flow->irq_disabled) {
-			disable_irq(flow->irq);
-			flow->irq_disabled = true;
-		}
-		napi_disable(&flow->napi_rx);
-	}
-
-	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);
-
-fail_rx:
-	for (i = 0; i < common->rx_ch_num_flows; i++)
-		k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, rx_chn,
-					  am65_cpsw_nuss_rx_cleanup, !!i);
-
-	am65_cpsw_destroy_xdp_rxqs(common);
+	am65_cpsw_destroy_rxqs(common);
 
 	return ret;
 }
 
 static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 {
-	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
 	int i;
 
@@ -902,31 +916,12 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 		k3_udma_glue_disable_tx_chn(tx_chn[i].tx_chn);
 	}
 
-	reinit_completion(&common->tdown_complete);
-	k3_udma_glue_tdown_rx_chn(rx_chn->rx_chn, true);
-
-	if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
-		i = wait_for_completion_timeout(&common->tdown_complete,
-						msecs_to_jiffies(1000));
-		if (!i)
-			dev_err(common->dev, "rx teardown timeout\n");
-	}
-
-	for (i = common->rx_ch_num_flows - 1; i >= 0; i--) {
-		napi_disable(&rx_chn->flows[i].napi_rx);
-		hrtimer_cancel(&rx_chn->flows[i].rx_hrtimer);
-		k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, rx_chn,
-					  am65_cpsw_nuss_rx_cleanup, !!i);
-	}
-
-	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);
-
+	am65_cpsw_destroy_rxqs(common);
 	cpsw_ale_stop(common->ale);
 
 	writel(0, common->cpsw_base + AM65_CPSW_REG_CTL);
 	writel(0, common->cpsw_base + AM65_CPSW_REG_STAT_PORT_EN);
 
-	am65_cpsw_destroy_xdp_rxqs(common);
-
 	dev_dbg(common->dev, "cpsw_nuss stopped\n");
 
 	return 0;
 }

From patchwork Fri Jan 17 14:06:35 2025
X-Patchwork-Id: 13943373
From: Roger Quadros
Date: Fri, 17 Jan 2025 16:06:35 +0200
Subject: [PATCH net-next v2 3/3] net: ethernet: ti: am65-cpsw: streamline TX queue creation and cleanup
Message-Id: <20250117-am65-cpsw-streamline-v2-3-91a29c97e569@kernel.org>
References: <20250117-am65-cpsw-streamline-v2-0-91a29c97e569@kernel.org>
In-Reply-To: <20250117-am65-cpsw-streamline-v2-0-91a29c97e569@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend
Cc: Siddharth Vadapalli, srk@ti.com, danishanwar@ti.com,
    "Russell King (Oracle)", netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros

Introduce am65_cpsw_create_txqs() and am65_cpsw_destroy_txqs() and use
them. This mirrors the RX rework: per-queue create/destroy helpers plus
an all-queues wrapper that tears down the channels (teardown trigger,
completion wait, reset/disable) in reverse order.
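Unlike RX, TX teardown is acknowledged per channel from the completion
path, so destroy must wait on a counted completion before resetting the
channels. A hedged sketch of that shape (struct txq_ctx and
trigger_teardown()/destroy_one() are hypothetical stand-ins; the
completion and barrier calls are the real kernel APIs; the teardown
handler is expected to dec-and-test tdown_cnt and call complete() when
the last channel drains):

#include <linux/completion.h>
#include <linux/device.h>
#include <linux/jiffies.h>

struct txq_ctx {
	struct device *dev;
	atomic_t tdown_cnt;
	struct completion tdown_complete;
	int n_txqs;
};

static void trigger_teardown(struct txq_ctx *c, int id);	/* async */
static void destroy_one(struct txq_ctx *c, int id);

static void destroy_all_txqs(struct txq_ctx *c)
{
	unsigned long left;
	int id;

	/* tell the completion handler how many teardown acks to expect */
	atomic_set(&c->tdown_cnt, c->n_txqs);
	smp_mb__after_atomic();	/* publish tdown_cnt before triggering */
	reinit_completion(&c->tdown_complete);

	for (id = 0; id < c->n_txqs; id++)
		trigger_teardown(c, id);

	/* complete() fires when the last ack arrives; bound the wait */
	left = wait_for_completion_timeout(&c->tdown_complete,
					   msecs_to_jiffies(1000));
	if (!left)
		dev_err(c->dev, "tx teardown timeout\n");

	/* reset/disable each channel only after the drain (or timeout) */
	for (id = c->n_txqs - 1; id >= 0; id--)
		destroy_one(c, id);
}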
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 123 +++++++++++++++++++------------
 1 file changed, 77 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 5c87002eeee9..391e78425777 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -501,6 +501,7 @@ static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow,
 				      struct page *page,
 				      bool allow_direct);
 static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma);
+static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma);
 
 static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
 {
@@ -655,6 +656,76 @@ static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common)
 	return ret;
 }
 
+static void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
+{
+	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
+
+	napi_disable(&tx_chn->napi_tx);
+	hrtimer_cancel(&tx_chn->tx_hrtimer);
+	k3_udma_glue_reset_tx_chn(tx_chn->tx_chn, tx_chn,
+				  am65_cpsw_nuss_tx_cleanup);
+	k3_udma_glue_disable_tx_chn(tx_chn->tx_chn);
+}
+
+static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
+	int id;
+
+	/* shutdown tx channels */
+	atomic_set(&common->tdown_cnt, common->tx_ch_num);
+	/* ensure new tdown_cnt value is visible */
+	smp_mb__after_atomic();
+	reinit_completion(&common->tdown_complete);
+
+	for (id = 0; id < common->tx_ch_num; id++)
+		k3_udma_glue_tdown_tx_chn(tx_chn[id].tx_chn, false);
+
+	id = wait_for_completion_timeout(&common->tdown_complete,
+					 msecs_to_jiffies(1000));
+	if (!id)
+		dev_err(common->dev, "tx teardown timeout\n");
+
+	for (id = common->tx_ch_num - 1; id >= 0; id--)
+		am65_cpsw_destroy_txq(common, id);
+}
+
+static int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
+{
+	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
+	int ret;
+
+	ret = k3_udma_glue_enable_tx_chn(tx_chn->tx_chn);
+	if (ret)
+		return ret;
+
+	napi_enable(&tx_chn->napi_tx);
+
+	return 0;
+}
+
+static int am65_cpsw_create_txqs(struct am65_cpsw_common *common)
+{
+	int id, ret;
+
+	for (id = 0; id < common->tx_ch_num; id++) {
+		ret = am65_cpsw_create_txq(common, id);
+		if (ret) {
+			dev_err(common->dev, "couldn't create txq %d: %d\n",
+				id, ret);
+			goto err;
+		}
+	}
+
+	return 0;
+
+err:
+	for (--id; id >= 0; id--)
+		am65_cpsw_destroy_txq(common, id);
+
+	return ret;
+}
+
 static int am65_cpsw_nuss_desc_idx(struct k3_cppi_desc_pool *desc_pool,
 				   void *desc,
 				   unsigned char dsize_log2)
@@ -789,9 +860,8 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
 static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 {
 	struct am65_cpsw_host *host_p = am65_common_get_host(common);
-	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
-	int port_idx, ret, tx;
 	u32 val, port_mask;
+	int port_idx, ret;
 
 	if (common->usage_count)
 		return 0;
@@ -855,27 +925,14 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 	if (ret)
 		return ret;
 
-	for (tx = 0; tx < common->tx_ch_num; tx++) {
-		ret = k3_udma_glue_enable_tx_chn(tx_chn[tx].tx_chn);
-		if (ret) {
-			dev_err(common->dev, "couldn't enable tx chn %d: %d\n",
-				tx, ret);
-			tx--;
-			goto fail_tx;
-		}
-		napi_enable(&tx_chn[tx].napi_tx);
-	}
+	ret = am65_cpsw_create_txqs(common);
+	if (ret)
+		goto cleanup_rx;
 
 	dev_dbg(common->dev, "cpsw_nuss started\n");
 
 	return 0;
 
-fail_tx:
-	while (tx >= 0) {
-		napi_disable(&tx_chn[tx].napi_tx);
-		k3_udma_glue_disable_tx_chn(tx_chn[tx].tx_chn);
-		tx--;
-	}
-
+cleanup_rx:
 	am65_cpsw_destroy_rxqs(common);
 
 	return ret;
@@ -883,39 +940,13 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 
 static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 {
-	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
-	int i;
-
 	if (common->usage_count != 1)
 		return 0;
 
 	cpsw_ale_control_set(common->ale, HOST_PORT_NUM,
 			     ALE_PORT_STATE, ALE_PORT_STATE_DISABLE);
 
-	/* shutdown tx channels */
-	atomic_set(&common->tdown_cnt, common->tx_ch_num);
-	/* ensure new tdown_cnt value is visible */
-	smp_mb__after_atomic();
-	reinit_completion(&common->tdown_complete);
-
-	for (i = 0; i < common->tx_ch_num; i++)
-		k3_udma_glue_tdown_tx_chn(tx_chn[i].tx_chn, false);
-
-	i = wait_for_completion_timeout(&common->tdown_complete,
-					msecs_to_jiffies(1000));
-	if (!i)
-		dev_err(common->dev, "tx timeout\n");
-
-	for (i = 0; i < common->tx_ch_num; i++) {
-		napi_disable(&tx_chn[i].napi_tx);
-		hrtimer_cancel(&tx_chn[i].tx_hrtimer);
-	}
-
-	for (i = 0; i < common->tx_ch_num; i++) {
-		k3_udma_glue_reset_tx_chn(tx_chn[i].tx_chn, &tx_chn[i],
-					  am65_cpsw_nuss_tx_cleanup);
-		k3_udma_glue_disable_tx_chn(tx_chn[i].tx_chn);
-	}
-
+	am65_cpsw_destroy_txqs(common);
 	am65_cpsw_destroy_rxqs(common);
 
 	cpsw_ale_stop(common->ale);