From patchwork Wed Jan 15 16:43:00 2025
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 13940649
From: Roger Quadros
Date: Wed, 15 Jan 2025 18:43:00 +0200
Subject: [PATCH net-next 1/4] net: ethernet: am65-cpsw: call netif_carrier_on/off() when appropriate
Message-Id: <20250115-am65-cpsw-streamline-v1-1-326975c36935@kernel.org>
References: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org>
In-Reply-To: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend
Cc: Siddharth Vadapalli, srk@ti.com, danishanwar@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros

Call netif_carrier_on()/netif_carrier_off() when the link goes up/down.
When the link comes up, wake the TX netif queues only if the network
device is running.

Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index dcb6662b473d..36c29d3db329 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -2155,6 +2155,7 @@ static void am65_cpsw_nuss_mac_link_down(struct phylink_config *config, unsigned
 	cpsw_sl_ctl_clr(port->slave.mac_sl, mac_control);
 
 	am65_cpsw_qos_link_down(ndev);
+	netif_carrier_off(ndev);
 	netif_tx_stop_all_queues(ndev);
 }
 
@@ -2196,7 +2197,9 @@ static void am65_cpsw_nuss_mac_link_up(struct phylink_config *config, struct phy
 	cpsw_ale_control_set(common->ale, port->port_id, ALE_PORT_STATE, ALE_PORT_STATE_FORWARD);
 
 	am65_cpsw_qos_link_up(ndev, speed);
-	netif_tx_wake_all_queues(ndev);
+	netif_carrier_on(ndev);
+	if (netif_running(ndev))
+		netif_tx_wake_all_queues(ndev);
 }
 
 static const struct phylink_mac_ops am65_cpsw_phylink_mac_ops = {

From patchwork Wed Jan 15 16:43:01 2025
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 13940650
From: Roger Quadros
Date: Wed, 15 Jan 2025 18:43:01 +0200
Subject: [PATCH net-next 2/4] net: ethernet: ti: am65-cpsw: streamline .probe() error handling
Message-Id: <20250115-am65-cpsw-streamline-v1-2-326975c36935@kernel.org>
References: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org>
In-Reply-To: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend
Cc: Siddharth Vadapalli, srk@ti.com, danishanwar@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros

Keep things simple by explicitly cleaning up on the .probe() error path
and in .remove(). Get rid of devm_add_action()/devm_remove_action() usage.
Rename am65_cpsw_disable_serdes_phy() to am65_cpsw_nuss_cleanup_slave_ports()
and move it right before am65_cpsw_nuss_init_slave_ports().

Get rid of am65_cpsw_nuss_phylink_cleanup() and introduce
am65_cpsw_nuss_cleanup_ndevs() right before am65_cpsw_nuss_init_ndevs().

Move channel initialization code out of am65_cpsw_nuss_register_ndevs()
into the new function am65_cpsw_nuss_init_chns(). Add
am65_cpsw_nuss_cleanup_chns() to do the reverse of
am65_cpsw_nuss_init_chns(). Add am65_cpsw_nuss_unregister_ndev() to do
the reverse of am65_cpsw_nuss_register_ndevs().

Use the introduced helpers in probe/remove.

Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 276 ++++++++++++++-----------------
 1 file changed, 126 insertions(+), 150 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 36c29d3db329..783ec461dbdc 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -2056,20 +2056,6 @@ static int am65_cpsw_enable_phy(struct phy *phy)
 	return 0;
 }
 
-static void am65_cpsw_disable_serdes_phy(struct am65_cpsw_common *common)
-{
-	struct am65_cpsw_port *port;
-	struct phy *phy;
-	int i;
-
-	for (i = 0; i < common->port_num; i++) {
-		port = &common->ports[i];
-		phy = port->slave.serdes_phy;
-		if (phy)
-			am65_cpsw_disable_phy(phy);
-	}
-}
-
 static int am65_cpsw_init_serdes_phy(struct device *dev, struct device_node *port_np,
 				     struct am65_cpsw_port *port)
 {
@@ -2222,14 +2208,20 @@ static void am65_cpsw_nuss_slave_disable_unused(struct am65_cpsw_port *port)
 	cpsw_sl_ctl_reset(port->slave.mac_sl);
 }
 
-static void am65_cpsw_nuss_free_tx_chns(void *data)
+static void am65_cpsw_nuss_cleanup_tx_chns(struct am65_cpsw_common *common)
 {
-	struct am65_cpsw_common *common = data;
+	struct device *dev = common->dev;
 	int i;
 
+	common->tx_ch_rate_msk = 0;
 	for (i = 0; i < common->tx_ch_num; i++) {
 		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
 
+		if (tx_chn->irq)
+			devm_free_irq(dev, tx_chn->irq, tx_chn);
+
+		netif_napi_del(&tx_chn->napi_tx);
+
 		if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
 			k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
@@ -2240,26 +2232,6 @@ static void am65_cpsw_nuss_free_tx_chns(void *data)
 	}
 }
 
-static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
-{
-	struct device *dev = common->dev;
-	int i;
-
-	devm_remove_action(dev, am65_cpsw_nuss_free_tx_chns, common);
-
-	common->tx_ch_rate_msk = 0;
-	for (i = 0; i < common->tx_ch_num; i++) {
-		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
-
-		if (tx_chn->irq)
-			devm_free_irq(dev, tx_chn->irq, tx_chn);
-
-		netif_napi_del(&tx_chn->napi_tx);
-	}
-
-	am65_cpsw_nuss_free_tx_chns(common);
-}
-
 static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
 {
 	struct device *dev = common->dev;
@@ -2360,36 +2332,14 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
 	}
 
 	ret = am65_cpsw_nuss_ndev_add_tx_napi(common);
-	if (ret) {
+	if (ret)
 		dev_err(dev, "Failed to add tx NAPI %d\n", ret);
-		goto err;
-	}
 
 err:
-	i = devm_add_action(dev, am65_cpsw_nuss_free_tx_chns, common);
-	if (i) {
-		dev_err(dev, "Failed to add free_tx_chns action %d\n", i);
-		return i;
-	}
-
 	return ret;
 }
 
-static void am65_cpsw_nuss_free_rx_chns(void *data)
-{
-	struct am65_cpsw_common *common = data;
-	struct am65_cpsw_rx_chn *rx_chn;
-
-	rx_chn = &common->rx_chns;
-
-	if (!IS_ERR_OR_NULL(rx_chn->desc_pool))
-		k3_cppi_desc_pool_destroy(rx_chn->desc_pool);
-
-	if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
-		k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
-}
-
-static void am65_cpsw_nuss_remove_rx_chns(struct am65_cpsw_common *common)
+static void am65_cpsw_nuss_cleanup_rx_chns(struct am65_cpsw_common *common)
 {
 	struct device *dev = common->dev;
 	struct am65_cpsw_rx_chn *rx_chn;
@@ -2398,7 +2348,6 @@ static void am65_cpsw_nuss_remove_rx_chns(struct am65_cpsw_common *common)
 	rx_chn = &common->rx_chns;
 	flows = rx_chn->flows;
 
-	devm_remove_action(dev, am65_cpsw_nuss_free_rx_chns, common);
 
 	for (i = 0; i < common->rx_ch_num_flows; i++) {
 		if (!(flows[i].irq < 0))
@@ -2406,7 +2355,11 @@ static void am65_cpsw_nuss_remove_rx_chns(struct am65_cpsw_common *common)
 		netif_napi_del(&flows[i].napi_rx);
 	}
 
-	am65_cpsw_nuss_free_rx_chns(common);
+	if (!IS_ERR_OR_NULL(rx_chn->desc_pool))
+		k3_cppi_desc_pool_destroy(rx_chn->desc_pool);
+
+	if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
+		k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
 
 	common->rx_flow_id_base = -1;
 }
@@ -2535,14 +2488,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
 	/* setup classifier to route priorities to flows */
 	cpsw_ale_classifier_setup_default(common->ale, common->rx_ch_num_flows);
-
 err:
-	i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common);
-	if (i) {
-		dev_err(dev, "Failed to add free_rx_chns action %d\n", i);
-		return i;
-	}
-
 	return ret;
 }
@@ -2626,6 +2572,22 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common)
 	return 0;
 }
 
+static void am65_cpsw_nuss_cleanup_slave_ports(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_port *port;
+	struct phy *phy;
+	int i;
+
+	for (i = 0; i < common->port_num; i++) {
+		port = &common->ports[i];
+		phy = port->slave.serdes_phy;
+		if (phy) {
+			am65_cpsw_disable_phy(phy);
+			port->slave.serdes_phy = NULL;
+		}
+	}
+}
+
 static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
 {
 	struct device_node *node, *port_np;
@@ -2743,18 +2705,6 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
 	return ret;
 }
 
-static void am65_cpsw_nuss_phylink_cleanup(struct am65_cpsw_common *common)
-{
-	struct am65_cpsw_port *port;
-	int i;
-
-	for (i = 0; i < common->port_num; i++) {
-		port = &common->ports[i];
-		if (port->slave.phylink)
-			phylink_destroy(port->slave.phylink);
-	}
-}
-
 static int am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common,
 					 u32 port_idx)
 {
@@ -2863,34 +2813,42 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
 	return 0;
 }
 
-static int am65_cpsw_nuss_init_ndevs(struct am65_cpsw_common *common)
+static void am65_cpsw_nuss_cleanup_ndevs(struct am65_cpsw_common *common)
 {
-	int ret;
+	struct am65_cpsw_port *port;
 	int i;
 
 	for (i = 0; i < common->port_num; i++) {
-		ret = am65_cpsw_nuss_init_port_ndev(common, i);
-		if (ret)
-			return ret;
+		port = &common->ports[i];
+		if (port->disabled)
+			continue;
+
+		if (port->slave.phylink) {
+			phylink_destroy(port->slave.phylink);
+			port->slave.phylink = NULL;
+		}
+
+		if (port->ndev) {
+			free_netdev(port->ndev);
+			port->ndev = NULL;
+		}
 	}
 
-	return ret;
+	common->dma_ndev = NULL;
 }
 
-static void am65_cpsw_nuss_cleanup_ndev(struct am65_cpsw_common *common)
+static int am65_cpsw_nuss_init_ndevs(struct am65_cpsw_common *common)
 {
-	struct am65_cpsw_port *port;
+	int ret;
 	int i;
 
 	for (i = 0; i < common->port_num; i++) {
-		port = &common->ports[i];
-		if (!port->ndev)
-			continue;
-		if (port->ndev->reg_state == NETREG_REGISTERED)
-			unregister_netdev(port->ndev);
-		free_netdev(port->ndev);
-		port->ndev = NULL;
+		ret = am65_cpsw_nuss_init_port_ndev(common, i);
+		if (ret)
+			return ret;
 	}
+
+	return ret;
 }
 
 static void am65_cpsw_port_offload_fwd_mark_update(struct am65_cpsw_common *common)
@@ -3338,21 +3296,29 @@ static void am65_cpsw_unregister_devlink(struct am65_cpsw_common *common)
 	devlink_free(common->devlink);
 }
 
-static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+static void am65_cpsw_nuss_cleanup_chns(struct am65_cpsw_common *common)
+{
+	am65_cpsw_nuss_cleanup_rx_chns(common);
+	am65_cpsw_nuss_cleanup_tx_chns(common);
+}
+
+static int am65_cpsw_nuss_init_chns(struct am65_cpsw_common *common)
 {
 	struct am65_cpsw_rx_chn *rx_chan = &common->rx_chns;
 	struct am65_cpsw_tx_chn *tx_chan = common->tx_chns;
-	struct device *dev = common->dev;
-	struct am65_cpsw_port *port;
-	int ret = 0, i;
+	int ret, i;
 
 	/* init tx channels */
 	ret = am65_cpsw_nuss_init_tx_chns(common);
 	if (ret)
 		return ret;
+
+	/* init rx channels */
 	ret = am65_cpsw_nuss_init_rx_chns(common);
-	if (ret)
+	if (ret) {
+		am65_cpsw_nuss_cleanup_tx_chns(common);
 		return ret;
+	}
 
 	/* The DMA Channels are not guaranteed to be in a clean state.
 	 * Reset and disable them to ensure that they are back to the
@@ -3371,13 +3337,32 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
 
 	k3_udma_glue_disable_rx_chn(rx_chan->rx_chn);
 
-	ret = am65_cpsw_nuss_register_devlink(common);
-	if (ret)
-		return ret;
+	return 0;
+}
+
+static void am65_cpsw_nuss_unregister_ndev(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_port *port;
+	int i;
 
 	for (i = 0; i < common->port_num; i++) {
 		port = &common->ports[i];
+		if (!port->ndev)
+			continue;
+
+		if (port->ndev->reg_state == NETREG_REGISTERED)
+			unregister_netdev(port->ndev);
+	}
+}
+
+static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+{
+	struct device *dev = common->dev;
+	struct am65_cpsw_port *port;
+	int ret = 0, i;
 
+	for (i = 0; i < common->port_num; i++) {
+		port = &common->ports[i];
 		if (!port->ndev)
 			continue;
@@ -3387,25 +3372,11 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
 		if (ret) {
 			dev_err(dev, "error registering slave net device%i %d\n",
 				i, ret);
-			goto err_cleanup_ndev;
+			return ret;
 		}
 	}
 
-	ret = am65_cpsw_register_notifiers(common);
-	if (ret)
-		goto err_cleanup_ndev;
-
-	/* can't auto unregister ndev using devm_add_action() due to
-	 * devres release sequence in DD core for DMA
-	 */
 	return 0;
-
-err_cleanup_ndev:
-	am65_cpsw_nuss_cleanup_ndev(common);
-	am65_cpsw_unregister_devlink(common);
-
-	return ret;
 }
 
 int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
@@ -3413,17 +3384,10 @@ int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
 {
 	int ret;
 
-	am65_cpsw_nuss_remove_tx_chns(common);
-	am65_cpsw_nuss_remove_rx_chns(common);
-
+	am65_cpsw_nuss_cleanup_chns(common);
 	common->tx_ch_num = num_tx;
 	common->rx_ch_num_flows = num_rx;
-	ret = am65_cpsw_nuss_init_tx_chns(common);
-	if (ret)
-		return ret;
-
-	ret = am65_cpsw_nuss_init_rx_chns(common);
-
+	ret = am65_cpsw_nuss_init_chns(common);
 	return ret;
 }
@@ -3599,7 +3563,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 
 	ret = am65_cpsw_nuss_init_slave_ports(common);
 	if (ret)
-		goto err_of_clear;
+		goto err_ports_clear;
 
 	/* init common data */
 	ale_params.dev = dev;
@@ -3613,7 +3577,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 	if (IS_ERR(common->ale)) {
 		dev_err(dev, "error initializing ale engine\n");
 		ret = PTR_ERR(common->ale);
-		goto err_of_clear;
+		goto err_ports_clear;
 	}
 
 	ale_entries = common->ale->params.ale_entries;
@@ -3622,7 +3586,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 			     GFP_KERNEL);
 
 	ret = am65_cpsw_init_cpts(common);
 	if (ret)
-		goto err_of_clear;
+		goto err_ports_clear;
 
 	/* init ports */
 	for (i = 0; i < common->port_num; i++)
@@ -3634,19 +3598,39 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 
 	ret = am65_cpsw_nuss_init_ndevs(common);
 	if (ret)
-		goto err_ndevs_clear;
+		goto err_cpts_release;
+
+	ret = am65_cpsw_nuss_init_chns(common);
+	if (ret)
+		goto err_cleanup_ndevs;
+
+	ret = am65_cpsw_nuss_register_devlink(common);
+	if (ret)
+		goto err_cleanup_chns;
 
 	ret = am65_cpsw_nuss_register_ndevs(common);
 	if (ret)
-		goto err_ndevs_clear;
+		goto err_unregister_devlink;
+
+	ret = am65_cpsw_register_notifiers(common);
+	if (ret)
+		goto err_unregister_ndev;
 
 	pm_runtime_put(dev);
 	return 0;
 
-err_ndevs_clear:
-	am65_cpsw_nuss_cleanup_ndev(common);
-	am65_cpsw_nuss_phylink_cleanup(common);
+err_unregister_ndev:
+	am65_cpsw_nuss_unregister_ndev(common);
+err_unregister_devlink:
+	am65_cpsw_unregister_devlink(common);
+err_cleanup_chns:
+	am65_cpsw_nuss_cleanup_chns(common);
+err_cleanup_ndevs:
+	am65_cpsw_nuss_cleanup_ndevs(common);
+err_cpts_release:
 	am65_cpts_release(common->cpts);
+err_ports_clear:
+	am65_cpsw_nuss_cleanup_slave_ports(common);
 err_of_clear:
 	if (common->mdio_dev)
 		of_platform_device_destroy(common->mdio_dev, NULL);
@@ -3675,15 +3659,12 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
 	}
 
 	am65_cpsw_unregister_notifiers(common);
-
-	/* must unregister ndevs here because DD release_driver routine calls
-	 * dma_deconfigure(dev) before devres_release_all(dev)
-	 */
-	am65_cpsw_nuss_cleanup_ndev(common);
+	am65_cpsw_nuss_unregister_ndev(common);
 	am65_cpsw_unregister_devlink(common);
-	am65_cpsw_nuss_phylink_cleanup(common);
+	am65_cpsw_nuss_cleanup_chns(common);
+	am65_cpsw_nuss_cleanup_ndevs(common);
 	am65_cpts_release(common->cpts);
-	am65_cpsw_disable_serdes_phy(common);
+	am65_cpsw_nuss_cleanup_slave_ports(common);
 
 	if (common->mdio_dev)
 		of_platform_device_destroy(common->mdio_dev, NULL);
@@ -3723,9 +3704,7 @@ static int am65_cpsw_nuss_suspend(struct device *dev)
 	}
 
 	am65_cpts_suspend(common->cpts);
-
-	am65_cpsw_nuss_remove_rx_chns(common);
-	am65_cpsw_nuss_remove_tx_chns(common);
+	am65_cpsw_nuss_cleanup_chns(common);
 
 	return 0;
 }
@@ -3738,10 +3717,7 @@ static int am65_cpsw_nuss_resume(struct device *dev)
 	struct net_device *ndev;
 	int i, ret;
 
-	ret = am65_cpsw_nuss_init_tx_chns(common);
-	if (ret)
-		return ret;
-	ret = am65_cpsw_nuss_init_rx_chns(common);
+	ret = am65_cpsw_nuss_init_chns(common);
 	if (ret)
 		return ret;

From patchwork Wed Jan 15 16:43:02 2025
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 13940651
From: Roger Quadros
Date: Wed, 15 Jan 2025 18:43:02 +0200
Subject: [PATCH net-next 3/4] net: ethernet: ti: am65-cpsw: streamline RX queue creation and cleanup
Message-Id: <20250115-am65-cpsw-streamline-v1-3-326975c36935@kernel.org>
References: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org>
In-Reply-To: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend
Cc: Siddharth Vadapalli, srk@ti.com, danishanwar@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros

Introduce am65_cpsw_create_rxqs() and am65_cpsw_destroy_rxqs() and use them.
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 243 +++++++++++++++----------------
 1 file changed, 119 insertions(+), 124 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 783ec461dbdc..b5e679bb3f3c 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -497,35 +497,61 @@ static void am65_cpsw_init_host_port_switch(struct am65_cpsw_common *common);
 static void am65_cpsw_init_host_port_emac(struct am65_cpsw_common *common);
 static void am65_cpsw_init_port_switch_ale(struct am65_cpsw_port *port);
 static void am65_cpsw_init_port_emac_ale(struct am65_cpsw_port *port);
+static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow,
+				      struct page *page,
+				      bool allow_direct);
+static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma);
 
-static void am65_cpsw_destroy_xdp_rxqs(struct am65_cpsw_common *common)
+static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_rx_flow *flow;
 	struct xdp_rxq_info *rxq;
-	int id, port;
+	int port;
 
-	for (id = 0; id < common->rx_ch_num_flows; id++) {
-		flow = &rx_chn->flows[id];
+	flow = &rx_chn->flows[id];
+	napi_disable(&flow->napi_rx);
+	hrtimer_cancel(&flow->rx_hrtimer);
+	k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, id, rx_chn,
+				  am65_cpsw_nuss_rx_cleanup, !!id);
 
-		for (port = 0; port < common->port_num; port++) {
-			if (!common->ports[port].ndev)
-				continue;
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
 
-			rxq = &common->ports[port].xdp_rxq[id];
+		rxq = &common->ports[port].xdp_rxq[id];
 
-			if (xdp_rxq_info_is_reg(rxq))
-				xdp_rxq_info_unreg(rxq);
-		}
+		if (xdp_rxq_info_is_reg(rxq))
+			xdp_rxq_info_unreg(rxq);
+	}
 
-		if (flow->page_pool) {
-			page_pool_destroy(flow->page_pool);
-			flow->page_pool = NULL;
-		}
+	if (flow->page_pool) {
+		page_pool_destroy(flow->page_pool);
+		flow->page_pool = NULL;
 	}
 }
 
-static int am65_cpsw_create_xdp_rxqs(struct am65_cpsw_common *common)
+static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	int id;
+
+	reinit_completion(&common->tdown_complete);
+	k3_udma_glue_tdown_rx_chn(rx_chn->rx_chn, true);
+
+	if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
+		id = wait_for_completion_timeout(&common->tdown_complete, msecs_to_jiffies(1000));
+		if (!id)
+			dev_err(common->dev, "rx teardown timeout\n");
+	}
+
+	for (id = common->rx_ch_num_flows - 1; id >= 0; id--)
+		am65_cpsw_destroy_rxq(common, id);
+
+	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
+}
+
+static int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct page_pool_params pp_params = {
@@ -540,45 +566,92 @@ static int am65_cpsw_create_xdp_rxqs(struct am65_cpsw_common *common)
 	struct am65_cpsw_rx_flow *flow;
 	struct xdp_rxq_info *rxq;
 	struct page_pool *pool;
-	int id, port, ret;
+	struct page *page;
+	int port, ret, i;
 
-	for (id = 0; id < common->rx_ch_num_flows; id++) {
-		flow = &rx_chn->flows[id];
-		pp_params.napi = &flow->napi_rx;
-		pool = page_pool_create(&pp_params);
-		if (IS_ERR(pool)) {
-			ret = PTR_ERR(pool);
+	flow = &rx_chn->flows[id];
+	pp_params.napi = &flow->napi_rx;
+	pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool)) {
+		ret = PTR_ERR(pool);
+		return ret;
+	}
+
+	flow->page_pool = pool;
+
+	/* using same page pool is allowed as no running rx handlers
+	 * simultaneously for both ndevs
+	 */
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			/* FIXME should we BUG here? */
+			continue;
+
+		rxq = &common->ports[port].xdp_rxq[id];
+		ret = xdp_rxq_info_reg(rxq, common->ports[port].ndev,
+				       id, flow->napi_rx.napi_id);
+		if (ret)
+			goto err;
+
+		ret = xdp_rxq_info_reg_mem_model(rxq,
+						 MEM_TYPE_PAGE_POOL,
+						 pool);
+		if (ret)
+			goto err;
+	}
+
+	for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
+		page = page_pool_dev_alloc_pages(flow->page_pool);
+		if (!page) {
+			dev_err(common->dev, "cannot allocate page in flow %d\n",
+				id);
+			ret = -ENOMEM;
 			goto err;
 		}
-		flow->page_pool = pool;
 
+		ret = am65_cpsw_nuss_rx_push(common, page, id);
+		if (ret < 0) {
+			dev_err(common->dev,
+				"cannot submit page to rx channel flow %d, error %d\n",
+				id, ret);
+			am65_cpsw_put_page(flow, page, false);
+			goto err;
+		}
+	}
 
-		/* using same page pool is allowed as no running rx handlers
-		 * simultaneously for both ndevs
-		 */
-		for (port = 0; port < common->port_num; port++) {
-			if (!common->ports[port].ndev)
-				continue;
+	napi_enable(&flow->napi_rx);
+	return 0;
 
-			rxq = &common->ports[port].xdp_rxq[id];
+err:
+	am65_cpsw_destroy_rxq(common, id);
+	return ret;
+}
 
-			ret = xdp_rxq_info_reg(rxq, common->ports[port].ndev,
-					       id, flow->napi_rx.napi_id);
-			if (ret)
-				goto err;
+static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common)
+{
+	int id, ret;
 
-			ret = xdp_rxq_info_reg_mem_model(rxq,
-							 MEM_TYPE_PAGE_POOL,
-							 pool);
-			if (ret)
-				goto err;
+	for (id = 0; id < common->rx_ch_num_flows; id++) {
+		ret = am65_cpsw_create_rxq(common, id);
+		if (ret) {
+			dev_err(common->dev, "couldn't create rxq %d: %d\n",
+				id, ret);
+			goto err;
 		}
 	}
 
+	ret = k3_udma_glue_enable_rx_chn(common->rx_chns.rx_chn);
+	if (ret) {
+		dev_err(common->dev, "couldn't enable rx chn: %d\n", ret);
+		goto err;
+	}
+
 	return 0;
 
 err:
-	am65_cpsw_destroy_xdp_rxqs(common);
+	for (--id; id >= 0; id--)
+		am65_cpsw_destroy_rxq(common, id);
+
 	return ret;
 }
@@ -642,7 +715,6 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 
 	k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
 	dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
-
 	am65_cpsw_put_page(&rx_chn->flows[flow_id], page, false);
 }
@@ -717,12 +789,9 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
 static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 {
 	struct am65_cpsw_host *host_p = am65_common_get_host(common);
-	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
-	int port_idx, i, ret, tx, flow_idx;
-	struct am65_cpsw_rx_flow *flow;
+	int port_idx, ret, tx;
 	u32 val, port_mask;
-	struct page *page;
 
 	if (common->usage_count)
 		return 0;
@@ -782,47 +851,9 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 
 	am65_cpsw_qos_tx_p0_rate_init(common);
 
-	ret = am65_cpsw_create_xdp_rxqs(common);
-	if (ret) {
-		dev_err(common->dev, "Failed to create XDP rx queues\n");
+	ret = am65_cpsw_create_rxqs(common);
+	if (ret)
 		return ret;
-	}
-
-	for (flow_idx = 0; flow_idx < common->rx_ch_num_flows; flow_idx++) {
-		flow = &rx_chn->flows[flow_idx];
-		for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
-			page = page_pool_dev_alloc_pages(flow->page_pool);
-			if (!page) {
-				dev_err(common->dev, "cannot allocate page in flow %d\n",
-					flow_idx);
-				ret = -ENOMEM;
-				goto fail_rx;
-			}
-
-			ret = am65_cpsw_nuss_rx_push(common, page, flow_idx);
-			if (ret < 0) {
-				dev_err(common->dev,
-					"cannot submit page to rx channel flow %d, error %d\n",
-					flow_idx, ret);
-				am65_cpsw_put_page(flow, page, false);
-				goto fail_rx;
-			}
-		}
-	}
-
-	ret = k3_udma_glue_enable_rx_chn(rx_chn->rx_chn);
-	if (ret) {
-		dev_err(common->dev, "couldn't enable rx chn: %d\n", ret);
-		goto fail_rx;
-	}
-
-	for (i = 0; i < common->rx_ch_num_flows ; i++) {
-		napi_enable(&rx_chn->flows[i].napi_rx);
-		if (rx_chn->flows[i].irq_disabled) {
-			rx_chn->flows[i].irq_disabled = false;
-			enable_irq(rx_chn->flows[i].irq);
-		}
-	}
 
 	for (tx = 0; tx < common->tx_ch_num; tx++) {
 		ret = k3_udma_glue_enable_tx_chn(tx_chn[tx].tx_chn);
@@ -845,30 +876,13 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 		tx--;
 	}
 
-	for (flow_idx = 0; i < common->rx_ch_num_flows; flow_idx++) {
-		flow = &rx_chn->flows[flow_idx];
-		if (!flow->irq_disabled) {
-			disable_irq(flow->irq);
-			flow->irq_disabled = true;
-		}
-		napi_disable(&flow->napi_rx);
-	}
-
-	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);
-
-fail_rx:
-	for (i = 0; i < common->rx_ch_num_flows; i++)
-		k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, rx_chn,
-					  am65_cpsw_nuss_rx_cleanup, !!i);
-
-	am65_cpsw_destroy_xdp_rxqs(common);
+	am65_cpsw_destroy_rxqs(common);
 
 	return ret;
 }
 
 static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 {
-	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
 	int i;
@@ -902,31 +916,12 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 		k3_udma_glue_disable_tx_chn(tx_chn[i].tx_chn);
 	}
 
-	reinit_completion(&common->tdown_complete);
-	k3_udma_glue_tdown_rx_chn(rx_chn->rx_chn, true);
-
-	if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
-		i = wait_for_completion_timeout(&common->tdown_complete, msecs_to_jiffies(1000));
-		if (!i)
-			dev_err(common->dev, "rx teardown timeout\n");
-	}
-
-	for (i = common->rx_ch_num_flows - 1; i >= 0; i--) {
-		napi_disable(&rx_chn->flows[i].napi_rx);
-		hrtimer_cancel(&rx_chn->flows[i].rx_hrtimer);
-		k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, rx_chn,
-					  am65_cpsw_nuss_rx_cleanup, !!i);
-	}
-
-	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);
-
+	am65_cpsw_destroy_rxqs(common);
 	cpsw_ale_stop(common->ale);
 
 	writel(0, common->cpsw_base + AM65_CPSW_REG_CTL);
 	writel(0, common->cpsw_base + AM65_CPSW_REG_STAT_PORT_EN);
 
-	am65_cpsw_destroy_xdp_rxqs(common);
-
 	dev_dbg(common->dev, "cpsw_nuss stopped\n");
 	return 0;
 }

From patchwork Wed Jan 15 16:43:03 2025
X-Patchwork-Submitter: Roger Quadros X-Patchwork-Id: 13940652 X-Patchwork-Delegate: kuba@kernel.org
From: Roger Quadros Date: Wed, 15 Jan 2025 18:43:03 +0200 Subject: [PATCH net-next 4/4] net: ethernet: ti: am65-cpsw: streamline TX queue creation and cleanup Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250115-am65-cpsw-streamline-v1-4-326975c36935@kernel.org> References: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org> In-Reply-To: <20250115-am65-cpsw-streamline-v1-0-326975c36935@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Russell King , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: Siddharth Vadapalli , srk@ti.com, danishanwar@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros X-Mailer: b4 0.14.1
Introduce am65_cpsw_create_txqs() and am65_cpsw_destroy_txqs() and use them. Signed-off-by: Roger Quadros --- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 123 +++++++++++++++++++------------ 1 file changed, 77 insertions(+), 46 deletions(-) diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c index b5e679bb3f3c..55c50fefe4b5 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c @@ -501,6 +501,7 @@ static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow, struct page *page, bool allow_direct); static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma); +static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma); static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id) { @@ -655,6 +656,76 @@ static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common) return ret; } +static void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id) +{ + struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id]; + + napi_disable(&tx_chn->napi_tx); + hrtimer_cancel(&tx_chn->tx_hrtimer); + k3_udma_glue_reset_tx_chn(tx_chn->tx_chn, tx_chn, + am65_cpsw_nuss_tx_cleanup); + k3_udma_glue_disable_tx_chn(tx_chn->tx_chn); +} + +static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common) +{ + struct am65_cpsw_tx_chn *tx_chn = common->tx_chns; + int id; + + /* shutdown tx channels */ + atomic_set(&common->tdown_cnt, common->tx_ch_num); + /* ensure new tdown_cnt value is visible */ + smp_mb__after_atomic(); + reinit_completion(&common->tdown_complete); + + for (id = 0; id < common->tx_ch_num; id++) + k3_udma_glue_tdown_tx_chn(tx_chn[id].tx_chn, false); + + id = wait_for_completion_timeout(&common->tdown_complete, + 
msecs_to_jiffies(1000)); + if (!id) + dev_err(common->dev, "tx teardown timeout\n"); + + for (id = common->tx_ch_num - 1; id >= 0; id--) + am65_cpsw_destroy_txq(common, id); +} + +static int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id) +{ + struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id]; + int ret; + + ret = k3_udma_glue_enable_tx_chn(tx_chn->tx_chn); + if (ret) + return ret; + + napi_enable(&tx_chn->napi_tx); + + return 0; +} + +static int am65_cpsw_create_txqs(struct am65_cpsw_common *common) +{ + int id, ret; + + for (id = 0; id < common->tx_ch_num; id++) { + ret = am65_cpsw_create_txq(common, id); + if (ret) { + dev_err(common->dev, "couldn't create txq %d: %d\n", + id, ret); + goto err; + } + } + + return 0; + +err: + for (--id; id >= 0; id--) + am65_cpsw_destroy_txq(common, id); + + return ret; +} + static int am65_cpsw_nuss_desc_idx(struct k3_cppi_desc_pool *desc_pool, void *desc, unsigned char dsize_log2) @@ -789,9 +860,8 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr, static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) { struct am65_cpsw_host *host_p = am65_common_get_host(common); - struct am65_cpsw_tx_chn *tx_chn = common->tx_chns; - int port_idx, ret, tx; u32 val, port_mask; + int port_idx, ret; if (common->usage_count) return 0; @@ -855,27 +925,14 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) if (ret) return ret; - for (tx = 0; tx < common->tx_ch_num; tx++) { - ret = k3_udma_glue_enable_tx_chn(tx_chn[tx].tx_chn); - if (ret) { - dev_err(common->dev, "couldn't enable tx chn %d: %d\n", - tx, ret); - tx--; - goto fail_tx; - } - napi_enable(&tx_chn[tx].napi_tx); - } + ret = am65_cpsw_create_txqs(common); + if (ret) + goto cleanup_rx; dev_dbg(common->dev, "cpsw_nuss started\n"); return 0; -fail_tx: - while (tx >= 0) { - napi_disable(&tx_chn[tx].napi_tx); - k3_udma_glue_disable_tx_chn(tx_chn[tx].tx_chn); - tx--; - } - +cleanup_rx: am65_cpsw_destroy_rxqs(common); return ret; 
@@ -883,39 +940,13 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common) { - struct am65_cpsw_tx_chn *tx_chn = common->tx_chns; - int i; - if (common->usage_count != 1) return 0; cpsw_ale_control_set(common->ale, HOST_PORT_NUM, ALE_PORT_STATE, ALE_PORT_STATE_DISABLE); - /* shutdown tx channels */ - atomic_set(&common->tdown_cnt, common->tx_ch_num); - /* ensure new tdown_cnt value is visible */ - smp_mb__after_atomic(); - reinit_completion(&common->tdown_complete); - - for (i = 0; i < common->tx_ch_num; i++) - k3_udma_glue_tdown_tx_chn(tx_chn[i].tx_chn, false); - - i = wait_for_completion_timeout(&common->tdown_complete, - msecs_to_jiffies(1000)); - if (!i) - dev_err(common->dev, "tx teardown timeout\n"); - for (i = 0; i < common->tx_ch_num; i++) { - napi_disable(&tx_chn[i].napi_tx); - hrtimer_cancel(&tx_chn[i].tx_hrtimer); - } - - for (i = 0; i < common->tx_ch_num; i++) { - k3_udma_glue_reset_tx_chn(tx_chn[i].tx_chn, &tx_chn[i], - am65_cpsw_nuss_tx_cleanup); - k3_udma_glue_disable_tx_chn(tx_chn[i].tx_chn); - } - + am65_cpsw_destroy_txqs(common); am65_cpsw_destroy_rxqs(common); cpsw_ale_stop(common->ale);