From patchwork Thu Aug 29 12:03:19 2024
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 13783078
X-Patchwork-Delegate: kuba@kernel.org
From: Roger Quadros
Date: Thu, 29 Aug 2024 15:03:19 +0300
Subject: [PATCH net 1/3] net: ethernet: ti: am65-cpsw: fix XDP_DROP, XDP_TX and XDP_REDIRECT
Message-Id: <20240829-am65-cpsw-xdp-v1-1-ff3c81054a5e@kernel.org>
References: <20240829-am65-cpsw-xdp-v1-0-ff3c81054a5e@kernel.org>
In-Reply-To: <20240829-am65-cpsw-xdp-v1-0-ff3c81054a5e@kernel.org>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Julien Panis, Jacob Keller
Cc: Siddharth Vadapalli, Md Danish Anwar, Vignesh Raghavendra,
 Govindarajan Sriramakrishnan, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros
X-Mailer: b4 0.14.1

The following XDP_DROP test from [1] stalls the interface after
250 packets.

~# xdp-bench drop -m native eth0

This is because new RX requests are never queued. Fix that.

The below XDP_TX test from [1] fails with a warning:

[  499.947381] XDP_WARN: xdp_update_frame_from_buff(line:277): Driver BUG: missing reserved tailroom

~# xdp-bench tx -m native eth0

Fix that by using PAGE_SIZE during xdp_init_buff().

In the XDP_REDIRECT case only 1 packet was processed in rx_poll.
Fix it to process up to budget packets.

Fix all XDP error cases to call trace_xdp_exception() and drop
the packet in am65_cpsw_run_xdp().

[1] xdp-tools suite https://github.com/xdp-project/xdp-tools

Fixes: 8acacc40f733 ("net: ethernet: ti: am65-cpsw: Add minimal XDP support")
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 62 +++++++++++++++++---------------
 1 file changed, 34 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 81d9f21086ec..9fd2ba26716c 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -156,12 +156,13 @@
 #define AM65_CPSW_CPPI_TX_PKT_TYPE 0x7

 /* XDP */
-#define AM65_CPSW_XDP_CONSUMED 2
-#define AM65_CPSW_XDP_REDIRECT 1
+#define AM65_CPSW_XDP_CONSUMED BIT(1)
+#define AM65_CPSW_XDP_REDIRECT BIT(0)
 #define AM65_CPSW_XDP_PASS 0

 /* Include headroom compatible with both skb and xdpf */
-#define AM65_CPSW_HEADROOM (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
+#define AM65_CPSW_HEADROOM_NA (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
+#define AM65_CPSW_HEADROOM ALIGN(AM65_CPSW_HEADROOM_NA, sizeof(long))

 static void am65_cpsw_port_set_sl_mac(struct am65_cpsw_port *slave,
				       const u8 *dev_addr)
@@ -933,7 +934,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
	host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
	if (unlikely(!host_desc)) {
		ndev->stats.tx_dropped++;
-		return -ENOMEM;
+		return AM65_CPSW_XDP_CONSUMED;	/* drop */
	}

	am65_cpsw_nuss_set_buf_type(tx_chn, host_desc, buf_type);
@@ -942,7 +943,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
			     pkt_len, DMA_TO_DEVICE);
	if (unlikely(dma_mapping_error(tx_chn->dma_dev, dma_buf))) {
		ndev->stats.tx_dropped++;
-		ret = -ENOMEM;
+		ret = AM65_CPSW_XDP_CONSUMED;	/* drop */
		goto pool_free;
	}

@@ -977,6 +978,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
		/* Inform BQL */
		netdev_tx_completed_queue(netif_txq, 1, pkt_len);
		ndev->stats.tx_errors++;
+		ret = AM65_CPSW_XDP_CONSUMED;	/* drop */
		goto dma_unmap;
	}

@@ -1004,6 +1006,7 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
	struct bpf_prog *prog;
	struct page *page;
	u32 act;
+	int err;

	prog = READ_ONCE(port->xdp_prog);
	if (!prog)
@@ -1023,14 +1026,14 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,

		xdpf = xdp_convert_buff_to_frame(xdp);
		if (unlikely(!xdpf))
-			break;
+			goto drop;

		__netif_tx_lock(netif_txq, cpu);
-		ret = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
+		err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
					     AM65_CPSW_TX_BUF_TYPE_XDP_TX);
		__netif_tx_unlock(netif_txq);
-		if (ret)
-			break;
+		if (err)
+			goto drop;

		ndev->stats.rx_bytes += *len;
		ndev->stats.rx_packets++;
@@ -1038,7 +1041,7 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
		goto out;
	case XDP_REDIRECT:
		if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
-			break;
+			goto drop;

		ndev->stats.rx_bytes += *len;
		ndev->stats.rx_packets++;
@@ -1048,6 +1051,7 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
		bpf_warn_invalid_xdp_action(ndev, prog, act);
		fallthrough;
	case XDP_ABORTED:
+drop:
		trace_xdp_exception(ndev, prog, act);
		fallthrough;
	case XDP_DROP:
@@ -1056,7 +1060,6 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,

	page = virt_to_head_page(xdp->data);
	am65_cpsw_put_page(rx_chn, page, true, desc_idx);
-
 out:
	return ret;
 }
@@ -1095,7 +1098,7 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
 }

 static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
-				     u32 flow_idx, int cpu)
+				     u32 flow_idx, int cpu, int *xdp_state)
 {
	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
	u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
@@ -1114,6 +1117,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
	void **swdata;
	u32 *psdata;

+	*xdp_state = AM65_CPSW_XDP_PASS;
	ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx, &desc_dma);
	if (ret) {
		if (ret != -ENODATA)
@@ -1161,15 +1165,13 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
	}

	if (port->xdp_prog) {
-		xdp_init_buff(&xdp, AM65_CPSW_MAX_PACKET_SIZE, &port->xdp_rxq);
-
-		xdp_prepare_buff(&xdp, page_addr, skb_headroom(skb),
+		xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq);
+		xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
				 pkt_len, false);
-
-		ret = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
-					cpu, &pkt_len);
-		if (ret != AM65_CPSW_XDP_PASS)
-			return ret;
+		*xdp_state = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
+					       cpu, &pkt_len);
+		if (*xdp_state != AM65_CPSW_XDP_PASS)
+			goto allocate;

		/* Compute additional headroom to be reserved */
		headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
@@ -1193,9 +1195,13 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
	stats->rx_bytes += pkt_len;
	u64_stats_update_end(&stats->syncp);

+allocate:
	new_page = page_pool_dev_alloc_pages(rx_chn->page_pool);
-	if (unlikely(!new_page))
+	if (unlikely(!new_page)) {
+		dev_err(dev, "page alloc failed\n");
		return -ENOMEM;
+	}
+
	rx_chn->pages[desc_idx] = new_page;

	if (netif_dormant(ndev)) {
@@ -1229,8 +1235,9 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
	struct am65_cpsw_common *common = am65_cpsw_napi_to_common(napi_rx);
	int flow = AM65_CPSW_MAX_RX_FLOWS;
	int cpu = smp_processor_id();
-	bool xdp_redirect = false;
+	int xdp_state_or = 0;
	int cur_budget, ret;
+	int xdp_state;
	int num_rx = 0;

	/* process every flow */
@@ -1238,12 +1245,11 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
		cur_budget = budget - num_rx;

		while (cur_budget--) {
-			ret = am65_cpsw_nuss_rx_packets(common, flow, cpu);
-			if (ret) {
-				if (ret == AM65_CPSW_XDP_REDIRECT)
-					xdp_redirect = true;
+			ret = am65_cpsw_nuss_rx_packets(common, flow, cpu,
+							&xdp_state);
+			xdp_state_or |= xdp_state;
+			if (ret)
				break;
-			}
			num_rx++;
		}

@@ -1251,7 +1257,7 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
		break;
	}

-	if (xdp_redirect)
+	if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
		xdp_do_flush();

	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);

From patchwork Thu Aug 29 12:03:20 2024
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 13783079
X-Patchwork-Delegate: kuba@kernel.org
From: Roger Quadros
Date: Thu, 29 Aug 2024 15:03:20 +0300
Subject: [PATCH net 2/3] net: ethernet: ti: am65-cpsw: Fix NULL dereference on XDP_TX
Message-Id: <20240829-am65-cpsw-xdp-v1-2-ff3c81054a5e@kernel.org>
References: <20240829-am65-cpsw-xdp-v1-0-ff3c81054a5e@kernel.org>
In-Reply-To: <20240829-am65-cpsw-xdp-v1-0-ff3c81054a5e@kernel.org>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Julien Panis, Jacob Keller
Cc: Siddharth Vadapalli, Md Danish Anwar, Vignesh Raghavendra,
 Govindarajan Sriramakrishnan, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros
X-Mailer: b4 0.14.1

If the number of TX queues is set to 1, we get a NULL pointer
dereference during XDP_TX.

~# ethtool -L eth0 tx 1
~# ./xdp-trafficgen udp -A -a eth0 -t 2
Transmitting on eth0 (ifindex 2)
[  241.135257] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000030

Fix this by using the actual number of TX queues instead of the
maximum number of TX queues when picking the TX channel in
am65_cpsw_ndo_xdp_xmit().
Fixes: 8acacc40f733 ("net: ethernet: ti: am65-cpsw: Add minimal XDP support")
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 9fd2ba26716c..03577a008df2 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -1924,12 +1924,13 @@ static int am65_cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
 static int am65_cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
				  struct xdp_frame **frames, u32 flags)
 {
+	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
	struct am65_cpsw_tx_chn *tx_chn;
	struct netdev_queue *netif_txq;
	int cpu = smp_processor_id();
	int i, nxmit = 0;

-	tx_chn = &am65_ndev_to_common(ndev)->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES];
+	tx_chn = &common->tx_chns[cpu % common->tx_ch_num];
	netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);

	__netif_tx_lock(netif_txq, cpu);

From patchwork Thu Aug 29 12:03:21 2024
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 13783080
X-Patchwork-Delegate: kuba@kernel.org
From: Roger Quadros
Date: Thu, 29 Aug 2024 15:03:21 +0300
Subject: [PATCH net 3/3] net: ethernet: ti: am65-cpsw: Fix RX statistics for XDP_TX and XDP_REDIRECT
Message-Id: <20240829-am65-cpsw-xdp-v1-3-ff3c81054a5e@kernel.org>
References: <20240829-am65-cpsw-xdp-v1-0-ff3c81054a5e@kernel.org>
In-Reply-To: <20240829-am65-cpsw-xdp-v1-0-ff3c81054a5e@kernel.org>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Julien Panis, Jacob Keller
Cc: Siddharth Vadapalli, Md Danish Anwar, Vignesh Raghavendra,
 Govindarajan Sriramakrishnan, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros
X-Mailer: b4 0.14.1

We no longer use ndev->stats for rx_packets and rx_bytes. Instead,
we use per-CPU stats which are collated in
am65_cpsw_nuss_ndo_get_stats().

Fix RX statistics for the XDP_TX and XDP_REDIRECT cases.
Fixes: 8acacc40f733 ("net: ethernet: ti: am65-cpsw: Add minimal XDP support")
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 03577a008df2..b06b8872b4eb 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -998,7 +998,9 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
			     int desc_idx, int cpu, int *len)
 {
	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct am65_cpsw_ndev_priv *ndev_priv;
	struct net_device *ndev = port->ndev;
+	struct am65_cpsw_ndev_stats *stats;
	int ret = AM65_CPSW_XDP_CONSUMED;
	struct am65_cpsw_tx_chn *tx_chn;
	struct netdev_queue *netif_txq;
@@ -1016,6 +1018,9 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
	/* XDP prog might have changed packet data and boundaries */
	*len = xdp->data_end - xdp->data;

+	ndev_priv = netdev_priv(ndev);
+	stats = this_cpu_ptr(ndev_priv->stats);
+
	switch (act) {
	case XDP_PASS:
		ret = AM65_CPSW_XDP_PASS;
@@ -1035,16 +1040,20 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
		if (err)
			goto drop;

-		ndev->stats.rx_bytes += *len;
-		ndev->stats.rx_packets++;
+		u64_stats_update_begin(&stats->syncp);
+		stats->rx_bytes += *len;
+		stats->rx_packets++;
+		u64_stats_update_end(&stats->syncp);
		ret = AM65_CPSW_XDP_CONSUMED;
		goto out;
	case XDP_REDIRECT:
		if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
			goto drop;

-		ndev->stats.rx_bytes += *len;
-		ndev->stats.rx_packets++;
+		u64_stats_update_begin(&stats->syncp);
+		stats->rx_bytes += *len;
+		stats->rx_packets++;
+		u64_stats_update_end(&stats->syncp);
		ret = AM65_CPSW_XDP_REDIRECT;
		goto out;
	default: