
[net-next] mlxsw: Switch to napi_gro_receive()

Message ID 21258fe55f608ccf1ee2783a5a4534220af28903.1734354812.git.petrm@nvidia.com (mailing list archive)
State Accepted
Commit 1ba06ca96ca255c079ce5ea6a75cc0bfd5e97921
Delegated to: Netdev Maintainers
Series [net-next] mlxsw: Switch to napi_gro_receive()

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang success Errors and warnings before: 240 this patch: 240
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 45 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-12-18--03-01 (tests: 878)

Commit Message

Petr Machata Dec. 16, 2024, 1:18 p.m. UTC
From: Ido Schimmel <idosch@nvidia.com>

Benefit from the recent conversion of the driver to NAPI and enable GRO
support through the use of napi_gro_receive(). Pass the NAPI pointer
from the bus driver (mlxsw_pci) to the switch driver (mlxsw_spectrum)
through the skb control block where various packet metadata is already
encoded.
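
The handoff can be summarized as follows (a minimal sketch with
hypothetical function names; the mlxsw_skb_cb() accessor, the rx_md_info
member and the calls shown are taken from the hunks in the patch below):

    /* Bus driver (mlxsw_pci): while completing an RDQ entry, record the
     * NAPI instance in the skb control block next to the other per-packet
     * metadata, then hand the skb to the switch driver as before.
     */
    static void rdq_handle_sketch(struct napi_struct *napi,
                                  struct sk_buff *skb)
    {
            mlxsw_skb_cb(skb)->rx_md_info.napi = napi;
            /* ... mlxsw_core_skb_receive() delivers the skb to
             * mlxsw_spectrum ...
             */
    }

    /* Switch driver (mlxsw_spectrum): fetch the pointer back out of the
     * control block and feed the skb to GRO instead of
     * netif_receive_skb().
     */
    static void rx_listener_sketch(struct sk_buff *skb)
    {
            skb->protocol = eth_type_trans(skb, skb->dev);
            napi_gro_receive(mlxsw_skb_cb(skb)->rx_md_info.napi, skb);
    }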

The main motivation is to improve forwarding performance through the use
of GRO fraglist [1]. In my testing, when the forwarding data path is
simple (routing between two ports) there is not much difference in
forwarding performance between GRO disabled and GRO enabled with
fraglist.

The improvement becomes more noticeable as the data path becomes more
complex since it is traversed fewer times with GRO enabled. For example,
with 10 ingress and 10 egress flower filters with different priorities
on the two ports between which routing is performed, there is an
improvement of about 140% in forwarded bandwidth.

[1] https://lore.kernel.org/netdev/20200125102645.4782-1-steffen.klassert@secunet.com/
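
Note that fraglist GRO is not enabled by default; assuming the standard
netdev feature name, it is toggled per port with
"ethtool -K <port> rx-gro-list on" (NETIF_F_GRO_FRAGLIST).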

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlxsw/core.h          | 1 +
 drivers/net/ethernet/mellanox/mlxsw/pci.c           | 4 +++-
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c      | 2 +-
 drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c | 2 +-
 4 files changed, 6 insertions(+), 3 deletions(-)

Comments

Alexander Lobakin Dec. 16, 2024, 5:44 p.m. UTC | #1
From: Petr Machata <petrm@nvidia.com>
Date: Mon, 16 Dec 2024 14:18:44 +0100

> From: Ido Schimmel <idosch@nvidia.com>
> 
> Benefit from the recent conversion of the driver to NAPI and enable GRO
> support through the use of napi_gro_receive(). Pass the NAPI pointer
> from the bus driver (mlxsw_pci) to the switch driver (mlxsw_spectrum)
> through the skb control block where various packet metadata is already
> encoded.
> 
> The main motivation is to improve forwarding performance through the use
> of GRO fraglist [1]. In my testing, when the forwarding data path is
> simple (routing between two ports) there is not much difference in
> forwarding performance between GRO disabled and GRO enabled with
> fraglist.
> 
> The improvement becomes more noticeable as the data path becomes more
> complex since it is traversed fewer times with GRO enabled. For example,
> with 10 ingress and 10 egress flower filters with different priorities
> on the two ports between which routing is performed, there is an
> improvement of about 140% in forwarded bandwidth.
> 
> [1] https://lore.kernel.org/netdev/20200125102645.4782-1-steffen.klassert@secunet.com/
> 
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> Reviewed-by: Petr Machata <petrm@nvidia.com>
> Reviewed-by: Amit Cohen <amcohen@nvidia.com>
> Signed-off-by: Petr Machata <petrm@nvidia.com>

Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>

Thanks,
Olek
patchwork-bot+netdevbpf@kernel.org Dec. 18, 2024, 3:50 a.m. UTC | #2
Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Mon, 16 Dec 2024 14:18:44 +0100 you wrote:
> From: Ido Schimmel <idosch@nvidia.com>
> 
> Benefit from the recent conversion of the driver to NAPI and enable GRO
> support through the use of napi_gro_receive(). Pass the NAPI pointer
> from the bus driver (mlxsw_pci) to the switch driver (mlxsw_spectrum)
> through the skb control block where various packet metadata is already
> encoded.
> 
> [...]

Here is the summary with links:
  - [net-next] mlxsw: Switch to napi_gro_receive()
    https://git.kernel.org/netdev/net-next/c/1ba06ca96ca2

You are awesome, thank you!

Patch

diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index 8150d20cc5dc..59b4d26b4931 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -84,6 +84,7 @@  struct mlxsw_txhdr_info {
 };
 
 struct mlxsw_rx_md_info {
+	struct napi_struct *napi;
 	u32 cookie_index;
 	u32 latency;
 	u32 tx_congestion;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
index 0e4711804198..38e7bd3d365b 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
@@ -945,6 +945,7 @@  mlxsw_pci_xdp_handle(struct mlxsw_pci *mlxsw_pci, struct mlxsw_pci_queue *q,
 }
 
 static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
+				     struct napi_struct *napi,
 				     struct mlxsw_pci_queue *q,
 				     u16 consumer_counter_limit,
 				     enum mlxsw_pci_cqe_v cqe_v, char *cqe)
@@ -1032,6 +1033,7 @@  static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
 	}
 
 	mlxsw_pci_skb_cb_ts_set(mlxsw_pci, skb, cqe_v, cqe);
+	mlxsw_skb_cb(skb)->rx_md_info.napi = napi;
 
 	mlxsw_core_skb_receive(mlxsw_pci->core, skb, &rx_info);
 
@@ -1120,7 +1122,7 @@  static int mlxsw_pci_napi_poll_cq_rx(struct napi_struct *napi, int budget)
 			continue;
 		}
 
-		mlxsw_pci_cqe_rdq_handle(mlxsw_pci, rdq,
+		mlxsw_pci_cqe_rdq_handle(mlxsw_pci, napi, rdq,
 					 wqe_counter, q->u.cq.v, cqe);
 
 		if (++work_done == budget)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 146acb5f0092..78a519149fa4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -2412,7 +2412,7 @@  void mlxsw_sp_rx_listener_no_mark_func(struct sk_buff *skb,
 	u64_stats_update_end(&pcpu_stats->syncp);
 
 	skb->protocol = eth_type_trans(skb, skb->dev);
-	netif_receive_skb(skb);
+	napi_gro_receive(mlxsw_skb_cb(skb)->rx_md_info.napi, skb);
 }
 
 static void mlxsw_sp_rx_listener_mark_func(struct sk_buff *skb, u16 local_port,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
index 899c954e0e5f..1f9c1c86839f 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
@@ -173,7 +173,7 @@  static void mlxsw_sp_rx_no_mark_listener(struct sk_buff *skb, u16 local_port,
 	if (err)
 		return;
 
-	netif_receive_skb(skb);
+	napi_gro_receive(mlxsw_skb_cb(skb)->rx_md_info.napi, skb);
 }
 
 static void mlxsw_sp_rx_mark_listener(struct sk_buff *skb, u16 local_port,