
[v6,for-4.13,2/7] mlx5e: don't assume anything on the irq affinity mappings of the device

Message ID 1497796677-15794-3-git-send-email-sagi@grimberg.me (mailing list archive)
State Superseded

Commit Message

Sagi Grimberg June 18, 2017, 2:37 p.m. UTC
mlx5e currently assumes that irq affinity really spreads the first
irq vectors across the device home node cpus. This was designed to
provide good out-of-the-box performance, but feeding the RSS
indirection table with only a subset of the RX rings is an overall
loss, both in RX efficiency (napi processes more flows per cpu) and
in latency QoS when the application runs on a cpu core that is not
included in the RSS indirection table (causing more QPI traffic).

With the new generic affinity mappings this is no longer the case,
so mlx5e should not rely on it anymore.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 10 ----------
 1 file changed, 10 deletions(-)
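
A minimal userspace sketch, not kernel code and not part of the patch,
contrasting the old clamped indirection table with the plain round-robin
left in mlx5e_build_default_indir_rqt() by the hunk below. The table
length, channel count and home-node core count are hypothetical.

/*
 * Illustrative only: hypothetical table length, RX ring count and
 * home-node core count.
 */
#include <stdio.h>

#define TABLE_LEN	16

static void build_indir(unsigned int *rqt, int len, int num_channels)
{
	int i;

	/* Round-robin the RX rings across the indirection table. */
	for (i = 0; i < len; i++)
		rqt[i] = i % num_channels;
}

static void print_table(const char *tag, const unsigned int *rqt, int len)
{
	int i;

	printf("%s:", tag);
	for (i = 0; i < len; i++)
		printf(" %u", rqt[i]);
	printf("\n");
}

int main(void)
{
	unsigned int rqt[TABLE_LEN];
	int num_channels = 8;	/* hypothetical number of RX rings */
	int node_cores = 4;	/* hypothetical home-node core count */

	/* Old behaviour: num_channels clamped to the home-node core
	 * count, so rings 4..7 never appear in the table. */
	build_indir(rqt, TABLE_LEN,
		    num_channels < node_cores ? num_channels : node_cores);
	print_table("before", rqt, TABLE_LEN);

	/* New behaviour: all RX rings are spread across the table. */
	build_indir(rqt, TABLE_LEN, num_channels);
	print_table("after ", rqt, TABLE_LEN);

	return 0;
}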

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 2a3c59e55dcf..1e344b445a47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3733,18 +3733,8 @@  void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
 				   u32 *indirection_rqt, int len,
 				   int num_channels)
 {
-	int node = mdev->priv.numa_node;
-	int node_num_of_cores;
 	int i;
 
-	if (node == -1)
-		node = first_online_node;
-
-	node_num_of_cores = cpumask_weight(cpumask_of_node(node));
-
-	if (node_num_of_cores)
-		num_channels = min_t(int, num_channels, node_num_of_cores);
-
 	for (i = 0; i < len; i++)
 		indirection_rqt[i] = i % num_channels;
 }