Message ID | 1497533594-11579-3-git-send-email-sagi@grimberg.me (mailing list archive)
---|---
State | Superseded
On Thu, Jun 15, 2017 at 04:33:09PM +0300, Sagi Grimberg wrote:
> mlx5e currently assumes that IRQ affinity really spreads the first
> IRQ vectors across the device's home node CPUs. With the new generic
> affinity mappings this is no longer the case, so mlx5e should not
> rely on it anymore.

Looks fine, but the explanation sounds a bit short - only spreading the
vectors across a single node sounds rather odd, so there needs to be an
explanation of why this was done before and why it isn't valid anymore.
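For context, the "new generic affinity mappings" here are the per-vector affinity masks the PCI/IRQ core computes itself. A minimal sketch of how a driver opts into that spreading via the upstream pci_alloc_irq_vectors() API (illustration only, not part of this patch; the helper name example_request_spread_vectors is made up):

```c
#include <linux/pci.h>

/*
 * Illustration only: with PCI_IRQ_AFFINITY the PCI core builds
 * per-vector affinity masks via irq_create_affinity_masks() and
 * spreads the vectors across all online CPUs and nodes, instead of
 * packing the first vectors onto the device's home node.
 */
static int example_request_spread_vectors(struct pci_dev *pdev, int nvec)
{
	return pci_alloc_irq_vectors(pdev, 1, nvec,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
}
```

This is why the quoted assumption no longer holds: nothing guarantees that the first vectors end up on the home node anymore.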
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 2a3c59e55dcf..1e344b445a47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3733,18 +3733,8 @@ void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
 				      u32 *indirection_rqt, int len,
 				      int num_channels)
 {
-	int node = mdev->priv.numa_node;
-	int node_num_of_cores;
 	int i;
 
-	if (node == -1)
-		node = first_online_node;
-
-	node_num_of_cores = cpumask_weight(cpumask_of_node(node));
-
-	if (node_num_of_cores)
-		num_channels = min_t(int, num_channels, node_num_of_cores);
-
 	for (i = 0; i < len; i++)
 		indirection_rqt[i] = i % num_channels;
 }
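After the hunk applies, the function reduces to a plain round-robin fill. Reconstructed from the diff above for readability:

```c
void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
				   u32 *indirection_rqt, int len,
				   int num_channels)
{
	int i;

	/* Fill the RSS indirection table round-robin over all
	 * channels; the NUMA-node clamp on num_channels is gone. */
	for (i = 0; i < len; i++)
		indirection_rqt[i] = i % num_channels;
}
```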
mlx5e currently assumes that IRQ affinity really spreads the first IRQ
vectors across the device's home node CPUs. With the new generic
affinity mappings this is no longer the case, so mlx5e should not rely
on it anymore.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 10 ----------
 1 file changed, 10 deletions(-)
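For reference, the removed lines are where the single-node assumption lived: the channel count used for the indirection table was clamped to the CPU count of the device's home NUMA node. A sketch lifted from the removed hunk, wrapped in a hypothetical helper for readability:

```c
#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/kernel.h>
#include <linux/topology.h>

/*
 * Pre-patch behavior, reconstructed from the removed hunk: clamp the
 * channel count to the CPUs of the device's home node. This is only
 * sensible while the first IRQ vectors are guaranteed to land on that
 * node, which the generic affinity spreading no longer guarantees.
 */
static int old_clamp_channels_to_home_node(int numa_node, int num_channels)
{
	int node = numa_node;
	int node_num_of_cores;

	if (node == -1)
		node = first_online_node;

	node_num_of_cores = cpumask_weight(cpumask_of_node(node));
	if (node_num_of_cores)
		num_channels = min_t(int, num_channels, node_num_of_cores);

	return num_channels;
}
```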