| Message ID | 20200527055014.355093-1-leon@kernel.org (mailing list archive) |
|---|---|
| State | Accepted |
| Delegated to: | Jason Gunthorpe |
| Series | [rdma-next,v1] RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode |
On Wed, May 27, 2020 at 08:50:14AM +0300, Leon Romanovsky wrote:
> From: Mark Zhang <markz@mellanox.com>
>
> The mlx5 VF driver doesn't set QP tx port affinity because it doesn't
> know if the LAG is active or not, since "lag_active" works only for
> PF interfaces. In this case only one LAG port is used by VF
> interfaces, which causes a performance issue.
>
> Add a lag_tx_port_affinity CAP bit; when it is enabled and
> "num_lag_ports > 1", the driver always sets QP tx affinity, regardless
> of the LAG state.
>
> Signed-off-by: Mark Zhang <markz@mellanox.com>
> Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
> Changelog
> v1:
>  * Fixed wrong check of num_lag_ports.
> v0: https://lore.kernel.org/linux-rdma/20200526143457.218840-1-leon@kernel.org
> ---
>  drivers/infiniband/hw/mlx5/main.c    | 2 +-
>  drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +++++++
>  drivers/infiniband/hw/mlx5/qp.c      | 3 ++-
>  3 files changed, 10 insertions(+), 2 deletions(-)

Applied to for-next, thanks

Jason
```diff
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 570c519ca530..4719da201382 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1984,7 +1984,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 	context->lib_caps = req.lib_caps;
 	print_lib_caps(dev, context->lib_caps);

-	if (dev->lag_active) {
+	if (mlx5_ib_lag_should_assign_affinity(dev)) {
 		u8 port = mlx5_core_native_port_num(dev->mdev) - 1;

 		atomic_set(&context->tx_port_affinity,
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index b486139b08ce..236c5c4a3637 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1553,4 +1553,11 @@ static inline bool mlx5_ib_can_use_umr(struct mlx5_ib_dev *dev,
 int mlx5_ib_enable_driver(struct ib_device *dev);
 int mlx5_ib_test_wc(struct mlx5_ib_dev *dev);
+
+static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
+{
+	return dev->lag_active ||
+	       (MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1 &&
+		MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
+}
 #endif /* MLX5_IB_H */
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 1988a0375696..9364a7a76ac2 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3653,7 +3653,8 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
 	struct mlx5_ib_qp_base *qp_base;
 	unsigned int tx_affinity;

-	if (!(dev->lag_active && qp_supports_affinity(qp)))
+	if (!(mlx5_ib_lag_should_assign_affinity(dev) &&
+	      qp_supports_affinity(qp)))
 		return 0;

 	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
```