
mlx5: remove support for ib_get_vector_affinity

Message ID 20181101161312.14624-1-sagi@grimberg.me (mailing list archive)
State Accepted
Series mlx5: remove support for ib_get_vector_affinity

Commit Message

Sagi Grimberg Nov. 1, 2018, 4:13 p.m. UTC
Devices that do not use managed affinity cannot export a vector
affinity, as the consumer relies on having a static mapping it can map
to upper-layer affinity (e.g. sw queues). If the driver allows the user
to set the device irq affinity, then the affinitization of long-term
existing entities is not relevant.

For example, an nvme-rdma controller's queue-irq affinitization is
determined at init time, so if the irq affinity changes over time, we
are no longer aligned.

Cc: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/infiniband/hw/mlx5/main.c | 9 ---------
 include/linux/mlx5/driver.h       | 6 ------
 2 files changed, 15 deletions(-)

Comments

Shiraz Saleem Nov. 1, 2018, 4:24 p.m. UTC | #1
On Thu, Nov 01, 2018 at 09:13:12AM -0700, Sagi Grimberg wrote:
> Devices that does not use managed affinity can not export a vector
> affinity as the consumer relies on having a static mapping it can map
> to upper layer affinity (e.g. sw queues). If the driver allows the user
> to set the device irq affinity, then the affinitization of a long term
> existing entites is not relevant.
> 
> For example, nvme-rdma controllers queue-irq affinitization is determined
> at init time so if the irq affinity changes over time, we are no longer
> aligned.
> 
> Cc: Leon Romanovsky <leonro@mellanox.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>

Hi Sagi - if you don't mind, can you do the same patch for i40iw? Our device IRQs also do not
use managed affinity.

https://elixir.bootlin.com/linux/latest/source/drivers/infiniband/hw/i40iw/i40iw_verbs.c#L2726

Shiraz
Leon Romanovsky Nov. 1, 2018, 5:50 p.m. UTC | #2
On Thu, Nov 01, 2018 at 09:13:12AM -0700, Sagi Grimberg wrote:
> Devices that does not use managed affinity can not export a vector
> affinity as the consumer relies on having a static mapping it can map
> to upper layer affinity (e.g. sw queues). If the driver allows the user
> to set the device irq affinity, then the affinitization of a long term
> existing entites is not relevant.
>
> For example, nvme-rdma controllers queue-irq affinitization is determined
> at init time so if the irq affinity changes over time, we are no longer
> aligned.
>
> Cc: Leon Romanovsky <leonro@mellanox.com>

Something is wrong with your git send-email; this Cc wasn't added to the CCed list.

> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/infiniband/hw/mlx5/main.c | 9 ---------
>  include/linux/mlx5/driver.h       | 6 ------
>  2 files changed, 15 deletions(-)
>

You added, you removed, good :)

Thanks,
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Sagi Grimberg Nov. 1, 2018, 7:07 p.m. UTC | #3
>> Cc: Leon Romanovsky <leonro@mellanox.com>
> 
> Something wrong with your git send-email, this CC wasn't added to CCed list.

Yea, I suppressed that... forgot to explicitly CC you.

> 
>> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
>> ---
>>   drivers/infiniband/hw/mlx5/main.c | 9 ---------
>>   include/linux/mlx5/driver.h       | 6 ------
>>   2 files changed, 15 deletions(-)
>>
> 
> You added, you removed, good :)

Not so great, because nvme-rdma now works suboptimally on mlx5
devices, but that is the price we need to pay, I guess...

> Thanks,
> Acked-by: Leon Romanovsky <leonro@mellanox.com>

Thanks
Doug Ledford Nov. 6, 2018, 9:46 p.m. UTC | #4
On Thu, 2018-11-01 at 12:07 -0700, Sagi Grimberg wrote:
> > > Cc: Leon Romanovsky <leonro@mellanox.com>
> > 
> > Something wrong with your git send-email, this CC wasn't added to CCed list.
> 
> Yea, I suppressed that... forgot to explicitly CC you.
> 
> > 
> > > Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> > > ---
> > >   drivers/infiniband/hw/mlx5/main.c | 9 ---------
> > >   include/linux/mlx5/driver.h       | 6 ------
> > >   2 files changed, 15 deletions(-)
> > > 
> > 
> > You added, you removed, good :)
> 
> Not so great because now nvme-rdma now works sub optimally on mlx5
> devices, but that is what we need to pay I guess...
> 
> > Thanks,
> > Acked-by: Leon Romanovsky <leonro@mellanox.com>
> 
> Thanks

Fixed up a minor context issue and applied to for-next.

Patch

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index c414f3809e5c..257d1c7823f3 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -5303,14 +5303,6 @@  static void init_delay_drop(struct mlx5_ib_dev *dev)
 		mlx5_ib_warn(dev, "Failed to init delay drop debugfs\n");
 }
 
-static const struct cpumask *
-mlx5_ib_get_vector_affinity(struct ib_device *ibdev, int comp_vector)
-{
-	struct mlx5_ib_dev *dev = to_mdev(ibdev);
-
-	return mlx5_get_vector_affinity_hint(dev->mdev, comp_vector);
-}
-
 /* The mlx5_ib_multiport_mutex should be held when calling this function */
 static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
 				      struct mlx5_ib_multiport_info *mpi)
@@ -5823,7 +5815,6 @@  int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	dev->ib_dev.map_mr_sg		= mlx5_ib_map_mr_sg;
 	dev->ib_dev.check_mr_status	= mlx5_ib_check_mr_status;
 	dev->ib_dev.get_dev_fw_str      = get_dev_fw_str;
-	dev->ib_dev.get_vector_affinity	= mlx5_ib_get_vector_affinity;
 	if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads))
 		dev->ib_dev.alloc_rdma_netdev	= mlx5_ib_alloc_rdma_netdev;
 
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 66d94b4557cf..011e2d195ea8 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1310,10 +1310,4 @@  enum {
 	MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32,
 };
 
-static inline const struct cpumask *
-mlx5_get_vector_affinity_hint(struct mlx5_core_dev *dev, int vector)
-{
-	return dev->priv.irq_info[vector].mask;
-}
-
 #endif /* MLX5_DRIVER_H */