[rdma-next,v1,0/4] Let IB core distribute cache update events

Message ID: 20191212113024.336702-1-leon@kernel.org

Message

Leon Romanovsky Dec. 12, 2019, 11:30 a.m. UTC
From: Leon Romanovsky <leonro@mellanox.com>

Changelog:
---
v0->v1:
 - Addressed Jason's comment to split the QP event handler lock from the IB device event handler lock
 - Added a patch that adds the qp_ prefix to reflect QP event operation

Note: Patch #1 could go to the -rc too; it is sent here because the "fix" is
needed only when these cache patches are used.
-------------------------------------------------------------------------
From Parav,

Currently, when a low-level driver generates Pkey, GID, or port change
events, they are delivered to the registered handlers in the order in
which the handlers registered.

The IB core and ULPs such as IPoIB are both interested in GID, LID, and
Pkey change events. Since all GID queries done by ULPs are serviced by
the IB core, the IB core may not yet have updated its GID cache when
IPoIB queries the GID, so the IPoIB address does not get updated.

Hence, with this series all events that require a cache update are
handled first by the IB core. Once the cache update work has completed,
the IB core distributes the event to the subscribed clients.
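
To make the ordering problem concrete, here is a small standalone C
illustration (not kernel code; every name in it is invented for the
example) of why the cache must be refreshed before other handlers run:

/*
 * Standalone illustration only -- not kernel code. All names here are
 * made up for the example. It models why the GID cache must be updated
 * before other event handlers (such as IPoIB) are notified.
 */
#include <stdio.h>

static int gid_cache;   /* models the IB core GID cache             */
static int hw_gid;      /* models the GID currently held by the HCA */

static void core_cache_handler(void)
{
	gid_cache = hw_gid;                        /* refresh the cache  */
}

static void ipoib_handler(void)
{
	printf("IPoIB sees GID %d\n", gid_cache);  /* reads via the cache */
}

int main(void)
{
	hw_gid = 42;   /* a subnet prefix change gives the port a new GID */

	/* Old scheme: handlers run in registration order.  If IPoIB was
	 * registered before the cache handler, it reads stale data (0). */
	ipoib_handler();
	core_cache_handler();

	/* New scheme: the IB core updates its cache first and only then
	 * distributes the event, so IPoIB reads the new GID (42). */
	core_cache_handler();
	ipoib_handler();
	return 0;
}

The series applies the same ordering inside the IB core: the cache
update work runs first, then the event is distributed to the clients.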

This was tested by updating the subnet_prefix configuration in opensm's
/etc/rdma/opensm.conf. Before the update:

$ ip link show dev ib0

ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast
state UP mode DEFAULT group default qlen 256
    link/infiniband
80:00:01:07:fe:80:00:00:00:00:00:00:24:8a:07:03:00:b3:d1:12 brd
00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff

And after the subnet prefix update, the subnet-prefix portion of the GID
in the link-layer address reflects the new value:

ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast
state UP mode DEFAULT group default qlen 256
    link/infiniband
80:00:01:07:fe:80:00:00:00:00:00:02:24:8a:07:03:00:b3:d1:12 brd
00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff

Thanks

Parav Pandit (4):
  IB/mlx5: Do reverse sequence during device removal
  IB/core: Let IB core distribute cache update events
  IB/core: Cut down single member ib_cache structure
  IB/core: Prefix qp to event_handler_lock

 drivers/infiniband/core/cache.c     | 151 +++++++++++++++++-----------
 drivers/infiniband/core/core_priv.h |   1 +
 drivers/infiniband/core/device.c    |  35 ++-----
 drivers/infiniband/core/verbs.c     |  12 +--
 drivers/infiniband/hw/mlx5/main.c   |   2 +
 include/rdma/ib_verbs.h             |  16 +--
 6 files changed, 118 insertions(+), 99 deletions(-)

--
2.20.1

Comments

Jason Gunthorpe Jan. 8, 2020, 12:28 a.m. UTC | #1
On Thu, Dec 12, 2019 at 01:30:20PM +0200, Leon Romanovsky wrote:

> Parav Pandit (4):
>   IB/mlx5: Do reverse sequence during device removal
>   IB/core: Let IB core distribute cache update events
>   IB/core: Cut down single member ib_cache structure
>   IB/core: Prefix qp to event_handler_lock

I used qp_open_list_lock in the last patch, and I'm still interested
if/why globally serializing the qp handlers is required, or if that
could be rw spinlock too.

Otherwise applied to for-next

Thanks,
Jason
Parav Pandit Jan. 8, 2020, 11:42 a.m. UTC | #2
On 1/8/2020 5:58 AM, Jason Gunthorpe wrote:
> On Thu, Dec 12, 2019 at 01:30:20PM +0200, Leon Romanovsky wrote:
> 
>> Parav Pandit (4):
>>   IB/mlx5: Do reverse sequence during device removal
>>   IB/core: Let IB core distribute cache update events
>>   IB/core: Cut down single member ib_cache structure
>>   IB/core: Prefix qp to event_handler_lock
> 
> I used qp_open_list_lock in the last patch, and I'm still interested
> if/why globally serializing the qp handlers is required, or if that
> could be rw spinlock too.
> 
My understanding, as described in my email on patch 2, is that it's the
open_list_lock. There probably isn't much contention, but yes, it can be
changed to a rw spinlock.

> Otherwise applied to for-next
> 
Thanks.
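
For reference, a minimal kernel-style sketch of the rw spinlock
alternative discussed above. The structure and list names here are
hypothetical and are not taken from the actual series; only rwlock_t and
its read_lock()/write_lock() helpers are real kernel primitives.

/*
 * Sketch only: shows the rw spinlock alternative being discussed.
 * struct demo_qp and the list names are invented for this example.
 */
#include <linux/spinlock.h>
#include <linux/list.h>

struct demo_qp {
	struct list_head open_list;
	void (*event_handler)(struct demo_qp *qp, int event);
};

static DEFINE_RWLOCK(qp_open_list_rwlock);
static LIST_HEAD(qp_open_list);

/* Writers (QP open/close) still take the lock exclusively. */
static void demo_qp_add(struct demo_qp *qp)
{
	write_lock_irq(&qp_open_list_rwlock);
	list_add(&qp->open_list, &qp_open_list);
	write_unlock_irq(&qp_open_list_rwlock);
}

/*
 * Event dispatch only walks the list, so several dispatchers could run
 * concurrently under read_lock() -- provided the per-QP handlers do not
 * need the global serialization Jason is asking about.
 */
static void demo_dispatch_qp_event(int event)
{
	struct demo_qp *qp;

	read_lock(&qp_open_list_rwlock);
	list_for_each_entry(qp, &qp_open_list, open_list)
		qp->event_handler(qp, event);
	read_unlock(&qp_open_list_rwlock);
}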