
rdma/core: add __module_get()/module_put() to cma_[de]ref_dev()

Message ID 20190930090455.10772-1-metze@samba.org (mailing list archive)
State Rejected
Delegated to: Jason Gunthorpe
Series rdma/core: add __module_get()/module_put() to cma_[de]ref_dev()

Commit Message

Stefan Metzmacher Sept. 30, 2019, 9:04 a.m. UTC
Currently there seems to be a problem when an RDMA listener or connection
is active on an ib_device.

'rmmod rdma_rxe' (the same for 'siw' and most likely all
others) just hangs like this until the listeners and
connections are shut down:

  [<0>] remove_client_context+0x97/0xe0 [ib_core]
  [<0>] disable_device+0x90/0x120 [ib_core]
  [<0>] __ib_unregister_device+0x41/0xa0 [ib_core]
  [<0>] ib_unregister_driver+0xbb/0x100 [ib_core]
  [<0>] rxe_module_exit+0x1a/0x8aa [rdma_rxe]
  [<0>] __x64_sys_delete_module+0x147/0x290
  [<0>] do_syscall_64+0x5a/0x130
  [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

The following would be expected:

  rmmod: ERROR: Module rdma_rxe is in use

And this change provides that.

Once all listeners and connections are gone
the module can be removed again.

Signed-off-by: Stefan Metzmacher <metze@samba.org>
Cc: stable@vger.kernel.org
---
 drivers/infiniband/core/cma.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Jason Gunthorpe Sept. 30, 2019, 12:41 p.m. UTC | #1
On Mon, Sep 30, 2019 at 11:04:55AM +0200, Stefan Metzmacher wrote:
> Currently there seems to be a problem when an RDMA listener or connection
> is active on an ib_device.
> 
> 'rmmod rdma_rxe' (the same for 'siw' and most likely all
> others) just hangs like this until the listeners and
> connections are shut down:
> 
>   [<0>] remove_client_context+0x97/0xe0 [ib_core]
>   [<0>] disable_device+0x90/0x120 [ib_core]
>   [<0>] __ib_unregister_device+0x41/0xa0 [ib_core]
>   [<0>] ib_unregister_driver+0xbb/0x100 [ib_core]
>   [<0>] rxe_module_exit+0x1a/0x8aa [rdma_rxe]
>   [<0>] __x64_sys_delete_module+0x147/0x290
>   [<0>] do_syscall_64+0x5a/0x130
>   [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> The following would be expected:
> 
>   rmmod: ERROR: Module rdma_rxe is in use
> 
> And this change provides that.
> 
> Once all listeners and connections are gone
> the module can be removed again.
> 
> Signed-off-by: Stefan Metzmacher <metze@samba.org>
> Cc: stable@vger.kernel.org
> ---
>  drivers/infiniband/core/cma.c | 2 ++
>  1 file changed, 2 insertions(+)

How do you even get here? Are you using in-kernel modules to access
rxe?

Drivers are supposed to declare a DEVICE_FATAL error when their module
is removed and then progress toward cleaning up. It would seem this is
missing in rxe.

Globally blocking module unload would break the existing dis-associate
flows, and blocking until listeners are removed seems like all rdma
drivers will instantly become permanently blocked when things like SRP
or IPoIB CM mode are running?

I think the proper thing is to fix rxe (and probably siw) to signal
the DEVICE_FATAL so the CMA listeners can cleanly disconnect

Jason
Stefan Metzmacher Sept. 30, 2019, 1:52 p.m. UTC | #2
Am 30.09.19 um 14:41 schrieb Jason Gunthorpe:
> On Mon, Sep 30, 2019 at 11:04:55AM +0200, Stefan Metzmacher wrote:
>> Currently there seems to be a problem when an RDMA listener or connection
>> is active on an ib_device.
>>
>> 'rmmod rdma_rxe' (the same for 'siw' and most likely all
>> others) just hangs like this until the listeners and
>> connections are shut down:
>>
>>   [<0>] remove_client_context+0x97/0xe0 [ib_core]
>>   [<0>] disable_device+0x90/0x120 [ib_core]
>>   [<0>] __ib_unregister_device+0x41/0xa0 [ib_core]
>>   [<0>] ib_unregister_driver+0xbb/0x100 [ib_core]
>>   [<0>] rxe_module_exit+0x1a/0x8aa [rdma_rxe]
>>   [<0>] __x64_sys_delete_module+0x147/0x290
>>   [<0>] do_syscall_64+0x5a/0x130
>>   [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>
>> The following would be expected:
>>
>>   rmmod: ERROR: Module rdma_rxe is in use
>>
>> And this change provides that.
>>
>> Once all listeners and connections are gone
>> the module can be removed again.
>>
>> Signed-off-by: Stefan Metzmacher <metze@samba.org>
>> Cc: stable@vger.kernel.org
>> ---
>>  drivers/infiniband/core/cma.c | 2 ++
>>  1 file changed, 2 insertions(+)
> 
> How do you even get here? Are you using in-kernel modules to access
> rxe?

Yes, I'm testing my smbdirect driver with Samba.
https://git.samba.org/?p=metze/linux/smbdirect.git;a=summary

Samba's 'smbd' listens for incoming connections.

> Drivers are supposed to declare a DEVICE_FATAL error when their module
> is removed and then progress toward cleaning up. It would seem this is
> missing in rxe.

It's a bit hard for me to follow the code paths in other drivers
and see how unloading them would trigger IB_EVENT_DEVICE_FATAL.

Can you give me an explicit example of what the backtrace would
look like for the code that triggers IB_EVENT_DEVICE_FATAL?

> Globally blocking module unload would break the existing dis-associate
> flows, and blocking until listeners are removed seems like all rdma
> drivers will instantly become permanently blocked when things like SRP
> or IPoIB CM mode are running?

So the design is to allow drivers to be unloaded while there are
active connections?

If so is this specific to RDMA drivers?

For filesystems it's typically not allowed to unload the driver
while a filesystem is mounted.

> I think the proper thing is to fix rxe (and probably siw) to signal
> the DEVICE_FATAL so the CMA listeners can cleanly disconnect

I just found that drivers/nvme/host/rdma.c and
drivers/nvme/target/rdma.c both use ib_register_client();
in order to get notified that a device is going to be removed.

Maybe I should also use ib_register_client()?

Thanks!
metze
Jason Gunthorpe Oct. 1, 2019, 2:45 p.m. UTC | #3
On Mon, Sep 30, 2019 at 03:52:51PM +0200, Stefan Metzmacher wrote:
> Am 30.09.19 um 14:41 schrieb Jason Gunthorpe:
> > On Mon, Sep 30, 2019 at 11:04:55AM +0200, Stefan Metzmacher wrote:
> >> Currently there seems to be a problem when an RDMA listener or connection
> >> is active on an ib_device.
> >>
> >> 'rmmod rdma_rxe' (the same for 'siw' and most likely all
> >> others) just hangs like this until the listeners and
> >> connections are shut down:
> >>
> >>   [<0>] remove_client_context+0x97/0xe0 [ib_core]
> >>   [<0>] disable_device+0x90/0x120 [ib_core]
> >>   [<0>] __ib_unregister_device+0x41/0xa0 [ib_core]
> >>   [<0>] ib_unregister_driver+0xbb/0x100 [ib_core]
> >>   [<0>] rxe_module_exit+0x1a/0x8aa [rdma_rxe]
> >>   [<0>] __x64_sys_delete_module+0x147/0x290
> >>   [<0>] do_syscall_64+0x5a/0x130
> >>   [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> >>
> >> The following would be expected:
> >>
> >>   rmmod: ERROR: Module rdma_rxe is in use
> >>
> >> And this change provides that.
> >>
> >> Once all listeners and connections are gone
> >> the module can be removed again.
> >>
> >> Signed-off-by: Stefan Metzmacher <metze@samba.org>
> >> Cc: stable@vger.kernel.org
> >>  drivers/infiniband/core/cma.c | 2 ++
> >>  1 file changed, 2 insertions(+)
> > 
> > How do you even get here? Are you using in-kernel modules to access
> > rxe?
> 
> Yes, I'm testing my smbdirect driver with Samba.
> https://git.samba.org/?p=metze/linux/smbdirect.git;a=summary
> 
> Samba's 'smbd' listens for incoming connections.
> 
> > Drivers are supposed to declare a DEVICE_FATAL error when their module
> > is removed and then progress toward cleaning up. It would seem this is
> > missing in rxe.
> 
> It's a bit hard for me to follow the code paths in other drivers
> and see how unloading them would trigger IB_EVENT_DEVICE_FATAL.

I think the drivers just call ib_dispatch_event() with
IB_EVENT_DEVICE_FATAL when they start to remove, i.e. before calling
ib_unregister_device()
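
(For illustration only -- a rough sketch of such a driver-side dispatch,
using the in-kernel struct ib_event / ib_dispatch_event() API; this is
not actual rxe or siw code, and the function name is made up:)

  #include <rdma/ib_verbs.h>

  /* hypothetical driver removal path */
  static void example_driver_remove(struct ib_device *ibdev)
  {
          struct ib_event ev = {
                  .device = ibdev,
                  .event  = IB_EVENT_DEVICE_FATAL,
          };

          /* tell ib_core event consumers the device is dying ... */
          ib_dispatch_event(&ev);

          /* ... then proceed with the normal teardown */
          ib_unregister_device(ibdev);
  }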

> > Globally blocking module unload would break the existing dis-associate
> > flows, and blocking until listeners are removed seems like all rdma
> > drivers will instantly become permanently blocked when things like SRP
> > or IPoIB CM mode are running?
> 
> So the design is to allow drivers to be unloaded while there are
> active connections?
> 
> If so is this specific to RDMA drivers?

No, it is normal for networking, you can ip link set down and unload a
net driver even though there are sockets open that might traverse it
 
> > I think the proper thing is to fix rxe (and probably siw) to signal
> > the DEVICE_FATAL so the CMA listeners can cleanly disconnect
>
> I just found that drivers/nvme/host/rdma.c and
> drivers/nvme/target/rdma.c both use ib_register_client();
> in order to get notified that a device is going to be removed.
> 
> Maybe I should also use ib_register_client()?

Oh, yes, all kernel clients must use ib_register_client() and related
APIs to manage their connection to the RDMA stack, otherwise they are
probably racy. The remove callback there is the same idea as the
device_fatal scheme is for userspace.
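
(As an illustration -- a minimal, hypothetical ib_client sketch; the
callback signatures are roughly those of ~v5.4 kernels and differ in
later releases, and all smbd_* names are invented:)

  #include <rdma/ib_verbs.h>

  static void smbd_add_one(struct ib_device *device)
  {
          /* a new RDMA device became available; track it if needed */
  }

  static void smbd_remove_one(struct ib_device *device, void *client_data)
  {
          /*
           * The device is going away: every cm_id/QP bound to it must
           * be torn down before returning, otherwise the driver's
           * rmmod blocks in remove_client_context() as in the trace
           * above.
           */
  }

  static struct ib_client smbd_ib_client = {
          .name   = "smbdirect",
          .add    = smbd_add_one,
          .remove = smbd_remove_one,
  };

  /* paired in module init/exit:              */
  /*   ib_register_client(&smbd_ib_client);   */
  /*   ib_unregister_client(&smbd_ib_client); */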

How do you discover the RDMA device to use? Just call into CM and let
it sort it out? That actually seems reasonable, but then CM should
take care of the remove() to kill connections, I suppose it doesn't..

Jason
Stefan Metzmacher Oct. 1, 2019, 10:03 p.m. UTC | #4
>>> Globally blocking module unload would break the existing dis-associate
>>> flows, and blocking until listeners are removed seems like all rdma
>>> drivers will instantly become permanently blocked when things like SRP
>>> or IPoIB CM mode are running?
>>
>> So the design is to allow drivers to be unloaded while there are
>> active connections?
>>
>> If so is this specific to RDMA drivers?
> 
> No, it is normal for networking, you can ip link set down and unload a
> net driver even though there are sockets open that might traverse it

Ok.

>>> I think the proper thing is to fix rxe (and probably siw) to signal
>>> the DEVICE_FATAL so the CMA listeners can cleanly disconnect
>>
>> I just found that drivers/nvme/host/rdma.c and
>> drivers/nvme/target/rdma.c both use ib_register_client();
>> in order to get notified that a device is going to be removed.
>>
>> Maybe I should also use ib_register_client()?
> 
> Oh, yes, all kernel clients must use ib_register_client() and related
> APIs to manage their connection to the RDMA stack, otherwise they are
> probably racy. The remove callback there is the same idea as the
> device_fatal scheme is for userspace.

Ok, thanks! I'll take a look at it.

> How do you discover the RDMA device to use? Just call into CM and let
> it sort it out? That actually seems reasonable, but then CM should
> take care of the remove() to kill connections, I suppose it doesn't..

On the client:

rdma_create_id()
rdma_resolve_addr()
rdma_resolve_route()
rdma_connect()

On the server:
rdma_create_id()
rdma_bind_addr()
rdma_listen()
rdma_accept()

I just pass in an IPv4 or IPv6 address.
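
(For reference, a rough sketch of that server-side sequence with the
in-kernel rdma_cm API; ~v5.4 signatures, error handling trimmed, and
all smbd_* names hypothetical:)

  #include <linux/err.h>
  #include <net/net_namespace.h>
  #include <rdma/rdma_cm.h>

  static int smbd_cm_handler(struct rdma_cm_id *id,
                             struct rdma_cm_event *event)
  {
          switch (event->event) {
          case RDMA_CM_EVENT_CONNECT_REQUEST:
                  /* set up a QP on id->device, then rdma_accept() */
                  break;
          case RDMA_CM_EVENT_DEVICE_REMOVAL:
                  /* the underlying ib_device is going away */
                  break;
          default:
                  break;
          }
          return 0;
  }

  static struct rdma_cm_id *smbd_listen(struct sockaddr *addr)
  {
          struct rdma_cm_id *id;
          int ret;

          id = rdma_create_id(&init_net, smbd_cm_handler, NULL,
                              RDMA_PS_TCP, IB_QPT_RC);
          if (IS_ERR(id))
                  return id;

          ret = rdma_bind_addr(id, addr);
          if (!ret)
                  ret = rdma_listen(id, 128);
          if (ret) {
                  rdma_destroy_id(id);
                  return ERR_PTR(ret);
          }
          return id;
  }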

metze
Jason Gunthorpe Oct. 2, 2019, 12:57 a.m. UTC | #5
On Wed, Oct 02, 2019 at 12:03:14AM +0200, Stefan Metzmacher wrote:
> >>> Globally blocking module unload would break the existing dis-associate
> >>> flows, and blocking until listeners are removed seems like all rdma
> >>> drivers will instantly become permanently blocked when things like SRP
> >>> or IPoIB CM mode are running?
> >>
> >> So the design is to allow drivers to be unloaded while there are
> >> active connections?
> >>
> >> If so is this specific to RDMA drivers?
> > 
> > No, it is normal for networking, you can ip link set down and unload a
> > net driver even though there are sockets open that might traverse it
> 
> Ok.
> 
> >>> I think the proper thing is to fix rxe (and probably siw) to signal
> >>> the DEVICE_FATAL so the CMA listeners can cleanly disconnect
> >>
> >> I just found that drivers/nvme/host/rdma.c and
> >> drivers/nvme/target/rdma.c both use ib_register_client();
> >> in order to get notified that a device is going to be removed.
> >>
> >> Maybe I should also use ib_register_client()?
> > 
> > Oh, yes, all kernel clients must use ib_register_client() and related
> > APIs to manage their connection to the RDMA stack, otherwise they are
> > probably racy. The remove callback there is the same idea as the
> > device_fatal scheme is for userspace.
> 
> Ok, thanks! I'll take a look at it.
> 
> > How do you discover the RDMA device to use? Just call into CM and let
> > it sort it out? That actually seems reasonable, but then CM should
> > take care of the remove() to kill connections, I suppose it doesn't..
> 
> On the client:
> 
> rdma_create_id()
> rdma_resolve_addr()
> rdma_resolve_route()
> rdma_connect()
> 
> On the server:
> rdma_create_id()
> rdma_bind_addr()
> rdma_listen()
> rdma_accept()
> 
> I just pass in an IPv4 or IPv6 address.

Ah, the only working flow today is to destroy your IDs when a remove()
comes, and CM IDs become linked to a single rdma device, so you know
which IDs to tear down when that remove() arrives.
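
(Sketch of that idea, fleshing out the remove() callback from the
earlier ib_client example; struct smbd_conn, its fields, the global
list and the missing locking are all simplifications/assumptions:)

  static LIST_HEAD(smbd_conn_list);    /* hypothetical connection list */

  struct smbd_conn {                   /* hypothetical per-connection state */
          struct list_head   list;
          struct rdma_cm_id *cm_id;
  };

  static void smbd_remove_one(struct ib_device *device, void *client_data)
  {
          struct smbd_conn *conn, *tmp;

          list_for_each_entry_safe(conn, tmp, &smbd_conn_list, list) {
                  if (conn->cm_id->device != device)
                          continue;    /* bound to a different device */
                  rdma_disconnect(conn->cm_id);
                  /* drain the QP and free per-connection resources, then: */
                  rdma_destroy_id(conn->cm_id);
                  list_del(&conn->list);
                  kfree(conn);
          }
  }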

Jason

Patch

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index a68d0ccf67a4..d10f3d01fa02 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -276,6 +276,7 @@  enum {
 void cma_ref_dev(struct cma_device *cma_dev)
 {
 	atomic_inc(&cma_dev->refcount);
+	__module_get(cma_dev->device->ops.owner);
 }
 
 struct cma_device *cma_enum_devices_by_ibdev(cma_device_filter	filter,
@@ -512,6 +513,7 @@  static void cma_attach_to_dev(struct rdma_id_private *id_priv,
 
 void cma_deref_dev(struct cma_device *cma_dev)
 {
+	module_put(cma_dev->device->ops.owner);
 	if (atomic_dec_and_test(&cma_dev->refcount))
 		complete(&cma_dev->comp);
 }