[PATCHv3,0/8] Fix the problem that rxe can not work in net namespace

Message ID 20230214060634.427162-1-yanjun.zhu@intel.com (mailing list archive)
Series: Fix the problem that rxe can not work in net namespace

Message

Zhu Yanjun Feb. 14, 2023, 6:06 a.m. UTC
From: Zhu Yanjun <yanjun.zhu@linux.dev>

When run "ip link add" command to add a rxe rdma link in a net
namespace, normally this rxe rdma link can not work in a net
name space.

The root cause is that the sock listening on udp port 4791 is created
in init_net when the rdma_rxe module is loaded into the kernel. Because
this sock lives only in init_net, it is difficult for other net
namespaces to use it.

The following commits will solve this problem.

The first commit moves the creation of the sock listening on udp port
4791 from the module_init function to the rdma link creation path. That
is, the sock is no longer created when the rdma_rxe module is loaded;
it is created when the "rdma link add ..." command is run. As a result,
when an rdma link is created in a net namespace, the sock is created in
that same net namespace, roughly as sketched below.
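A rough sketch of the idea (the rxe_create_listen_socks() helper is a
placeholder name for illustration, not necessarily what the patch uses):

static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
{
        int err;

        /* Previously done once for init_net in the module_init path;
         * now done per "rdma link add", in the netdev's own namespace.
         */
        err = rxe_create_listen_socks(dev_net(ndev));   /* udp port 4791 */
        if (err)
                return err;

        return rxe_net_add(ibdev_name, ndev);
}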

In the second commit, the functions udp4_lib_lookup and udp6_lib_lookup
are used to check whether the sock already exists in the net namespace.
If it does, the rdma link increases the reference count of that sock and
continues with the remaining work instead of creating a new sock to
listen on udp port 4791. Since the network notifier is global, it is
registered once, when the rdma_rxe module is loaded.
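For the IPv4 side this looks roughly like the sketch below (the lookup
arguments and the reference counting are simplified here):

static struct socket *rxe_get_udp4_sock(struct net *net)
{
        struct sock *sk;

        /* Is another rdma link in this netns already listening on 4791? */
        sk = udp4_lib_lookup(net, 0, 0, htonl(INADDR_ANY),
                             htons(ROCE_V2_UDP_DPORT), 0);
        if (sk)
                return sk->sk_socket;   /* reuse it; lookup took a reference */

        /* No sock in this netns yet, so create the udp tunnel sock. */
        return rxe_setup_udp_tunnel(net, htons(ROCE_V2_UDP_DPORT), false);
}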

After an rdma link has been created, the "rdma link del" command deletes
the rdma link and, at the same time, checks the sock. If the reference
count of the sock is greater than the reference count needed by the udp
tunnel, the sock reference count is decreased by one. If it is equal,
this rdma link is the last one, so the udp tunnel is shut down and the
sock is closed. This work should be implemented in a dellink function,
but rxe currently has none. So the 3rd commit adds a dellink function
pointer, and the 4th commit implements the dellink function in rxe.
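The shape of the change is roughly the following (the dellink signature
and the rxe_put_udp_socks() helper are illustrative only):

/* 3rd commit: rdma_netlink gains a dellink callback */
struct rdma_link_ops {
        struct list_head list;
        const char *type;
        int (*newlink)(const char *ibdev_name, struct net_device *ndev);
        int (*dellink)(struct ib_device *ibdev);        /* new */
};

/* 4th commit: rxe implements it */
static int rxe_dellink(struct ib_device *ibdev)
{
        struct rxe_dev *rxe = container_of(ibdev, struct rxe_dev, ib_dev);

        /* Drop this link's reference on the 4791 socks.  If only the
         * reference held by the udp tunnel itself remains, this was the
         * last rdma link in the namespace: shut the tunnel down and
         * close the socks.
         */
        rxe_put_udp_socks(dev_net(rxe->ndev));

        ib_unregister_device(ibdev);
        return 0;
}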

At this point it is no longer necessary to keep a global variable to
store the sock listening on udp port 4791; it can be replaced entirely
by the functions udp4_lib_lookup and udp6_lib_lookup. Because
udp6_lib_lookup sits on the fast path, a member variable l_sk6 is added
to cache the sock: if l_sk6 is NULL, udp6_lib_lookup is called to look
up the sock and the result is stored in l_sk6, so it can be used
directly from then on.
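In rough form (the l_sk6 field comes from this series; the lookup
arguments are simplified):

static struct sock *rxe_get_sk6(struct rxe_dev *rxe)
{
        if (!rxe->l_sk6)
                rxe->l_sk6 = udp6_lib_lookup(dev_net(rxe->ndev),
                                             &in6addr_any, 0, &in6addr_any,
                                             htons(ROCE_V2_UDP_DPORT), 0);

        return rxe->l_sk6;      /* cached for later sends */
}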

All the above work is done in init_net, and it can work in other net
namespaces as well, so init_net is replaced by the individual net
namespace. This is what the 6th commit does. Because an rxe device
depends on the net device and on the sock listening on udp port 4791,
every rxe device works in exclusive mode in its own net namespace.
Other rdma netns operations will be considered in the future.
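At its core this is the following kind of substitution in the tunnel
setup path (shown for IPv4; surrounding code elided):

-       sock = rxe_setup_udp_tunnel(&init_net, htons(ROCE_V2_UDP_DPORT), false);
+       sock = rxe_setup_udp_tunnel(dev_net(ndev), htons(ROCE_V2_UDP_DPORT), false);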

The 7th commit adds calls to register_pernet_subsys and
unregister_pernet_subsys. When a new net namespace is created, the init
function initializes the sk4 and sk6 socks, and the two socks are
released when the net namespace is destroyed. The functions
rxe_ns_pernet_sk4/rxe_ns_pernet_set_sk4 get and set sk4 in the net
namespace, and rxe_ns_pernet_sk6/rxe_ns_pernet_set_sk6 do the same for
sk6. The sk4 and sk6 socks are then used by the previous commits.
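Roughly (the struct layout and accessor bodies below are a sketch of
what lives in rxe_ns.c, not the exact code):

#include <net/net_namespace.h>
#include <net/netns/generic.h>

static unsigned int rxe_net_id;

struct rxe_ns_data {
        struct sock *sk4;       /* per-netns 4791 socks, set on link add */
        struct sock *sk6;
};

static int __net_init rxe_ns_init(struct net *net)
{
        struct rxe_ns_data *data = net_generic(net, rxe_net_id);

        data->sk4 = NULL;
        data->sk6 = NULL;
        return 0;
}

static void __net_exit rxe_ns_exit(struct net *net)
{
        struct rxe_ns_data *data = net_generic(net, rxe_net_id);

        /* release whatever "rdma link add" created in this netns */
        if (data->sk4)
                rxe_release_udp_tunnel(data->sk4->sk_socket);
        if (data->sk6)
                rxe_release_udp_tunnel(data->sk6->sk_socket);
}

static struct pernet_operations rxe_ns_ops = {
        .init = rxe_ns_init,
        .exit = rxe_ns_exit,
        .id   = &rxe_net_id,
        .size = sizeof(struct rxe_ns_data),
};

struct sock *rxe_ns_pernet_sk4(struct net *net)
{
        struct rxe_ns_data *data = net_generic(net, rxe_net_id);

        return data->sk4;
}

/* module init/exit call register_pernet_subsys(&rxe_ns_ops) and
 * unregister_pernet_subsys(&rxe_ns_ops) respectively.
 */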

Since the sk4 and sk6 stored in the pernet namespace data can be
accessed directly, it is no longer necessary to add a new l_sk6. So, in
the 8th commit, l_sk6 is replaced with the sk6 of the pernet namespace.
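In the IPv6 xmit path this amounts to roughly the following (accessor
name as above; surrounding code elided):

-       sk = rxe->l_sk6;
+       sk = rxe_ns_pernet_sk6(dev_net(rxe->ndev));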

Test steps:
1) Suppose that 2 NICs are in 2 different net namespaces.

  # ip netns exec net0 ip link
  3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
     link/ether 00:1e:67:a0:22:3f brd ff:ff:ff:ff:ff:ff
     altname enp5s0

  # ip netns exec net1 ip link
  4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
     link/ether f8:e4:3b:3b:e4:10 brd ff:ff:ff:ff:ff:ff

2) Add an rdma link in each net namespace
    net0:
    # ip netns exec net0 rdma link add rxe0 type rxe netdev eno2

    net1:
    # ip netns exec net1 rdma link add rxe1 type rxe netdev eno3

3) Run rping test.
    net0
    # ip netns exec net0 rping -s -a 192.168.2.1 -C 1&
    [1] 1737
    # ip netns exec net1 rping -c -a 192.168.2.1 -d -v -C 1
    verbose
    count 1
    ...
    ping data: rdma-ping-0: ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqr
    ...

4) Remove the rdma links from the net namespaces.
    net0:
    # ip netns exec net0 ss -lu
    State     Recv-Q    Send-Q    Local Address:Port    Peer Address:Port    Process
    UNCONN    0         0         0.0.0.0:4791          0.0.0.0:*
    UNCONN    0         0         [::]:4791             [::]:*

    # ip netns exec net0 rdma link del rxe0
    
    # ip netns exec net0 ss -lu
    State     Recv-Q    Send-Q    Local Address:Port    Peer Address:Port    Process
    
    net1:
    # ip netns exec net1 ss -lu
    State     Recv-Q    Send-Q    Local Address:Port    Peer Address:Port    Process
    UNCONN    0         0         0.0.0.0:4791          0.0.0.0:*
    UNCONN    0         0         [::]:4791             [::]:*
    
    # ip netns exec net1 rdma link del rxe1

    # ip netns exec net1 ss -lu
    State     Recv-Q    Send-Q    Local Address:Port    Peer Address:Port    Process

V2->V3: 1) Add an "rdma link del" example in the cover letter, and use "ss -lu" to
           verify that the rdma link is removed.
        2) Add register_pernet_subsys/unregister_pernet_subsys net namespace support.
        3) Replace l_sk6 with the sk6 of the pernet namespace.

V1->V2: Add the explicit initialization of sk6.

Zhu Yanjun (8):
  RDMA/rxe: Creating listening sock in newlink function
  RDMA/rxe: Support more rdma links in init_net
  RDMA/nldev: Add dellink function pointer
  RDMA/rxe: Implement dellink in rxe
  RDMA/rxe: Replace global variable with sock lookup functions
  RDMA/rxe: add the support of net namespace
  RDMA/rxe: Add the support of net namespace notifier
  RDMA/rxe: Replace l_sk6 with sk6 in net namespace

 drivers/infiniband/core/nldev.c     |   6 ++
 drivers/infiniband/sw/rxe/Makefile  |   3 +-
 drivers/infiniband/sw/rxe/rxe.c     |  35 +++++++-
 drivers/infiniband/sw/rxe/rxe_net.c | 113 +++++++++++++++++-------
 drivers/infiniband/sw/rxe/rxe_net.h |   9 +-
 drivers/infiniband/sw/rxe/rxe_ns.c  | 128 ++++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_ns.h  |  11 +++
 include/rdma/rdma_netlink.h         |   2 +
 8 files changed, 267 insertions(+), 40 deletions(-)
 create mode 100644 drivers/infiniband/sw/rxe/rxe_ns.c
 create mode 100644 drivers/infiniband/sw/rxe/rxe_ns.h

Comments

Zhu Yanjun Feb. 23, 2023, 12:31 a.m. UTC | #1
On 2023/2/14 14:06, Zhu Yanjun wrote:
> [cover letter snipped]
> V1->V2: Add the explicit initialization of sk6.

Add netdev@vger.kernel.org.

Zhu Yanjun

Jakub Kicinski Feb. 23, 2023, 4:56 a.m. UTC | #2
On Thu, 23 Feb 2023 08:31:49 +0800 Zhu Yanjun wrote:
> > V1->V2: Add the explicit initialization of sk6.  
> 
> Add netdev@vger.kernel.org.

On the commit letter? Thanks, but that's not how it works. 
Repost the patches if you want us to see them.
Zhu Yanjun Feb. 23, 2023, 11:42 a.m. UTC | #3
On 2023/2/23 12:56, Jakub Kicinski wrote:
> On Thu, 23 Feb 2023 08:31:49 +0800 Zhu Yanjun wrote:
>>> V1->V2: Add the explicit initialization of sk6.
>> Add netdev@vger.kernel.org.
> On the commit letter? Thanks, but that's not how it works.
> Repost the patches if you want us to see them.

Got it. I will resend all the commits.

Zhu Yanjun
Rain River Feb. 25, 2023, 8:43 a.m. UTC | #4
On Thu, Feb 23, 2023 at 8:37 AM Zhu Yanjun <yanjun.zhu@linux.dev> wrote:
>
> On 2023/2/14 14:06, Zhu Yanjun wrote:
> > [cover letter snipped]

Thanks,

Tested-by: Rain River <rain.1986.08.12@gmail.com>

Mark Lehrer April 12, 2023, 5:22 p.m. UTC | #5
> When the "rdma link add" command is run to add an rxe rdma link in a
> net namespace, normally this rxe rdma link cannot work in that net
> namespace.

Thank you for this patch, Yanjun!  It is very helpful for some
research I'm doing.  I just tested the patch and now I have success
with utilities like rping and ib_send_bw.  It looks like rdma_cm is at
least doing the basics with no problems.

However, I am still not able to "nvme discover" - this fails with
rdma_resolve_addr error -101.  It looks like this function is part of
rdma_cma.  Is this expected to work, or is more patching needed for
nvme-cli to have success?

It looks like the kernel nvme-fabrics driver is making the call to
rdma_resolve_addr here.  According to strace, nvme-cli is just opening
the fabrics device and writing the host NQN etc.  Is there an easy way
to prove that rdma_resolve_addr is working from userland?
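
For what it's worth, a minimal userland check could look roughly like
the following (librdmacm; error handling trimmed).  Build with
"gcc check.c -lrdmacm" and run it under "ip netns exec <ns> ./a.out <addr>":

#include <stdio.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(int argc, char **argv)
{
        struct rdma_event_channel *ch;
        struct rdma_cm_id *id;
        struct rdma_cm_event *ev;
        struct sockaddr_in dst = { .sin_family = AF_INET };

        if (argc < 2)
                return 1;
        inet_pton(AF_INET, argv[1], &dst.sin_addr);

        ch = rdma_create_event_channel();
        rdma_create_id(ch, &id, NULL, RDMA_PS_TCP);
        rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000);

        /* expect RDMA_CM_EVENT_ADDR_RESOLVED if resolution works in this netns */
        rdma_get_cm_event(ch, &ev);
        printf("event: %s, status %d\n", rdma_event_str(ev->event), ev->status);

        rdma_ack_cm_event(ev);
        rdma_destroy_id(id);
        rdma_destroy_event_channel(ch);
        return 0;
}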

Thanks,
Mark



On Mon, Feb 13, 2023 at 11:13 PM Zhu Yanjun <yanjun.zhu@intel.com> wrote:
> [cover letter snipped]
Mark Lehrer April 12, 2023, 9:01 p.m. UTC | #6
> the fabrics device and writing the host NQN etc.  Is there an easy way
> to prove that rdma_resolve_addr is working from userland?

Actually I meant "is there a way to prove that the kernel
rdma_resolve_addr() works with netns?"

It seems like this is the real problem.  If we run commands like nvme
discover & nvme connect within the netns context, the system will use
the non-netns IP & RDMA stacks to connect.  As an aside - this seems
like it would be a major security issue for container systems, doesn't
it?

I'll investigate to see if the fabrics module & nvme-cli have a way to
set and use the proper netns context.

Thanks,
Mark
Zhu Yanjun April 13, 2023, 7:17 a.m. UTC | #7
On 2023/4/13 1:22, Mark Lehrer wrote:
>> When the "rdma link add" command is run to add an rxe rdma link in a
>> net namespace, normally this rxe rdma link cannot work in that net
>> namespace.
> Thank you for this patch, Yanjun!  It is very helpful for some
> research I'm doing.  I just tested the patch and now I have success
> with utilities like rping and ib_send_bw.  It looks like rdma_cm is at
> least doing the basics with no problems.
>
> However, I am still not able to "nvme discover" - this fails with
> rdma_resolve_addr error -101.  It looks like this function is part of
> rdma_cma.  Is this expected to work, or is more patching needed for
> nvme-cli to have success?

Thanks for your testing.

These commits make SoftRoCE work in different net namespaces. In
particular, on the same host, SoftRoCE devices in 2 or more different
net namespaces can connect to each other.

I have only run rping and perftest tests; I have not run any NVMe tests.
If you let me know how to reproduce the problem that you encountered,
it will help me a lot to understand and fix it.

Thanks,

Zhu Yanjun

>
> It looks like the kernel nvme-fabrics driver is making the call to
> rdma_resolve_addr here.  According to strace, nvme-cli is just opening
> the fabrics device and writing the host NQN etc.  Is there an easy way
> to prove that rdma_resolve_addr is working from userland?
>
> Thanks,
> Mark
>
>
>
> On Mon, Feb 13, 2023 at 11:13 PM Zhu Yanjun <yanjun.zhu@intel.com> wrote:
>> [cover letter snipped]
Zhu Yanjun April 13, 2023, 7:22 a.m. UTC | #8
On 2023/4/13 5:01, Mark Lehrer wrote:
>> the fabrics device and writing the host NQN etc.  Is there an easy way
>> to prove that rdma_resolve_addr is working from userland?
> Actually I meant "is there a way to prove that the kernel
> rdma_resolve_addr() works with netns?"

I think rdma_resolve_addr can work with netns, because rdma on mlx5
works well with netns.

I have not delved into the source code, but IMO this function should be
used by rdma on mlx5.

>
> It seems like this is the real problem.  If we run commands like nvme
> discover & nvme connect within the netns context, the system will use
> the non-netns IP & RDMA stacks to connect.  As an aside - this seems
> like it would be a major security issue for container systems, doesn't
> it?

Have you tested nvme + mlx5 + net ns on your host? Does it work?

Thanks

Zhu Yanjun

>
> I'll investigate to see if the fabrics module & nvme-cli have a way to
> set and use the proper netns context.
>
> Thanks,
> Mark
Mark Lehrer April 13, 2023, 1 p.m. UTC | #9
> Have you tested nvme + mlx5 + net ns on your host? Does it work?

Sort of, but not really.  In our last test, we configured a virtual
function and put it in the netns context, but also configured a
physical function outside the netns context.  TCP NVMe connections
always used the correct interface.

However, the RoCEv2 NVMe connection always used the physical function,
regardless of the user space netns context of the nvme-cli process.
When we ran "ip link set <physical function> down" the RoCEv2 NVMe
connections stopped working, but TCP NVMe connections were fine.
We'll be doing more tests today to make sure we're not doing something
wrong.

Thanks,
Mark




On Thu, Apr 13, 2023 at 7:22 AM Zhu Yanjun <yanjun.zhu@linux.dev> wrote:
>
>
> 在 2023/4/13 5:01, Mark Lehrer 写道:
> >> the fabrics device and writing the host NQN etc.  Is there an easy way
> >> to prove that rdma_resolve_addr is working from userland?
> > Actually I meant "is there a way to prove that the kernel
> > rdma_resolve_addr() works with netns?"
>
> I think rdma_resolve_addr can work with netns because rdma on mlx5 can
> work well with netns.
>
> I do not delve into the source code. But IMO, this function should be
> used in rdma on mlx5.
>
> >
> > It seems like this is the real problem.  If we run commands like nvme
> > discover & nvme connect within the netns context, the system will use
> > the non-netns IP & RDMA stacks to connect.  As an aside - this seems
> > like it would be a major security issue for container systems, doesn't
> > it?
>
> Do you make tests nvme + mlx5 + net ns in your host? Can it work?
>
> Thanks
>
> Zhu Yanjun
>
> >
> > I'll investigate to see if the fabrics module & nvme-cli have a way to
> > set and use the proper netns context.
> >
> > Thanks,
> > Mark
Parav Pandit April 13, 2023, 1:05 p.m. UTC | #10
> From: Mark Lehrer <lehrer@gmail.com>
> Sent: Thursday, April 13, 2023 9:01 AM
> 
> > Have you tested nvme + mlx5 + net ns on your host? Does it work?
> 
> Sort of, but not really.  In our last test, we configured a virtual function and put
> it in the netns context, but also configured a physical function outside the netns
> context.  TCP NVMe connections always used the correct interface.
> 
Didn’t get a chance to review the thread discussion.
The way to use VF is:

1. rdma system in exclusive mode
$ rdma system set netns exclusive

2. Move netdevice of the VF to the net ns
$ ip link set [ DEV ] netns NSNAME

3. Move RDMA device of the VF to the net ns
$ rdma dev set [ DEV ] netns NSNAME

You are probably missing #1 and #3 configuration.
#1 should be done before creating any namespaces.

Man pages for #1 and #3:
[a] https://man7.org/linux/man-pages/man8/rdma-system.8.html
[b] https://man7.org/linux/man-pages/man8/rdma-dev.8.html

> However, the RoCEv2 NVMe connection always used the physical function,
> regardless of the user space netns context of the nvme-cli process.
> When we ran "ip link set <physical function> down" the RoCEv2 NVMe
> connections stopped working, but TCP NVMe connections were fine.
> We'll be doing more tests today to make sure we're not doing something
> wrong.
> 
> Thanks,
> Mark
Mark Lehrer April 13, 2023, 3:38 p.m. UTC | #11
> Didn’t get a chance to review the thread discussion.
> The way to use VF is:

Virtual functions were just a debugging aid.  We really just want to
use a single physical function and put it into the netns.  However, we
will do additional VF tests as it still may be a viable workaround.

When using the physical function, we are still having no joy using
exclusive mode with mlx5:


# nvme discover -t rdma -a 192.168.42.11 -s 4420
Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
... (works as expected)

# rdma system set netns exclusive
# ip netns add netnstest
# ip link set eth1 netns netnstest
# rdma dev set mlx5_0 netns netnstest
# nsenter --net=/var/run/netns/netnstest /bin/bash
# ip link set eth1 up
# ip addr add 192.168.42.12/24 dev eth1
(tested ib_send_bw here, works perfectly)

# nvme discover -t rdma -a 192.168.42.11 -s 4420
Failed to write to /dev/nvme-fabrics: Connection reset by peer
failed to add controller, error Unknown error -1

# dmesg | tail -3
[  240.361647] mlx5_core 0000:05:00.0 eth1: Link up
[  240.371772] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[  259.964542] nvme nvme0: rdma connection establishment failed (-104)

Am I missing something here?

Thanks,
Mark


On Thu, Apr 13, 2023 at 7:05 AM Parav Pandit <parav@nvidia.com> wrote:
> [quoted message snipped]
Parav Pandit April 13, 2023, 4:20 p.m. UTC | #12
> From: Mark Lehrer <lehrer@gmail.com>
> Sent: Thursday, April 13, 2023 11:39 AM
> 
> > Didn’t get a chance to review the thread discussion.
> > The way to use VF is:
> 
> Virtual functions were just a debugging aid.  We really just want to use a single
> physical function and put it into the netns.  However, we will do additional VF
> tests as it still may be a viable workaround.
> 
> When using the physical function, we are still having no joy using exclusive
> mode with mlx5:
> 

static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port)
{
        struct sockaddr *addr = (struct sockaddr *)&port->addr;
        struct rdma_cm_id *cm_id;
        int ret;

        cm_id = rdma_create_id(&init_net, nvmet_rdma_cm_handler, port,
                                                     ^^^^^^^
Nvme target is not net ns aware.

                        RDMA_PS_TCP, IB_QPT_RC);
        if (IS_ERR(cm_id)) {
                pr_err("CM ID creation failed\n");
                return PTR_ERR(cm_id);
        }

Parav Pandit April 13, 2023, 4:23 p.m. UTC | #13
> From: Parav Pandit <parav@nvidia.com>
> Sent: Thursday, April 13, 2023 12:20 PM
> 
> > From: Mark Lehrer <lehrer@gmail.com>
> > Sent: Thursday, April 13, 2023 11:39 AM
> >
> > > Didn’t get a chance to review the thread discussion.
> > > The way to use VF is:
> >
> > Virtual functions were just a debugging aid.  We really just want to
> > use a single physical function and put it into the netns.  However, we
> > will do additional VF tests as it still may be a viable workaround.
> >
> > When using the physical function, we are still having no joy using
> > exclusive mode with mlx5:
> >
> 
> static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port) {
>         struct sockaddr *addr = (struct sockaddr *)&port->addr;
>         struct rdma_cm_id *cm_id;
>         int ret;
> 
>         cm_id = rdma_create_id(&init_net, nvmet_rdma_cm_handler, port,
>                                                      ^^^^^^^ Nvme target is not net ns aware.
> 
>                         RDMA_PS_TCP, IB_QPT_RC);
>         if (IS_ERR(cm_id)) {
>                 pr_err("CM ID creation failed\n");
>                 return PTR_ERR(cm_id);
>         }
> 
> >
Clicked send email too early.

574 static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 575                 int idx, size_t queue_size)
 576 {
[..]
597         queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler, queue,
 598                         RDMA_PS_TCP, IB_QPT_RC);
 599         if (IS_ERR(queue->cm_id)) {

Initiator is not net ns aware.
Given that some of the work involves workqueue operations, it needs to hold a reference to the net ns and implement a net ns delete routine to terminate.
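
Roughly, the kind of change this implies looks like the fragment below
(the ctrl->net member and the surrounding plumbing are assumptions for
illustration, not an actual patch):

        /* capture the caller's netns when the controller is set up ... */
        ctrl->net = get_net(current->nsproxy->net_ns);

        /* ... use it instead of init_net when creating the cm_id ... */
        queue->cm_id = rdma_create_id(ctrl->net, nvme_rdma_cm_handler, queue,
                                      RDMA_PS_TCP, IB_QPT_RC);

        /* ... and drop the reference on teardown; a pernet exit hook (or an
         * equivalent notifier) is also needed so a dying netns can tear the
         * controller down instead of waiting for it.
         */
        put_net(ctrl->net);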
Mark Lehrer April 13, 2023, 4:37 p.m. UTC | #14
> Initiator is not net ns aware.

Am I correct in my assessment that this could be a container jailbreak
risk?  We aren't using containers, but we were shocked that RoCEv2
connections magically worked through the physical function which was
not in the netns context.


Thanks,
Mark

On Thu, Apr 13, 2023 at 10:23 AM Parav Pandit <parav@nvidia.com> wrote:
> [quoted message snipped]
Parav Pandit April 13, 2023, 4:42 p.m. UTC | #15
> From: Mark Lehrer <lehrer@gmail.com>
> Sent: Thursday, April 13, 2023 12:38 PM
> 
> > Initiator is not net ns aware.
> 
> Am I correct in my assessment that this could be a container jailbreak risk?  We
> aren't using containers, 
Unlikely, because the container orchestration would have to give the container access to the nvme char/misc device,
and it should do that only when the nvme initiator/target are net ns aware.

> but we were shocked that RoCEv2 connections
> magically worked through the physical function which was not in the netns
> context.

I do not understand this part.
If you are in exclusive mode, the rdma devices must be in the respective/appropriate net ns.
It is unlikely to work; it may be some misconfiguration. Hard to say without the exact commands.
Zhu Yanjun April 14, 2023, 3:49 p.m. UTC | #16
On 2023/4/14 0:42, Parav Pandit wrote:
>
>> From: Mark Lehrer <lehrer@gmail.com>
>> Sent: Thursday, April 13, 2023 12:38 PM
>>
>>> Initiator is not net ns aware.
>> Am I correct in my assessment that this could be a container jailbreak risk?  We
>> aren't using containers,
> Unlikely, because the container orchestration would have to give the container access to the nvme char/misc device,
> and it should do that only when the nvme initiator/target are net ns aware.
>
>> but we were shocked that RoCEv2 connections
>> magically worked through the physical function which was not in the netns
>> context.
> I do not understand this part.
> If you are in exclusive mode, the rdma devices must be in the respective/appropriate net ns.

After applying these commits, rxe works in exclusive mode.
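
Roughly, the idea is that the UDP port 4791 socket is now created against the net namespace of the rxe device rather than init_net, along the lines of the sketch below. The wrapper name is made up for this example; udp_sock_create(), setup_udp_tunnel_sock() and rxe_udp_encap_recv() are the existing kernel symbols.

#include <linux/err.h>
#include <net/udp_tunnel.h>

/* Illustrative only: create the RoCEv2 UDP-encap socket in @net. */
static struct socket *rxe_create_tunnel_sock(struct net *net, __be16 port,
                                             bool ipv6)
{
        struct udp_port_cfg udp_cfg = { };
        struct udp_tunnel_sock_cfg tnl_cfg = { };
        struct socket *sock;
        int err;

        udp_cfg.family = ipv6 ? AF_INET6 : AF_INET;
        udp_cfg.local_udp_port = port;          /* htons(ROCE_V2_UDP_DPORT) */
        err = udp_sock_create(net, &udp_cfg, &sock);
        if (err < 0)
                return ERR_PTR(err);

        tnl_cfg.encap_type = 1;
        tnl_cfg.encap_rcv = rxe_udp_encap_recv; /* existing rxe receive hook */
        setup_udp_tunnel_sock(net, sock, &tnl_cfg);
        return sock;
}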

Zhu Yanjun

> It is unlikely to work; it may be some misconfiguration. Hard to say without the exact commands.
Mark Lehrer April 14, 2023, 4:24 p.m. UTC | #17
Apologies if you get this twice, lindbergh rejected my email for
admittedly legitimate reasons.

>> If you are in exclusive mode, the rdma devices must be in the respective/appropriate net ns.
>
> After applying these commits, rxe works in exclusive mode.

Yanjun,

Thanks again for the original patch.  It is good for the soft roce
driver to be a "reference" for proper rdma functionality.  What is
still needed for this fix to make it to mainline?

As an aside - is rdma_rxe now good enough for Red Hat to build it by
default again in EL10, or is more work needed?

I'm going to try making the nvme-fabrics set of modules use the
network namespace properly with RoCEv2.  TCP seems to work properly
already, so this should be more of a "port" than real development.
Are you (or anyone else) interested in working on this too?  I'm more
familiar with the video frame buffer area of the kernel, so first I'm
familiarizing myself with how nvme-fabrics works with TCP & netns.

Thanks,
Mark
Zhu Yanjun April 15, 2023, 1:35 p.m. UTC | #18
On 2023/4/15 0:24, Mark Lehrer wrote:
> Apologies if you get this twice, lindbergh rejected my email for
> admittedly legitimate reasons.
> 
>>> If you are in exclusive mode, the rdma devices must be in the respective/appropriate net ns.
>>
>> After applying these commits, rxe works in exclusive mode.
> 
> Yanjun,
> 
> Thanks again for the original patch.  It is good for the soft roce
> driver to be a "reference" for proper rdma functionality.  What is
> still needed for this fix to make it to mainline?

I am working hard to push these commits to mainline.

Zhu Yanjun

> 
> As an aside - is rdma_rxe now good enough for Red Hat to build it by
> default again in EL10, or is more work needed?
> 
> I'm going to try making the nvme-fabrics set of modules use the
> network namespace properly with RoCEv2.  TCP seems to work properly
> already, so this should be more of a "port" than real development.
> Are you (or anyone else) interested in working on this too?  I'm more
> familiar with the video frame buffer area of the kernel, so first I'm
> familiarizing myself with how nvme-fabrics works with TCP & netns.
> 
> Thanks,
> Mark
Parav Pandit April 19, 2023, 12:43 a.m. UTC | #19
> From: Mark Lehrer <lehrer@gmail.com>
> Sent: Friday, April 14, 2023 12:24 PM
 
> I'm going to try making the nvme-fabrics set of modules use the network
> namespace properly with RoCEv2.  TCP seems to work properly already, so this
> should be more of a "port" than real development.
TCP without a net ns notifier misses the net ns delete scenario, which results in a use-after-free bug; that should be fixed first as it is critical.
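A minimal sketch of such a notifier is below; the teardown helper is a placeholder for whatever the driver actually has to do, and none of this is existing nvme-tcp code. Only register_pernet_subsys()/unregister_pernet_subsys() and struct pernet_operations are the real kernel interface.

#include <linux/init.h>
#include <net/net_namespace.h>

/* Hypothetical pernet exit hook: tear down controllers bound to a dying
 * net namespace before the namespace itself is freed.
 */
static void nvme_tcp_pernet_exit(struct net *net)
{
        nvme_tcp_remove_ctrls_in_net(net);      /* placeholder teardown helper */
}

static struct pernet_operations nvme_tcp_pernet_ops = {
        .exit = nvme_tcp_pernet_exit,
};

static int __init nvme_tcp_ns_notifier_init(void)
{
        return register_pernet_subsys(&nvme_tcp_pernet_ops);
}

static void __exit nvme_tcp_ns_notifier_exit(void)
{
        unregister_pernet_subsys(&nvme_tcp_pernet_ops);
}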

> Are you (or anyone else) interested in working on this too?  I'm more familiar
> with the video frame buffer area of the kernel, so first I'm familiarizing myself
> with how nvme-fabrics works with TCP & netns.
> 
> Thanks,
> Mark
Zhu Yanjun April 19, 2023, 4:19 a.m. UTC | #20
On 2023/4/19 8:43, Parav Pandit wrote:
>
>> From: Mark Lehrer <lehrer@gmail.com>
>> Sent: Friday, April 14, 2023 12:24 PM
>   
>> I'm going to try making the nvme-fabrics set of modules use the network
>> namespace properly with RoCEv2.  TCP seems to work properly already, so this
>> should be more of a "port" than real development.
> TCP without a net ns notifier misses the net ns delete scenario, which results in a use-after-free bug; that should be fixed first as it is critical.

Sure. I also ran into this problem. If I remember correctly,
a net ns callback can fix it.

Zhu Yanjun

>
>> Are you (or anyone else) interested in working on this too?  I'm more familiar
>> with the video frame buffer area of the kernel, so first I'm familiarizing myself
>> with how nvme-fabrics works with TCP & netns.
>>
>> Thanks,
>> Mark
Mark Lehrer April 19, 2023, 6:01 p.m. UTC | #21
> TCP without a net ns notifier misses the net ns delete scenario, which results in a use-after-free bug; that should be fixed first as it is critical.
>
> Sure. I also ran into this problem. If I remember correctly,
> a net ns callback can fix it.

I'm not sure if the bug fix will be this in depth, but I have a
related question.  What is the proper way for the kernel nvme
initiator code to know which netns context to use?  e.g. should we
take the pid of the process that opened /dev/nvme-fabrics and look it
up (presumably this will be nvme-cli), and will this method give us
enough details for both tcp & rdma?

Mark


On Tue, Apr 18, 2023 at 10:19 PM Zhu Yanjun <yanjun.zhu@linux.dev> wrote:
>
>
> On 2023/4/19 8:43, Parav Pandit wrote:
> >
> >> From: Mark Lehrer <lehrer@gmail.com>
> >> Sent: Friday, April 14, 2023 12:24 PM
> >
> >> I'm going to try making the nvme-fabrics set of modules use the network
> >> namespace properly with RoCEv2.  TCP seems to work properly already, so this
> >> should be more of a "port" than real development.
> > TCP without a net ns notifier misses the net ns delete scenario, which results in a use-after-free bug; that should be fixed first as it is critical.
>
> Sure. I also ran into this problem. If I remember correctly,
> a net ns callback can fix it.
>
> Zhu Yanjun
>
> >
> >> Are you (or anyone else) interested in working on this too?  I'm more familiar
> >> with the video frame buffer area of the kernel, so first I'm familiarizing myself
> >> with how nvme-fabrics works with TCP & netns.
> >>
> >> Thanks,
> >> Mark
Zhu Yanjun April 20, 2023, 2:28 p.m. UTC | #22
On 2023/4/20 2:01, Mark Lehrer wrote:
>> TCP without a net ns notifier misses the net ns delete scenario, which results in a use-after-free bug; that should be fixed first as it is critical.
>>
>> Sure. I also ran into this problem. If I remember correctly,
>> a net ns callback can fix it.
> I'm not sure if the bug fix will be this in depth, but I have a
> related question.  What is the proper way for the kernel nvme
> initiator code to know which netns context to use?  e.g. should we
> take the pid of the process that opened /dev/nvme-fabrics and look it
> up (presumably this will be nvme-cli), and will this method give us
> enough details for both tcp & rdma?

Please check the netns callback functions. You will find all the answers 
to your questions.
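
For what it is worth, one possible shape of the answer, as a sketch rather than the current driver behaviour: the write() to /dev/nvme-fabrics runs in the context of the connecting process (normally nvme-cli), so the connect path can record current->nsproxy->net_ns at that point; for TCP the same namespace is also reachable later via sock_net() on the connected socket. The opts->net field below is hypothetical.

#include <linux/nsproxy.h>
#include <linux/sched.h>
#include <net/net_namespace.h>

/* Hypothetical: capture the namespace of the task issuing the connect;
 * this runs in the context of the process that wrote to /dev/nvme-fabrics.
 */
static void nvmf_capture_netns(struct nvmf_ctrl_options *opts)
{
        opts->net = get_net(current->nsproxy->net_ns);
}

The reference would then have to be dropped with put_net() when the options/controller are freed.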

Zhu Yanjun

>
> Mark
>
>
> On Tue, Apr 18, 2023 at 10:19 PM Zhu Yanjun <yanjun.zhu@linux.dev> wrote:
>>
>> On 2023/4/19 8:43, Parav Pandit wrote:
>>>> From: Mark Lehrer <lehrer@gmail.com>
>>>> Sent: Friday, April 14, 2023 12:24 PM
>>>> I'm going to try making the nvme-fabrics set of modules use the network
>>>> namespace properly with RoCEv2.  TCP seems to work properly already, so this
>>>> should be more of a "port" than real development.
>>> TCP without a net ns notifier misses the net ns delete scenario, which results in a use-after-free bug; that should be fixed first as it is critical.
>> Sure. I also ran into this problem. If I remember correctly,
>> a net ns callback can fix it.
>>
>> Zhu Yanjun
>>
>>>> Are you (or anyone else) interested in working on this too?  I'm more familiar
>>>> with the video frame buffer area of the kernel, so first I'm familiarizing myself
>>>> with how nvme-fabrics works with TCP & netns.
>>>>
>>>> Thanks,
>>>> Mark