
[v1,0/5] NFSv3 client RDMA multipath enhancements

Message ID 20210121191020.3144948-1-dan@kernelim.com (mailing list archive)

Message

Dan Aloni Jan. 21, 2021, 7:10 p.m. UTC
Hi,

The purpose of the following changes is to allow specifying multiple
target IP addresses in a single mount. Combining this with nconnect and
servers that support exposing multiple ports, we can achieve load
balancing and much greater throughput, especially on RDMA setups,
even with the older NFSv3 protocol.

The changes allow specifying a new `remoteports=<IP-addresses-ranges>`
mount option providing a group of IP addresses, from which `nconnect`
at the sunrpc scope picks target transport addresses in round-robin
fashion. There's also an accompanying `localports` parameter that
allows binding to local addresses, so that the source of each
transport is controlled and the transports do not all hog a single
local interface.
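
For illustration, a mount could then look something like this (the
dash range syntax below is just an example; the exact format is
defined by the fs_context parsing patch):

  # illustrative only - address ranges shown with a dash syntax
  mount -t nfs -o vers=3,proto=rdma,nconnect=4,remoteports=192.0.2.1-192.0.2.4,localports=198.51.100.1-198.51.100.2 \
      192.0.2.1:/export /mnt/nfs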

This patchset targets the linux-next tree.

Dan Aloni (5):
  sunrpc: Allow specifying a vector of IP addresses for nconnect
  xprtrdma: Bind to a local address if requested
  nfs: Extend nconnect with remoteports and localports mount params
  sunrpc: Add srcaddr to xprt sysfs debug
  nfs: Increase NFS_MAX_CONNECTIONS

 fs/nfs/client.c                            |  24 +++
 fs/nfs/fs_context.c                        | 173 ++++++++++++++++++++-
 fs/nfs/internal.h                          |   4 +
 include/linux/nfs_fs_sb.h                  |   2 +
 include/linux/sunrpc/clnt.h                |   9 ++
 include/linux/sunrpc/xprt.h                |   1 +
 net/sunrpc/clnt.c                          |  47 ++++++
 net/sunrpc/debugfs.c                       |   8 +-
 net/sunrpc/xprtrdma/svc_rdma_backchannel.c |   2 +-
 net/sunrpc/xprtrdma/transport.c            |  17 +-
 net/sunrpc/xprtrdma/verbs.c                |  15 +-
 net/sunrpc/xprtrdma/xprt_rdma.h            |   5 +-
 net/sunrpc/xprtsock.c                      |  49 +++---
 13 files changed, 329 insertions(+), 27 deletions(-)

Comments

Chuck Lever Jan. 21, 2021, 7:50 p.m. UTC | #1
Hey Dan-


First, thanks for posting patches!


> On Jan 21, 2021, at 2:10 PM, Dan Aloni <dan@kernelim.com> wrote:
> 
> Hi,
> 
> The purpose of the following changes is to allow specifying multiple
> target IP addresses in a single mount. Combining this with nconnect and
> servers that support exposing multiple ports,

"port" is probably a bad term to use here, as that term already
has a particular meaning when it comes to IP addresses. In
standards documents, we've stuck with the term "endpoint".

I worked with the IETF's nfsv4 WG a couple years ago to produce
a document that describes how we want NFS servers to advertise
their network configuration to clients.

https://datatracker.ietf.org/doc/rfc8587/

That gives a flavor for what we've done for NFSv4. IMO anything
done for NFSv3 ought to leverage similar principles and tactics.


> we can achieve load
> balancing and much greater throughput, especially on RDMA setups,
> even with the older NFSv3 protocol.

I support the basic goal of increasing transport parallelism.

As you probably became aware as you worked on these patches, the
Linux client shares one or a small set of connections across all
mount points of the same server. So a mount option that adds this
kind of control is going to be awkward.

Anna has proposed a /sys API that would enable this information to
be programmed into the kernel for all mount points sharing the
same set of connections. That would be a little nicer for building
separate administrator tools against, or even for providing an
automation mechanism (like an orchestrator) that would enable
clients to automatically fail over to a different server interface.

IMO I'd prefer to see a user space policy / tool that manages
endpoint lists and passes them to the kernel client dynamically
via Anna's API instead of adding one or more mount options, which
would be fixed for the life of the mount and shared with other
mount points that use the same transports to communicate with
the NFS server.
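
To make that concrete (purely illustrative - the sysfs paths below
are made up, and the real layout and semantics are whatever Anna's
patches end up defining), the idea is that a tool or orchestrator
could retarget an endpoint at runtime with something like:

  # hypothetical sysfs layout, shown only to sketch the idea
  echo 192.0.2.7 > /sys/kernel/sunrpc/xprt-switches/switch-0/xprt-1/dstaddr

so the endpoint set can change without remounting.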


As far as the NUMA affinity issues go, in the past I've attempted
to provide some degree of CPU affinity between RPC Call and Reply
handling only to find that it reduced performance unacceptably.
Perhaps something that is node-aware or LLC-aware would be better
than CPU affinity, and I'm happy to discuss that and any other
ways we think can improve NFS behavior on NUMA systems. It's quite
true that RDMA transports are more sensitive to NUMA than
traditional socket-based ones.


> The changes allow specifying a new `remoteports=<IP-addresses-ranges>`
> mount option providing a group of IP addresses, from which `nconnect`
> at the sunrpc scope picks target transport addresses in round-robin
> fashion. There's also an accompanying `localports` parameter that
> allows binding to local addresses, so that the source of each
> transport is controlled and the transports do not all hog a single
> local interface.
> 
> This patchset targets the linux-next tree.
> 
> Dan Aloni (5):
>  sunrpc: Allow specifying a vector of IP addresses for nconnect
>  xprtrdma: Bind to a local address if requested
>  nfs: Extend nconnect with remoteports and localports mount params
>  sunrpc: Add srcaddr to xprt sysfs debug
>  nfs: Increase NFS_MAX_CONNECTIONS
> 
> fs/nfs/client.c                            |  24 +++
> fs/nfs/fs_context.c                        | 173 ++++++++++++++++++++-
> fs/nfs/internal.h                          |   4 +
> include/linux/nfs_fs_sb.h                  |   2 +
> include/linux/sunrpc/clnt.h                |   9 ++
> include/linux/sunrpc/xprt.h                |   1 +
> net/sunrpc/clnt.c                          |  47 ++++++
> net/sunrpc/debugfs.c                       |   8 +-
> net/sunrpc/xprtrdma/svc_rdma_backchannel.c |   2 +-
> net/sunrpc/xprtrdma/transport.c            |  17 +-
> net/sunrpc/xprtrdma/verbs.c                |  15 +-
> net/sunrpc/xprtrdma/xprt_rdma.h            |   5 +-
> net/sunrpc/xprtsock.c                      |  49 +++---
> 13 files changed, 329 insertions(+), 27 deletions(-)
> 
> -- 
> 2.26.2
> 

--
Chuck Lever
Dan Aloni Jan. 24, 2021, 5:37 p.m. UTC | #2
On Thu, Jan 21, 2021 at 07:50:41PM +0000, Chuck Lever wrote:
> I worked with the IETF's nfsv4 WG a couple years ago to produce
> a document that describes how we want NFS servers to advertise
> their network configuration to clients.
> 
> https://datatracker.ietf.org/doc/rfc8587/
> 
> That gives a flavor for what we've done for NFSv4. IMO anything
> done for NFSv3 ought to leverage similar principles and tactics.
 
Thanks for the pointer - I'll read and take it into consideration.

> > we can achieve load
> > balancing and much greater throughput, especially on RDMA setups,
> > even with the older NFSv3 protocol.
> 
> I support the basic goal of increasing transport parallelism.
> 
> As you probably became aware as you worked on these patches, the
> Linux client shares one or a small set of connections across all
> mount points of the same server. So a mount option that adds this
> kind of control is going to be awkward.

I tend to agree from a developer perspective, but to give some
context: from an admin's POV it is often not immediately apparent
that this is what happens behind the scenes (i.e. the
`nfs_match_client` function). In our case, users have not reported
that our addition to the mount parameters looked weird; they saw it
as naturally extending nconnect, which I think falls under similar
considerations - it provides deeper details at mount time, rather
than afterwards, about how transports should behave and which NFS
sessions actually get established.

Surely there may be better ways to do this, following from what's
discussed next.

> Anna has proposed a /sys API that would enable this information to
> be programmed into the kernel for all mount points sharing the
> same set of connections. That would be a little nicer for building
> separate administrator tools against, or even for providing an
> automation mechanism (like an orchestrator) that would enable
> clients to automatically fail over to a different server interface.
>
> IMO I'd prefer to see a user space policy / tool that manages
> endpoint lists and passes them to the kernel client dynamically
> via Anna's API instead of adding one or more mount options, which
> would be fixed for the life of the mount and shared with other
> mount points that use the same transports to communicate with
> the NFS server.

I see now that these are fairly recent patches that I unfortunately
missed while working on other things. If this is the intended API for
managing active NFS sessions, I would very much like to help test and
extend this code.

So a good way to go with this would be to look into supporting an
'add transport' op by extending the new interface, and to optionally
allow specifying a local address bind, similar to the work I've done
for the mount options.

I'll also be glad to contribute to nfs-utils so that we'd have the
anticipated userspace tool, maybe 'nfs' (like `/sbin/ip` from
iproute), that can be executed for this purpose, e.g. 'nfs transport
add <IP> mnt <PATH>'.
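
For example, usage could look something like this (hypothetical
syntax just to sketch the idea; none of these subcommands or flags
exist yet):

  # add transports to the client backing a mount point; the second
  # one is bound to a specific local source address
  nfs transport add 192.0.2.2 mnt /mnt/data
  nfs transport add 192.0.2.3 local 198.51.100.3 mnt /mnt/data

  # list the transports currently serving that mount point
  nfs transport list mnt /mnt/data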

Also, from a lower-level API perspective, we would need a way to
figure out the client ID for a given mount point, so that the ID can
be used to locate the relevant sysfs directory. Perhaps this can be
done via a new ioctl on the mount point itself?

> As far as the NUMA affinity issues go, in the past I've attempted
> to provide some degree of CPU affinity between RPC Call and Reply
> handling only to find that it reduced performance unacceptably.
> Perhaps something that is node-aware or LLC-aware would be better
> than CPU affinity, and I'm happy to discuss that and any other
> ways we think can improve NFS behavior on NUMA systems. It's quite
> true that RDMA transports are more sensitive to NUMA than
> traditional socket-based ones.

Another thing to consider is that RDMA is special here, as CPU memory
caching can be skipped, and even main memory can be - for example, in
the special case where the NFS read/write payload memory is not main
system memory but is mapped from PCI, the kernel's own PCI_P2PDMA
distance matrix could be used for better xprt selection.