[v3,0/5] Send RPC-on-TCP with one sock_sendmsg() call

Message ID: 168979108540.1905271.9720708849149797793.stgit@morisot.1015granger.net

Message

Chuck Lever July 19, 2023, 6:30 p.m. UTC
After some discussion with David Howells at LSF/MM 2023, we arrived
at a plan to transmit each RPC message on socket-based transports
with a single sock_sendmsg() call. This is an initial step in the
transition to handling folios that carry file content, but it has
scalability benefits as well.
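
To illustrate the approach, here is a minimal sketch, not the patch
itself, of sending a fully marshaled xdr_buf with one sock_sendmsg()
call. The helper name xdr_buf_to_bvec() and the caller-supplied
bio_vec array are illustrative assumptions; TCP record marking and
error handling are omitted.

/* Sketch only: load the xdr_buf's head, page array, and tail into
 * one bio_vec array, then hand the whole message to the socket
 * layer in a single call.
 */
static int example_sendmsg(struct socket *sock, struct xdr_buf *xdr,
			   struct bio_vec *bvec, unsigned int bvec_size)
{
	struct msghdr msg = {
		.msg_flags = MSG_SPLICE_PAGES,	/* splice pages rather than copy */
	};
	unsigned int count;

	/* Hypothetical helper; the series adds one like it to
	 * net/sunrpc/xdr.c (sketched under the changelog below).
	 */
	count = xdr_buf_to_bvec(bvec, bvec_size, xdr);

	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bvec, count, xdr->len);
	return sock_sendmsg(sock, &msg);
}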

Initial performance benchmark results show 5-10% throughput gains
with a fast link layer and a tmpfs export. I've added some other
ideas to this series for further discussion -- these have also shown
performance benefits in my testing.


Changes since v2:
* Keep rq_bvec instead of switching to a per-transport bio_vec array
* Remove the cork/uncork logic in svc_tcp_sendto
* Attempt to mitigate wake-up storms when receiving large RPC messages

Changes since RFC:
* Moved xdr_buf-to-bio_vec array helper to generic XDR code
* Added bio_vec array bounds-checking (see the helper sketch below)
* Re-ordered patches
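
For reference, here is a rough sketch of what such a bounds-checked
helper might look like. Names and details are illustrative, not the
committed code, and it assumes page_base is zero for brevity.

/* Illustrative: convert an xdr_buf's head, pages, and tail into a
 * bio_vec array, never writing past bvec_size entries.
 */
static unsigned int example_xdr_buf_to_bvec(struct bio_vec *bvec,
					    unsigned int bvec_size,
					    const struct xdr_buf *xdr)
{
	unsigned int count = 0;
	unsigned int len, i;

	if (xdr->head[0].iov_len && count < bvec_size)
		bvec_set_virt(&bvec[count++], xdr->head[0].iov_base,
			      xdr->head[0].iov_len);

	for (i = 0, len = xdr->page_len; len && count < bvec_size; ++i) {
		unsigned int seg = min_t(unsigned int, len, PAGE_SIZE);

		bvec_set_page(&bvec[count++], xdr->pages[i], seg, 0);
		len -= seg;
	}

	if (xdr->tail[0].iov_len && count < bvec_size)
		bvec_set_virt(&bvec[count++], xdr->tail[0].iov_base,
			      xdr->tail[0].iov_len);

	return count;	/* number of segments actually filled */
}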

---

Chuck Lever (5):
      SUNRPC: Convert svc_tcp_sendmsg to use bio_vecs directly
      SUNRPC: Send RPC message on TCP with a single sock_sendmsg() call
      SUNRPC: Convert svc_udp_sendto() to use the per-socket bio_vec array
      SUNRPC: Revert e0a912e8ddba
      SUNRPC: Reduce thread wake-up rate when receiving large RPC messages


 include/linux/sunrpc/svcsock.h |   4 +-
 include/linux/sunrpc/xdr.h     |   2 +
 net/sunrpc/svcsock.c           | 127 +++++++++++++++------------------
 net/sunrpc/xdr.c               |  50 +++++++++++++
 4 files changed, 112 insertions(+), 71 deletions(-)

--
Chuck Lever

Comments

David Howells July 26, 2023, 12:13 p.m. UTC | #1
Chuck Lever <cel@kernel.org> wrote:

> After some discussion with David Howells at LSF/MM 2023, we arrived
> at a plan to transmit each RPC message on socket-based transports
> with a single sock_sendmsg() call. This is an initial step in the
> transition to handling folios that carry file content, but it has
> scalability benefits as well.
> 
> Initial performance benchmark results show 5-10% throughput gains
> with a fast link layer and a tmpfs export. I've added some other
> ideas to this series for further discussion -- these have also shown
> performance benefits in my testing.

I like it :-)

David