
[net-next,3/6] sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather then sendpage

Message ID 20230609100221.2620633-4-dhowells@redhat.com (mailing list archive)
State Accepted
Commit 5df5dd03a8f71ca9640f208d8f523856e1069ee7
Delegated to: Netdev Maintainers
Series splice, net: Some miscellaneous MSG_SPLICE_PAGES changes

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 27 this patch: 27
netdev/cc_maintainers success CCed 10 of 10 maintainers
netdev/build_clang success Errors and warnings before: 214 this patch: 214
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 27 this patch: 27
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 74 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

David Howells June 9, 2023, 10:02 a.m. UTC
When transmitting data, call down into TCP using sendmsg with
MSG_SPLICE_PAGES to indicate that content should be spliced rather than
performing sendpage calls to transmit header, data pages and trailer.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
cc: Trond Myklebust <trond.myklebust@hammerspace.com>
cc: Anna Schumaker <anna@kernel.org>
cc: Jeff Layton <jlayton@kernel.org>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-nfs@vger.kernel.org
cc: netdev@vger.kernel.org
---
 include/linux/sunrpc/svc.h | 11 +++++------
 net/sunrpc/svcsock.c       | 38 ++++++++++++--------------------------
 2 files changed, 17 insertions(+), 32 deletions(-)
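
The shape of the conversion, reduced to its essentials: each per-fragment
kernel_sendpage() call is replaced by a single sock_sendmsg() whose iterator
describes all the fragments and whose msg_flags include MSG_SPLICE_PAGES. A
minimal sketch of the pattern (the variables here are illustrative, not taken
from the patch):

	/* Old pattern: one kernel_sendpage() call per page fragment. */
	ret = kernel_sendpage(sock, page, offset, len, flags);

	/* New pattern: describe every fragment with an iterator and make a
	 * single sendmsg call.  MSG_SPLICE_PAGES asks TCP to splice the
	 * pages into the socket rather than copying the data, so the pages
	 * must remain unchanged until transmission completes.
	 */
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | flags };

	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bvec, nr_bvecs, total_len);
	ret = sock_sendmsg(sock, &msg);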

Comments

Jeff Layton Aug. 11, 2023, 10:50 p.m. UTC | #1
On Fri, 2023-06-09 at 11:02 +0100, David Howells wrote:
> When transmitting data, call down into TCP using sendmsg with
> MSG_SPLICE_PAGES to indicate that content should be spliced rather than
> performing sendpage calls to transmit header, data pages and trailer.
> 
> Signed-off-by: David Howells <dhowells@redhat.com>
> Acked-by: Chuck Lever <chuck.lever@oracle.com>
> cc: Trond Myklebust <trond.myklebust@hammerspace.com>
> cc: Anna Schumaker <anna@kernel.org>
> cc: Jeff Layton <jlayton@kernel.org>
> cc: "David S. Miller" <davem@davemloft.net>
> cc: Eric Dumazet <edumazet@google.com>
> cc: Jakub Kicinski <kuba@kernel.org>
> cc: Paolo Abeni <pabeni@redhat.com>
> cc: Jens Axboe <axboe@kernel.dk>
> cc: Matthew Wilcox <willy@infradead.org>
> cc: linux-nfs@vger.kernel.org
> cc: netdev@vger.kernel.org
> ---
>  include/linux/sunrpc/svc.h | 11 +++++------
>  net/sunrpc/svcsock.c       | 38 ++++++++++++--------------------------
>  2 files changed, 17 insertions(+), 32 deletions(-)
> 

I'm seeing a regression in pynfs runs with v6.5-rc5. 3 tests are failing
in a similar fashion. WRT1b is one of them.

[vagrant@jlayton-kdo-nfsd nfs4.0]$  ./testserver.py --rundeps --maketree --uid=0 --gid=0 localhost:/export/pynfs/4.0/ WRT1b
**************************************************
WRT1b    st_write.testSimpleWrite2                                : FAILURE
           READ returned
           b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
           expected b'\x00\x00\x00\x00\x00write data'
INIT     st_setclientid.testValid                                 : PASS
MKFILE   st_open.testOpen                                         : PASS
**************************************************
Command line asked for 3 of 679 tests
Of those: 0 Skipped, 1 Failed, 0 Warned, 2 Passed


This test just writes "write data" starting at offset 30 and then reads
the data back. It looks like we're seeing zeroes in the read reply where
the data should be.
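
Roughly, the test amounts to the following userspace sketch (the offsets are
inferred from the byte strings above and the mount path is hypothetical; pynfs
issues raw NFSv4 WRITE/READ operations rather than going through a local
mount, so a cached local read may not reproduce the on-the-wire behaviour):

	/* Sketch of what WRT1b exercises: write 10 bytes at offset 30 into
	 * a fresh file, then read 15 bytes back from offset 25 and expect
	 * five bytes of hole followed by the data.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[15];
		int fd = open("/mnt/nfs/wrt1b", O_RDWR | O_CREAT | O_TRUNC, 0644);

		if (fd < 0)
			return 1;
		pwrite(fd, "write data", 10, 30);
		if (pread(fd, buf, sizeof(buf), 25) == sizeof(buf) &&
		    memcmp(buf + 5, "write data", 10) != 0)
			fprintf(stderr, "read back zeroes where data should be\n");
		close(fd);
		return 0;
	}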

A bisect landed on this patch, which I'm assuming is the same as this
commit in mainline:

    5df5dd03a8f7 sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather then sendpage

...any thoughts as to what might be wrong?

> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 762d7231e574..f66ec8fdb331 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv)
>  extern u32 svc_max_payload(const struct svc_rqst *rqstp);
>  
>  /*
> - * RPC Requsts and replies are stored in one or more pages.
> + * RPC Requests and replies are stored in one or more pages.
>   * We maintain an array of pages for each server thread.
>   * Requests are copied into these pages as they arrive.  Remaining
>   * pages are available to write the reply into.
>   *
> - * Pages are sent using ->sendpage so each server thread needs to
> - * allocate more to replace those used in sending.  To help keep track
> - * of these pages we have a receive list where all pages initialy live,
> - * and a send list where pages are moved to when there are to be part
> - * of a reply.
> + * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread
> + * needs to allocate more to replace those used in sending.  To help keep track
> + * of these pages we have a receive list where all pages initialy live, and a
> + * send list where pages are moved to when there are to be part of a reply.
>   *
>   * We use xdr_buf for holding responses as it fits well with NFS
>   * read responses (that have a header, and some data pages, and possibly
> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> index f77cebe2c071..9d9f522e3ae1 100644
> --- a/net/sunrpc/svcsock.c
> +++ b/net/sunrpc/svcsock.c
> @@ -1203,13 +1203,14 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
>  static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
>  			      int flags)
>  {
> -	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
> -			       offset_in_page(vec->iov_base),
> -			       vec->iov_len, flags);
> +	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | flags, };
> +
> +	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, vec, 1, vec->iov_len);
> +	return sock_sendmsg(sock, &msg);
>  }
>  
>  /*
> - * kernel_sendpage() is used exclusively to reduce the number of
> + * MSG_SPLICE_PAGES is used exclusively to reduce the number of
>   * copy operations in this path. Therefore the caller must ensure
>   * that the pages backing @xdr are unchanging.
>   *
> @@ -1249,28 +1250,13 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
>  	if (ret != head->iov_len)
>  		goto out;
>  
> -	if (xdr->page_len) {
> -		unsigned int offset, len, remaining;
> -		struct bio_vec *bvec;
> -
> -		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
> -		offset = offset_in_page(xdr->page_base);
> -		remaining = xdr->page_len;
> -		while (remaining > 0) {
> -			len = min(remaining, bvec->bv_len - offset);
> -			ret = kernel_sendpage(sock, bvec->bv_page,
> -					      bvec->bv_offset + offset,
> -					      len, 0);
> -			if (ret < 0)
> -				return ret;
> -			*sentp += ret;
> -			if (ret != len)
> -				goto out;
> -			remaining -= len;
> -			offset = 0;
> -			bvec++;
> -		}
> -	}
> +	msg.msg_flags = MSG_SPLICE_PAGES;
> +	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
> +		      xdr_buf_pagecount(xdr), xdr->page_len);
> +	ret = sock_sendmsg(sock, &msg);
> +	if (ret < 0)
> +		return ret;
> +	*sentp += ret;
>  
>  	if (tail->iov_len) {
>  		ret = svc_tcp_send_kvec(sock, tail, 0);
>
Jeff Layton Aug. 11, 2023, 11:07 p.m. UTC | #2
On Fri, 2023-08-11 at 18:50 -0400, Jeff Layton wrote:
> On Fri, 2023-06-09 at 11:02 +0100, David Howells wrote:
> > When transmitting data, call down into TCP using sendmsg with
> > MSG_SPLICE_PAGES to indicate that content should be spliced rather than
> > performing sendpage calls to transmit header, data pages and trailer.
> > 
> > Signed-off-by: David Howells <dhowells@redhat.com>
> > Acked-by: Chuck Lever <chuck.lever@oracle.com>
> > cc: Trond Myklebust <trond.myklebust@hammerspace.com>
> > cc: Anna Schumaker <anna@kernel.org>
> > cc: Jeff Layton <jlayton@kernel.org>
> > cc: "David S. Miller" <davem@davemloft.net>
> > cc: Eric Dumazet <edumazet@google.com>
> > cc: Jakub Kicinski <kuba@kernel.org>
> > cc: Paolo Abeni <pabeni@redhat.com>
> > cc: Jens Axboe <axboe@kernel.dk>
> > cc: Matthew Wilcox <willy@infradead.org>
> > cc: linux-nfs@vger.kernel.org
> > cc: netdev@vger.kernel.org
> > ---
> >  include/linux/sunrpc/svc.h | 11 +++++------
> >  net/sunrpc/svcsock.c       | 38 ++++++++++++--------------------------
> >  2 files changed, 17 insertions(+), 32 deletions(-)
> > 
> 
> I'm seeing a regression in pynfs runs with v6.5-rc5. 3 tests are failing
> in a similar fashion. WRT1b is one of them.
> 
> [vagrant@jlayton-kdo-nfsd nfs4.0]$  ./testserver.py --rundeps --maketree --uid=0 --gid=0 localhost:/export/pynfs/4.0/ WRT1b
> **************************************************
> WRT1b    st_write.testSimpleWrite2                                : FAILURE
>            READ returned
>            b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
>            expected b'\x00\x00\x00\x00\x00write data'
> INIT     st_setclientid.testValid                                 : PASS
> MKFILE   st_open.testOpen                                         : PASS
> **************************************************
> Command line asked for 3 of 679 tests
> Of those: 0 Skipped, 1 Failed, 0 Warned, 2 Passed

FWIW, here's a capture that shows the problem. See frames 109-112 in
particular. If no one has thoughts on this one, I'll plan to have a look
early next week.

Thanks,
Jeff Layton Aug. 12, 2023, 11:45 a.m. UTC | #3
On Fri, 2023-08-11 at 19:07 -0400, Jeff Layton wrote:
> On Fri, 2023-08-11 at 18:50 -0400, Jeff Layton wrote:
> > On Fri, 2023-06-09 at 11:02 +0100, David Howells wrote:
> > > When transmitting data, call down into TCP using sendmsg with
> > > MSG_SPLICE_PAGES to indicate that content should be spliced rather than
> > > performing sendpage calls to transmit header, data pages and trailer.
> > > 
> > > Signed-off-by: David Howells <dhowells@redhat.com>
> > > Acked-by: Chuck Lever <chuck.lever@oracle.com>
> > > cc: Trond Myklebust <trond.myklebust@hammerspace.com>
> > > cc: Anna Schumaker <anna@kernel.org>
> > > cc: Jeff Layton <jlayton@kernel.org>
> > > cc: "David S. Miller" <davem@davemloft.net>
> > > cc: Eric Dumazet <edumazet@google.com>
> > > cc: Jakub Kicinski <kuba@kernel.org>
> > > cc: Paolo Abeni <pabeni@redhat.com>
> > > cc: Jens Axboe <axboe@kernel.dk>
> > > cc: Matthew Wilcox <willy@infradead.org>
> > > cc: linux-nfs@vger.kernel.org
> > > cc: netdev@vger.kernel.org
> > > ---
> > >  include/linux/sunrpc/svc.h | 11 +++++------
> > >  net/sunrpc/svcsock.c       | 38 ++++++++++++--------------------------
> > >  2 files changed, 17 insertions(+), 32 deletions(-)
> > > 
> > 
> > I'm seeing a regression in pynfs runs with v6.5-rc5. 3 tests are failing
> > in a similar fashion. WRT1b is one of them.
> > 
> > [vagrant@jlayton-kdo-nfsd nfs4.0]$  ./testserver.py --rundeps --maketree --uid=0 --gid=0 localhost:/export/pynfs/4.0/ WRT1b
> > **************************************************
> > WRT1b    st_write.testSimpleWrite2                                : FAILURE
> >            READ returned
> >            b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
> >            expected b'\x00\x00\x00\x00\x00write data'
> > INIT     st_setclientid.testValid                                 : PASS
> > MKFILE   st_open.testOpen                                         : PASS
> > **************************************************
> > Command line asked for 3 of 679 tests
> > Of those: 0 Skipped, 1 Failed, 0 Warned, 2 Passed
> 
> FWIW, here's a capture that shows the problem. See frames 109-112 in
> particular. If no one has thoughts on this one, I'll plan to have a look
> early next week.

Since Chuck's nfsd-next branch (which is based on v6.5-rc5) wasn't
showing this issue, I ran a bisect to see what fixes it, and it landed
on this patch:

commit ed9cd98404c8ae5d0bdd6e7ce52e458a8e0841bb
Author: Chuck Lever <chuck.lever@oracle.com>
Date:   Wed Jul 19 14:31:03 2023 -0400

    SUNRPC: Convert svc_tcp_sendmsg to use bio_vecs directly
    
    Add a helper to convert a whole xdr_buf directly into an array of
    bio_vecs, then send this array instead of iterating piecemeal over
    the xdr_buf containing the outbound RPC message.
    
    Reviewed-by: David Howells <dhowells@redhat.com>
    Signed-off-by: Chuck Lever <chuck.lever@oracle.com>


I'll follow up on that thread. I think we may want to pull this patch
into mainline for v6.5.
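
Per that description, the fix changes svc_tcp_sendmsg() to describe the whole
RPC message (head kvec, page array and tail kvec) as one bio_vec array handed
to a single sendmsg call, instead of mixing a kvec-backed send for the head
and tail with a bvec-backed send for the pages as the patch below does. A
rough sketch of what such a conversion helper might look like (illustrative
only; see commit ed9cd98404c8 for the actual code):

	/* Flatten an xdr_buf into bio_vecs covering head, pages and tail,
	 * so one iov_iter_bvec()/sock_sendmsg() pair can send the whole
	 * message.  Assumes @bvec has room for every segment.
	 */
	static unsigned int xdr_buf_to_bvec_sketch(struct bio_vec *bvec,
						   const struct xdr_buf *xdr)
	{
		struct page **pages = xdr->pages + (xdr->page_base >> PAGE_SHIFT);
		unsigned int offset = offset_in_page(xdr->page_base);
		unsigned int remaining = xdr->page_len;
		unsigned int count = 0;

		if (xdr->head[0].iov_len)
			bvec_set_virt(&bvec[count++], xdr->head[0].iov_base,
				      xdr->head[0].iov_len);

		while (remaining) {
			unsigned int len = min_t(unsigned int, remaining,
						 PAGE_SIZE - offset);

			bvec_set_page(&bvec[count++], *pages++, len, offset);
			remaining -= len;
			offset = 0;
		}

		if (xdr->tail[0].iov_len)
			bvec_set_virt(&bvec[count++], xdr->tail[0].iov_base,
				      xdr->tail[0].iov_len);
		return count;
	}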

Patch

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 762d7231e574..f66ec8fdb331 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -161,16 +161,15 @@  static inline bool svc_put_not_last(struct svc_serv *serv)
 extern u32 svc_max_payload(const struct svc_rqst *rqstp);
 
 /*
- * RPC Requsts and replies are stored in one or more pages.
+ * RPC Requests and replies are stored in one or more pages.
  * We maintain an array of pages for each server thread.
  * Requests are copied into these pages as they arrive.  Remaining
  * pages are available to write the reply into.
  *
- * Pages are sent using ->sendpage so each server thread needs to
- * allocate more to replace those used in sending.  To help keep track
- * of these pages we have a receive list where all pages initialy live,
- * and a send list where pages are moved to when there are to be part
- * of a reply.
+ * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread
+ * needs to allocate more to replace those used in sending.  To help keep track
+ * of these pages we have a receive list where all pages initialy live, and a
+ * send list where pages are moved to when there are to be part of a reply.
  *
  * We use xdr_buf for holding responses as it fits well with NFS
  * read responses (that have a header, and some data pages, and possibly
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index f77cebe2c071..9d9f522e3ae1 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1203,13 +1203,14 @@  static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
 static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
 			      int flags)
 {
-	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
-			       offset_in_page(vec->iov_base),
-			       vec->iov_len, flags);
+	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | flags, };
+
+	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, vec, 1, vec->iov_len);
+	return sock_sendmsg(sock, &msg);
 }
 
 /*
- * kernel_sendpage() is used exclusively to reduce the number of
+ * MSG_SPLICE_PAGES is used exclusively to reduce the number of
  * copy operations in this path. Therefore the caller must ensure
  * that the pages backing @xdr are unchanging.
  *
@@ -1249,28 +1250,13 @@  static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
 	if (ret != head->iov_len)
 		goto out;
 
-	if (xdr->page_len) {
-		unsigned int offset, len, remaining;
-		struct bio_vec *bvec;
-
-		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
-		offset = offset_in_page(xdr->page_base);
-		remaining = xdr->page_len;
-		while (remaining > 0) {
-			len = min(remaining, bvec->bv_len - offset);
-			ret = kernel_sendpage(sock, bvec->bv_page,
-					      bvec->bv_offset + offset,
-					      len, 0);
-			if (ret < 0)
-				return ret;
-			*sentp += ret;
-			if (ret != len)
-				goto out;
-			remaining -= len;
-			offset = 0;
-			bvec++;
-		}
-	}
+	msg.msg_flags = MSG_SPLICE_PAGES;
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
+		      xdr_buf_pagecount(xdr), xdr->page_len);
+	ret = sock_sendmsg(sock, &msg);
+	if (ret < 0)
+		return ret;
+	*sentp += ret;
 
 	if (tail->iov_len) {
 		ret = svc_tcp_send_kvec(sock, tail, 0);