
[5/8] SUNRPC: Don't truncate tail in xdr_inline_pages()

Message ID 20201122205229.3826-6-trondmy@kernel.org (mailing list archive)
State New, archived
Series: Fix various issues in the SUNRPC xdr code

Commit Message

Trond Myklebust Nov. 22, 2020, 8:52 p.m. UTC
From: Trond Myklebust <trond.myklebust@hammerspace.com>

True that if the length of the pages[] array is not 4-byte aligned, then
we will need to store the padding in the tail, but there is no need to
truncate the total buffer length here.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 net/sunrpc/xdr.c | 3 ---
 1 file changed, 3 deletions(-)

Comments

Chuck Lever Nov. 23, 2020, 1:24 a.m. UTC | #1
> On Nov 22, 2020, at 3:52 PM, trondmy@kernel.org wrote:
> 
> From: Trond Myklebust <trond.myklebust@hammerspace.com>
> 
> True that if the length of the pages[] array is not 4-byte aligned, then
> we will need to store the padding in the tail, but there is no need to
> truncate the total buffer length here.

This description confuses me. The existing code reduces the length of
the tail, not the "total buffer length." And what the removed logic is
doing is taking out the length of the XDR pad for the pages array when
it is not expected to be used.
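
For reference, the 4-byte alignment rule that test relies on, written out as a
standalone sketch (the helper name is illustrative; this is not the kernel's
own pad helper):

/* XDR encodes opaque data in 4-byte units: a pages[] payload whose length
 * is already a multiple of four needs no pad bytes in the tail. */
static inline unsigned int xdr_pages_pad(unsigned int page_len)
{
	return (4 - (page_len & 3)) & 3;	/* 0 when page_len % 4 == 0 */
}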


> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
> net/sunrpc/xdr.c | 3 ---
> 1 file changed, 3 deletions(-)
> 
> diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
> index 3ce0a5daa9eb..5a450055469f 100644
> --- a/net/sunrpc/xdr.c
> +++ b/net/sunrpc/xdr.c
> @@ -193,9 +193,6 @@ xdr_inline_pages(struct xdr_buf *xdr, unsigned int offset,
> 
> 	tail->iov_base = buf + offset;
> 	tail->iov_len = buflen - offset;
> -	if ((xdr->page_len & 3) == 0)
> -		tail->iov_len -= sizeof(__be32);
> -
> 	xdr->buflen += len;
> }
> EXPORT_SYMBOL_GPL(xdr_inline_pages);
> -- 
> 2.28.0
> 

--
Chuck Lever
Trond Myklebust Nov. 23, 2020, 4:29 a.m. UTC | #2
On Sun, 2020-11-22 at 20:24 -0500, Chuck Lever wrote:
> 
> 
> > On Nov 22, 2020, at 3:52 PM, trondmy@kernel.org wrote:
> > 
> > From: Trond Myklebust <trond.myklebust@hammerspace.com>
> > 
> > True that if the length of the pages[] array is not 4-byte aligned,
> > then
> > we will need to store the padding in the tail, but there is no need
> > to
> > truncate the total buffer length here.
> 
> This description confuses me. The existing code reduces the length of
> the tail, not the "total buffer length." And what the removed logic
> is
> doing is taking out the length of the XDR pad for the pages array
> when
> it is not expected to be used.

Why are we bothering to do that? There is nothing problematic with just
ignoring this test and leaving the tail length as it is, nor is there
anything to be gained by applying it.

> > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > ---
> > net/sunrpc/xdr.c | 3 ---
> > 1 file changed, 3 deletions(-)
> > 
> > diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
> > index 3ce0a5daa9eb..5a450055469f 100644
> > --- a/net/sunrpc/xdr.c
> > +++ b/net/sunrpc/xdr.c
> > @@ -193,9 +193,6 @@ xdr_inline_pages(struct xdr_buf *xdr, unsigned
> > int offset,
> > 
> >         tail->iov_base = buf + offset;
> >         tail->iov_len = buflen - offset;
> > -       if ((xdr->page_len & 3) == 0)
> > -               tail->iov_len -= sizeof(__be32);
> > -
> >         xdr->buflen += len;
> > }
> > EXPORT_SYMBOL_GPL(xdr_inline_pages);
> > -- 
> > 2.28.0
> > 
> 
> --
> Chuck Lever
> 
> 
>
Chuck Lever Nov. 23, 2020, 2:52 p.m. UTC | #3
> On Nov 22, 2020, at 11:29 PM, Trond Myklebust <trondmy@hammerspace.com> wrote:
> 
> On Sun, 2020-11-22 at 20:24 -0500, Chuck Lever wrote:
>> 
>> 
>>> On Nov 22, 2020, at 3:52 PM, trondmy@kernel.org wrote:
>>> 
>>> From: Trond Myklebust <trond.myklebust@hammerspace.com>
>>> 
>>> True that if the length of the pages[] array is not 4-byte aligned,
>>> then
>>> we will need to store the padding in the tail, but there is no need
>>> to
>>> truncate the total buffer length here.
>> 
>> This description confuses me. The existing code reduces the length of
>> the tail, not the "total buffer length." And what the removed logic
>> is
>> doing is taking out the length of the XDR pad for the pages array
>> when
>> it is not expected to be used.
> 
> Why are we bothering to do that? There is nothing problematic with just
> ignoring this test and leaving the tail length as it is, nor is there
> anything to be gained by applying it.

You are correct that leaving the buffer a little long is not going
to harm normal operation. After all, we lived with a wildly over-
estimated slack length for years.

The purpose of this code path is to prepare the receive buffer with
the memory resources and expected length of the Reply. The series
of patches that introduced this particular change was all about
ensuring that the estimated length of the reply message was exact.

If the reply message size is overestimated, that moves the end-of-
message sentinel that is later set by xdr_init_decode(). We then
miss subtle problems like our fixed size estimates are incorrect
or a man-in-the-middle is extending the RPC message or the server
is malfunctioning.

<scratches chin>

After moving the ->pages pad into ->pages, I'm wondering if you
should revert 02ef04e432ba ("NFS: Account for XDR pad of buf->pages") --
the maxsz macros don't need to account for the XDR pad of ->pages
any more. Then the below hunk makes sense. The patch description
still doesn't, though ;-)

And then you should confirm that we are still getting the receive
buffer size estimate right for krb5i and krb5p.


>>> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
>>> ---
>>> net/sunrpc/xdr.c | 3 ---
>>> 1 file changed, 3 deletions(-)
>>> 
>>> diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
>>> index 3ce0a5daa9eb..5a450055469f 100644
>>> --- a/net/sunrpc/xdr.c
>>> +++ b/net/sunrpc/xdr.c
>>> @@ -193,9 +193,6 @@ xdr_inline_pages(struct xdr_buf *xdr, unsigned
>>> int offset,
>>> 
>>>         tail->iov_base = buf + offset;
>>>         tail->iov_len = buflen - offset;
>>> -       if ((xdr->page_len & 3) == 0)
>>> -               tail->iov_len -= sizeof(__be32);
>>> -
>>>         xdr->buflen += len;
>>> }
>>> EXPORT_SYMBOL_GPL(xdr_inline_pages);
>>> -- 
>>> 2.28.0
>>> 
>> 
>> --
>> Chuck Lever
>> 
>> 
>> 
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com

--
Chuck Lever
Trond Myklebust Nov. 23, 2020, 3:37 p.m. UTC | #4
On Mon, 2020-11-23 at 09:52 -0500, Chuck Lever wrote:
> 
> 
> > On Nov 22, 2020, at 11:29 PM, Trond Myklebust <
> > trondmy@hammerspace.com> wrote:
> > 
> > On Sun, 2020-11-22 at 20:24 -0500, Chuck Lever wrote:
> > > 
> > > 
> > > > On Nov 22, 2020, at 3:52 PM, trondmy@kernel.org wrote:
> > > > 
> > > > From: Trond Myklebust <trond.myklebust@hammerspace.com>
> > > > 
> > > > True that if the length of the pages[] array is not 4-byte
> > > > aligned,
> > > > then
> > > > we will need to store the padding in the tail, but there is no
> > > > need
> > > > to
> > > > truncate the total buffer length here.
> > > 
> > > This description confuses me. The existing code reduces the
> > > length of
> > > the tail, not the "total buffer length." And what the removed
> > > logic
> > > is
> > > doing is taking out the length of the XDR pad for the pages array
> > > when
> > > it is not expected to be used.
> > 
> > Why are we bothering to do that? There is nothing problematic with
> > just
> > ignoring this test and leaving the tail length as it is, nor is
> > there
> > anything to be gained by applying it.
> 
> You are correct that leaving the buffer a little long is not going
> to harm normal operation. After all, we lived with a wildly over-
> estimated slack length for years.
> 
> The purpose of this code path is to prepare the receive buffer with
> the memory resources and expected length of the Reply. The series
> of patches that introduced this particular change was all about
> ensuring that the estimated length of the reply message was exact.
> 
> If the reply message size is overestimated, that moves the end-of-
> message sentinel that is later set by xdr_init_decode(). We then
> miss subtle problems like our fixed size estimates are incorrect
> or a man-in-the-middle is extending the RPC message or the server
> is malfunctioning.
> 
> <scratches chin>
> 
> After moving the ->pages pad into ->pages, I'm wondering if you
> should revert 02ef04e432ba ("NFS: Account for XDR pad of buf->pages")
> --
> the maxsz macros don't need to account for the XDR pad of ->pages
> any more. Then the below hunk makes sense. The patch description
> still doesn't, though ;-)
> 

I don't think it needs to be reverted. I think you are right to include
the padding in the buffer size that we use to set the value of
task->tk_rqstp->rq_rcvsize.

That said, it seems wrong to include that padding as part of the
'hdrsize' argument in rpc_prepare_reply_pages(). That just causes
confusion, because the padding is not part of the header in front of
the array of pages. It is part of the tail data after the array of
pages. So I think a cleanup there may be warranted.
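
A sketch of the caller-side cleanup being suggested, assuming
rpc_prepare_reply_pages() keeps its current argument list; the helper and its
parameter names are made up for illustration:

/* Hypothetical encode-side helper: 'replyhdr_sz' covers only the XDR words
 * that precede the pages[] data in the reply; any pad that follows the
 * pages is left to the XDR layer rather than folded in here. */
static void foo_prepare_read_reply(struct rpc_rqst *req, struct page **pages,
				   unsigned int base, unsigned int count,
				   unsigned int replyhdr_sz)
{
	rpc_prepare_reply_pages(req, pages, base, count, replyhdr_sz);
}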

The other thing that I'm considering is that we may want to optimise to
avoid setting up an RDMA SEND just for the padding if that is truly the
last word in the RPC call (it matters less if there is other data that
requires us to set up such a SEND anyway). Not sure how to do that in a
clean manner, though. Perhaps we'd have to pass in the padding size as
a separate argument to xdr_inline_pages() (and also to
rpc_prepare_reply_pages())?
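
One possible shape for that interface, sketched only to show the idea; neither
the wrapper nor its 'padding' argument exists in the kernel today:

/* Hypothetical wrapper: the caller states the pages[] pad explicitly, so a
 * transport can tell whether the tail carries anything beyond the pad and,
 * if not, skip a pad-only SEND. */
void xdr_inline_pages_padded(struct xdr_buf *xdr, unsigned int offset,
			     struct page **pages, unsigned int base,
			     unsigned int len, unsigned int padding)
{
	xdr_inline_pages(xdr, offset, pages, base, len);
	/* A real implementation would record 'padding' somewhere the
	 * transport can see it, instead of inferring it from page_len & 3. */
}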


> And then you should confirm that we are still getting the receive
> buffer size estimate right for krb5i and krb5p.
> 
> 
> > > > Signed-off-by: Trond Myklebust
> > > > <trond.myklebust@hammerspace.com>
> > > > ---
> > > > net/sunrpc/xdr.c | 3 ---
> > > > 1 file changed, 3 deletions(-)
> > > > 
> > > > diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
> > > > index 3ce0a5daa9eb..5a450055469f 100644
> > > > --- a/net/sunrpc/xdr.c
> > > > +++ b/net/sunrpc/xdr.c
> > > > @@ -193,9 +193,6 @@ xdr_inline_pages(struct xdr_buf *xdr,
> > > > unsigned
> > > > int offset,
> > > > 
> > > >         tail->iov_base = buf + offset;
> > > >         tail->iov_len = buflen - offset;
> > > > -       if ((xdr->page_len & 3) == 0)
> > > > -               tail->iov_len -= sizeof(__be32);
> > > > -
> > > >         xdr->buflen += len;
> > > > }
> > > > EXPORT_SYMBOL_GPL(xdr_inline_pages);
> > > > -- 
> > > > 2.28.0
>
Chuck Lever Nov. 23, 2020, 3:47 p.m. UTC | #5
> On Nov 23, 2020, at 10:37 AM, Trond Myklebust <trondmy@hammerspace.com> wrote:
> 
> On Mon, 2020-11-23 at 09:52 -0500, Chuck Lever wrote:
>> 
>> 
>>> On Nov 22, 2020, at 11:29 PM, Trond Myklebust <
>>> trondmy@hammerspace.com> wrote:
>>> 
>>> On Sun, 2020-11-22 at 20:24 -0500, Chuck Lever wrote:
>>>> 
>>>> 
>>>>> On Nov 22, 2020, at 3:52 PM, trondmy@kernel.org wrote:
>>>>> 
>>>>> From: Trond Myklebust <trond.myklebust@hammerspace.com>
>>>>> 
>>>>> True that if the length of the pages[] array is not 4-byte
>>>>> aligned,
>>>>> then
>>>>> we will need to store the padding in the tail, but there is no
>>>>> need
>>>>> to
>>>>> truncate the total buffer length here.
>>>> 
>>>> This description confuses me. The existing code reduces the
>>>> length of
>>>> the tail, not the "total buffer length." And what the removed
>>>> logic
>>>> is
>>>> doing is taking out the length of the XDR pad for the pages array
>>>> when
>>>> it is not expected to be used.
>>> 
>>> Why are we bothering to do that? There is nothing problematic with
>>> just
>>> ignoring this test and leaving the tail length as it is, nor is
>>> there
>>> anything to be gained by applying it.
>> 
>> You are correct that leaving the buffer a little long is not going
>> to harm normal operation. After all, we lived with a wildly over-
>> estimated slack length for years.
>> 
>> The purpose of this code path is to prepare the receive buffer with
>> the memory resources and expected length of the Reply. The series
>> of patches that introduced this particular change was all about
>> ensuring that the estimated length of the reply message was exact.
>> 
>> If the reply message size is overestimated, that moves the end-of-
>> message sentinel that is later set by xdr_init_decode(). We then
>> miss subtle problems like our fixed size estimates are incorrect
>> or a man-in-the-middle is extending the RPC message or the server
>> is malfunctioning.
>> 
>> <scratches chin>
>> 
>> After moving the ->pages pad into ->pages, I'm wondering if you
>> should revert 02ef04e432ba ("NFS: Account for XDR pad of buf->pages")
>> --
>> the maxsz macros don't need to account for the XDR pad of ->pages
>> any more. Then the below hunk makes sense. The patch description
>> still doesn't, though ;-)
>> 
> 
> I don't think it needs to be reverted. I think you are right to include
> the padding in the buffer size that we use to set the value of
> task->tk_rqstp->rq_rcvsize.
> 
> That said, it seems wrong to include that padding as part of the
> 'hdrsize' argument in rpc_prepare_reply_pages(). That just causes
> confusion, because the padding is not part of the header in front of
> the array of pages. It is part of the tail data after the array of
> pages. So I think a cleanup there may be warranted.

Agreed, dealing with the tail size is confusing.


> The other thing that I'm considering is that we may want to optimise to
> avoid setting up an RDMA SEND just for the padding if that is truly the
> last word in the RPC call (it matters less if there is other data that
> requires us to set up such a SEND anyway). Not sure how to do that in a
> clean manner, though. Perhaps we'd have to pass in the padding size as
> a separate argument to xdr_inline_pages() (and also to
> rpc_prepare_reply_pages())?

In the current version of RPC/RDMA, there's always exactly one RDMA
Send per RPC message.

The Linux client implementation is also careful to exclude XDR padding
in both Read and Write chunks because the protocol makes the inclusion
of padding on the wire optional.

The only issue I see is that the upper layer needs to identify to the
transport the exact size of the data item that is being transferred
in a chunk so that the padding can be properly excluded. Currently
rpcrdma makes some assumptions about how the data items are laid out
in the xdr_buf when XDRBUF_READ/WRITE is set.
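
The arithmetic both ends rely on, written as a receiver-side sketch (not
rpcrdma code): the chunk carries only the data item, and the pad, if omitted
on the wire, is regenerated from the data item's length.

static inline unsigned int xdr_len_with_pad(unsigned int data_item_len)
{
	return data_item_len + ((4 - (data_item_len & 3)) & 3);
}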


>> And then you should confirm that we are still getting the receive
>> buffer size estimate right for krb5i and krb5p.
>> 
>> 
>>>>> Signed-off-by: Trond Myklebust
>>>>> <trond.myklebust@hammerspace.com>
>>>>> ---
>>>>> net/sunrpc/xdr.c | 3 ---
>>>>> 1 file changed, 3 deletions(-)
>>>>> 
>>>>> diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
>>>>> index 3ce0a5daa9eb..5a450055469f 100644
>>>>> --- a/net/sunrpc/xdr.c
>>>>> +++ b/net/sunrpc/xdr.c
>>>>> @@ -193,9 +193,6 @@ xdr_inline_pages(struct xdr_buf *xdr,
>>>>> unsigned
>>>>> int offset,
>>>>> 
>>>>>         tail->iov_base = buf + offset;
>>>>>         tail->iov_len = buflen - offset;
>>>>> -       if ((xdr->page_len & 3) == 0)
>>>>> -               tail->iov_len -= sizeof(__be32);
>>>>> -
>>>>>         xdr->buflen += len;
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(xdr_inline_pages);
>>>>> -- 
>>>>> 2.28.0
>> 
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com

--
Chuck Lever
Anna Schumaker Nov. 23, 2020, 5:24 p.m. UTC | #6
Hi Trond,

On Sun, Nov 22, 2020 at 4:07 PM <trondmy@kernel.org> wrote:
>
> From: Trond Myklebust <trond.myklebust@hammerspace.com>
>
> True that if the length of the pages[] array is not 4-byte aligned, then
> we will need to store the padding in the tail, but there is no need to
> truncate the total buffer length here.

Just a heads up: after applying this patch there are a *lot* of
xfstests that fail when run over NFSv4.2 against a server that supports
READ_PLUS.

Anna

>
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
>  net/sunrpc/xdr.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
> index 3ce0a5daa9eb..5a450055469f 100644
> --- a/net/sunrpc/xdr.c
> +++ b/net/sunrpc/xdr.c
> @@ -193,9 +193,6 @@ xdr_inline_pages(struct xdr_buf *xdr, unsigned int offset,
>
>         tail->iov_base = buf + offset;
>         tail->iov_len = buflen - offset;
> -       if ((xdr->page_len & 3) == 0)
> -               tail->iov_len -= sizeof(__be32);
> -
>         xdr->buflen += len;
>  }
>  EXPORT_SYMBOL_GPL(xdr_inline_pages);
> --
> 2.28.0
>

Patch

diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 3ce0a5daa9eb..5a450055469f 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -193,9 +193,6 @@  xdr_inline_pages(struct xdr_buf *xdr, unsigned int offset,
 
 	tail->iov_base = buf + offset;
 	tail->iov_len = buflen - offset;
-	if ((xdr->page_len & 3) == 0)
-		tail->iov_len -= sizeof(__be32);
-
 	xdr->buflen += len;
 }
 EXPORT_SYMBOL_GPL(xdr_inline_pages);