
[v3,3/3] SUNRPC: Ensure we refresh the bvec after RPCSEC_GSS encoding

Message ID 20181130215555.91107-3-trond.myklebust@hammerspace.com (mailing list archive)
State New, archived
Series [v3,1/3] SUNRPC: call_connect_status() must handle tasks that got transmitted

Commit Message

Trond Myklebust Nov. 30, 2018, 9:55 p.m. UTC
A call to gss_wrap_req_priv() will end up replacing the original array
in rqstp->rq_snd_buf.pages with a new one containing the encrypted
data. In order to avoid having the rqstp->rq_snd_buf.bvec point to the
wrong page data, we need to refresh that too.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 include/linux/sunrpc/xdr.h | 1 -
 net/sunrpc/xprt.c          | 2 ++
 net/sunrpc/xprtsock.c      | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)

Comments

Chuck Lever III Nov. 30, 2018, 10:15 p.m. UTC | #1
> On Nov 30, 2018, at 4:55 PM, Trond Myklebust <trondmy@gmail.com> wrote:
> 
> A call to gss_wrap_req_priv() will end up replacing the original array
> in rqstp->rq_snd_buf.pages with a new one containing the encrypted
> data. In order to avoid having the rqstp->rq_snd_buf.bvec point to the
> wrong page data, we need to refresh that too.

I would add a note here that this patch fixes a memory leak. And
you might want to add a Fixes: tag.

I'm trying it out now.


> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
> include/linux/sunrpc/xdr.h | 1 -
> net/sunrpc/xprt.c          | 2 ++
> net/sunrpc/xprtsock.c      | 1 +
> 3 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
> index 43106ffa6788..2ec128060239 100644
> --- a/include/linux/sunrpc/xdr.h
> +++ b/include/linux/sunrpc/xdr.h
> @@ -72,7 +72,6 @@ xdr_buf_init(struct xdr_buf *buf, void *start, size_t len)
> 	buf->head[0].iov_base = start;
> 	buf->head[0].iov_len = len;
> 	buf->tail[0].iov_len = 0;
> -	buf->bvec = NULL;
> 	buf->pages = NULL;
> 	buf->page_len = 0;
> 	buf->flags = 0;
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 86bea4520c4d..122c91c28e7c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -1623,6 +1623,8 @@ xprt_request_init(struct rpc_task *task)
> 	req->rq_snd_buf.buflen = 0;
> 	req->rq_rcv_buf.len = 0;
> 	req->rq_rcv_buf.buflen = 0;
> +	req->rq_snd_buf.bvec = NULL;
> +	req->rq_rcv_buf.bvec = NULL;
> 	req->rq_release_snd_buf = NULL;
> 	xprt_reset_majortimeo(req);
> 	dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index ae77c71c1f64..615ef2397fc5 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
> static void
> xs_stream_prepare_request(struct rpc_rqst *req)
> {
> +	xdr_free_bvec(&req->rq_rcv_buf);
> 	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_NOIO);
> }
> 
> -- 
> 2.19.2
> 

--
Chuck Lever
Trond Myklebust Nov. 30, 2018, 10:18 p.m. UTC | #2
On Fri, 2018-11-30 at 17:15 -0500, Chuck Lever wrote:
> > On Nov 30, 2018, at 4:55 PM, Trond Myklebust <trondmy@gmail.com>
> > wrote:
> > 
> > A call to gss_wrap_req_priv() will end up replacing the original
> > array
> > in rqstp->rq_snd_buf.pages with a new one containing the encrypted
> > data. In order to avoid having the rqstp->rq_snd_buf.bvec point to
> > the
> > wrong page data, we need to refresh that too.
> 
> I would add a note here that this patch fixes a memory leak. And
> you might want to add a Fixes: tag.
> 

It only applies to new code that went into 4.20, so it won't need any
stable backporting.

That said, I'm realising (slowly - apparently I'm asleep today) that
this is receive side code, so

a) The contents of rq_snd_buf are irrelevant.
b) We want to beware of changing it while there is a copy in
rq_private_buf.

IOW: a v4 is forthcoming.

> I'm trying it out now.
> 
> 
> > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > ---
> > include/linux/sunrpc/xdr.h | 1 -
> > net/sunrpc/xprt.c          | 2 ++
> > net/sunrpc/xprtsock.c      | 1 +
> > 3 files changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/sunrpc/xdr.h
> > b/include/linux/sunrpc/xdr.h
> > index 43106ffa6788..2ec128060239 100644
> > --- a/include/linux/sunrpc/xdr.h
> > +++ b/include/linux/sunrpc/xdr.h
> > @@ -72,7 +72,6 @@ xdr_buf_init(struct xdr_buf *buf, void *start,
> > size_t len)
> > 	buf->head[0].iov_base = start;
> > 	buf->head[0].iov_len = len;
> > 	buf->tail[0].iov_len = 0;
> > -	buf->bvec = NULL;
> > 	buf->pages = NULL;
> > 	buf->page_len = 0;
> > 	buf->flags = 0;
> > diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> > index 86bea4520c4d..122c91c28e7c 100644
> > --- a/net/sunrpc/xprt.c
> > +++ b/net/sunrpc/xprt.c
> > @@ -1623,6 +1623,8 @@ xprt_request_init(struct rpc_task *task)
> > 	req->rq_snd_buf.buflen = 0;
> > 	req->rq_rcv_buf.len = 0;
> > 	req->rq_rcv_buf.buflen = 0;
> > +	req->rq_snd_buf.bvec = NULL;
> > +	req->rq_rcv_buf.bvec = NULL;
> > 	req->rq_release_snd_buf = NULL;
> > 	xprt_reset_majortimeo(req);
> > 	dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,
> > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > index ae77c71c1f64..615ef2397fc5 100644
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
> > static void
> > xs_stream_prepare_request(struct rpc_rqst *req)
> > {
> > +	xdr_free_bvec(&req->rq_rcv_buf);
> > 	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf,
> > GFP_NOIO);
> > }
> > 
> > -- 
> > 2.19.2
> > 
> 
> --
> Chuck Lever
> 
> 
>
Chuck Lever III Nov. 30, 2018, 10:23 p.m. UTC | #3
> On Nov 30, 2018, at 5:18 PM, Trond Myklebust <trondmy@hammerspace.com> wrote:
> 
> On Fri, 2018-11-30 at 17:15 -0500, Chuck Lever wrote:
>>> On Nov 30, 2018, at 4:55 PM, Trond Myklebust <trondmy@gmail.com>
>>> wrote:
>>> 
>>> A call to gss_wrap_req_priv() will end up replacing the original
>>> array
>>> in rqstp->rq_snd_buf.pages with a new one containing the encrypted
>>> data. In order to avoid having the rqstp->rq_snd_buf.bvec point to
>>> the
>>> wrong page data, we need to refresh that too.
>> 
>> I would add a note here that this patch fixes a memory leak. And
>> you might want to add a Fixes: tag.
>> 
> 
> It only applies to new code that went into 4.20, so it won't need any
> stable backporting.
> 
> That said, I'm realising (slowly - apparently I'm asleep today) that
> this is receive side code, so
> 
> a) The contents of rq_snd_buf are irrelevant.
> b) We want to beware of changing it while there is a copy in
> rq_private_buf.
> 
> IOW: a v4 is forthcoming.

FWIW, I see the soft IRQ warnings with NFS/RDMA too.

I would like to turn those into a pr_warn_ratelimited(). I don't
see much point in the backtrace blather.


>> I'm trying it out now.
>> 
>> 
>>> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
>>> ---
>>> include/linux/sunrpc/xdr.h | 1 -
>>> net/sunrpc/xprt.c          | 2 ++
>>> net/sunrpc/xprtsock.c      | 1 +
>>> 3 files changed, 3 insertions(+), 1 deletion(-)
>>> 
>>> diff --git a/include/linux/sunrpc/xdr.h
>>> b/include/linux/sunrpc/xdr.h
>>> index 43106ffa6788..2ec128060239 100644
>>> --- a/include/linux/sunrpc/xdr.h
>>> +++ b/include/linux/sunrpc/xdr.h
>>> @@ -72,7 +72,6 @@ xdr_buf_init(struct xdr_buf *buf, void *start,
>>> size_t len)
>>> 	buf->head[0].iov_base = start;
>>> 	buf->head[0].iov_len = len;
>>> 	buf->tail[0].iov_len = 0;
>>> -	buf->bvec = NULL;
>>> 	buf->pages = NULL;
>>> 	buf->page_len = 0;
>>> 	buf->flags = 0;
>>> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
>>> index 86bea4520c4d..122c91c28e7c 100644
>>> --- a/net/sunrpc/xprt.c
>>> +++ b/net/sunrpc/xprt.c
>>> @@ -1623,6 +1623,8 @@ xprt_request_init(struct rpc_task *task)
>>> 	req->rq_snd_buf.buflen = 0;
>>> 	req->rq_rcv_buf.len = 0;
>>> 	req->rq_rcv_buf.buflen = 0;
>>> +	req->rq_snd_buf.bvec = NULL;
>>> +	req->rq_rcv_buf.bvec = NULL;
>>> 	req->rq_release_snd_buf = NULL;
>>> 	xprt_reset_majortimeo(req);
>>> 	dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,
>>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>>> index ae77c71c1f64..615ef2397fc5 100644
>>> --- a/net/sunrpc/xprtsock.c
>>> +++ b/net/sunrpc/xprtsock.c
>>> @@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
>>> static void
>>> xs_stream_prepare_request(struct rpc_rqst *req)
>>> {
>>> +	xdr_free_bvec(&req->rq_rcv_buf);
>>> 	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf,
>>> GFP_NOIO);
>>> }
>>> 
>>> -- 
>>> 2.19.2
>>> 
>> 
>> --
>> Chuck Lever
>> 
>> 
>> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com

--
Chuck Lever
Trond Myklebust Nov. 30, 2018, 10:30 p.m. UTC | #4
On Fri, 2018-11-30 at 17:23 -0500, Chuck Lever wrote:
> > On Nov 30, 2018, at 5:18 PM, Trond Myklebust <
> > trondmy@hammerspace.com> wrote:
> > 
> > On Fri, 2018-11-30 at 17:15 -0500, Chuck Lever wrote:
> > > > On Nov 30, 2018, at 4:55 PM, Trond Myklebust <trondmy@gmail.com>
> > > > wrote:
> > > > 
> > > > A call to gss_wrap_req_priv() will end up replacing the
> > > > original
> > > > array
> > > > in rqstp->rq_snd_buf.pages with a new one containing the
> > > > encrypted
> > > > data. In order to avoid having the rqstp->rq_snd_buf.bvec point
> > > > to
> > > > the
> > > > wrong page data, we need to refresh that too.
> > > 
> > > I would add a note here that this patch fixes a memory leak. And
> > > you might want to add a Fixes: tag.
> > > 
> > 
> > It only applies to new code that went into 4.20, so it won't need
> > any
> > stable backporting.
> > 
> > That said, I'm realising (slowly - apparently I'm asleep today)
> > that
> > this is receive side code, so
> > 
> > a) The contents of rq_snd_buf are irrelevant.
> > b) We want to beware of changing it while there is a copy in
> > rq_private_buf.
> > 
> > IOW: a v4 is forthcoming.
> 
> FWIW, I see the soft IRQ warnings with NFS/RDMA too.
> 
> I would like to turn those into a pr_warn_ratelimited(). I don't
> see much point in the backtrace blather.
> 

Your initial email mentioned these soft IRQ warnings, but didn't
provide an example. Which warnings are these exactly, and where are
they coming from?
Chuck Lever III Nov. 30, 2018, 10:31 p.m. UTC | #5
> On Nov 30, 2018, at 5:30 PM, Trond Myklebust <trondmy@hammerspace.com> wrote:
> 
> On Fri, 2018-11-30 at 17:23 -0500, Chuck Lever wrote:
>>> On Nov 30, 2018, at 5:18 PM, Trond Myklebust <
>>> trondmy@hammerspace.com> wrote:
>>> 
>>> On Fri, 2018-11-30 at 17:15 -0500, Chuck Lever wrote:
>>>>> On Nov 30, 2018, at 4:55 PM, Trond Myklebust <trondmy@gmail.com>
>>>>> wrote:
>>>>> 
>>>>> A call to gss_wrap_req_priv() will end up replacing the
>>>>> original
>>>>> array
>>>>> in rqstp->rq_snd_buf.pages with a new one containing the
>>>>> encrypted
>>>>> data. In order to avoid having the rqstp->rq_snd_buf.bvec point
>>>>> to
>>>>> the
>>>>> wrong page data, we need to refresh that too.
>>>> 
>>>> I would add a note here that this patch fixes a memory leak. And
>>>> you might want to add a Fixes: tag.
>>>> 
>>> 
>>> It only applies to new code that went into 4.20, so it won't need
>>> any
>>> stable backporting.
>>> 
>>> That said, I'm realising (slowly - apparently I'm asleep today)
>>> that
>>> this is receive side code, so
>>> 
>>> a) The contents of rq_snd_buf are irrelevant.
>>> b) We want to beware of changing it while there is a copy in
>>> rq_private_buf.
>>> 
>>> IOW: a v4 is forthcoming.
>> 
>> FWIW, I see the soft IRQ warnings with NFS/RDMA too.
>> 
>> I would like to turn those into a pr_warn_ratelimited(). I don't
>> see much point in the backtrace blather.
>> 
> 
> Your initial email mentioned these soft IRQ warnings, but didn't
> provide an example. Which warnings are these exactly, and where are
> they coming from?

The WARN_ON in call_decode that fires when rq_rcv_buf does not
exactly match rq_private_buf.


--
Chuck Lever
Trond Myklebust Nov. 30, 2018, 10:36 p.m. UTC | #6
On Fri, 2018-11-30 at 17:31 -0500, Chuck Lever wrote:
> > On Nov 30, 2018, at 5:30 PM, Trond Myklebust <
> > trondmy@hammerspace.com> wrote:
> > 
> > On Fri, 2018-11-30 at 17:23 -0500, Chuck Lever wrote:
> > > > On Nov 30, 2018, at 5:18 PM, Trond Myklebust <
> > > > trondmy@hammerspace.com> wrote:
> > > > 
> > > > On Fri, 2018-11-30 at 17:15 -0500, Chuck Lever wrote:
> > > > > > On Nov 30, 2018, at 4:55 PM, Trond Myklebust <trondmy@gmail.com>
> > > > > > wrote:
> > > > > > 
> > > > > > A call to gss_wrap_req_priv() will end up replacing the
> > > > > > original
> > > > > > array
> > > > > > in rqstp->rq_snd_buf.pages with a new one containing the
> > > > > > encrypted
> > > > > > data. In order to avoid having the rqstp->rq_snd_buf.bvec
> > > > > > point
> > > > > > to
> > > > > > the
> > > > > > wrong page data, we need to refresh that too.
> > > > > 
> > > > > I would add a note here that this patch fixes a memory leak.
> > > > > And
> > > > > you might want to add a Fixes: tag.
> > > > > 
> > > > 
> > > > It only applies to new code that went into 4.20, so it won't
> > > > need
> > > > any
> > > > stable backporting.
> > > > 
> > > > That said, I'm realising (slowly - apparently I'm asleep today)
> > > > that
> > > > this is receive side code, so
> > > > 
> > > > a) The contents of rq_snd_buf are irrelevant.
> > > > b) We want to beware of changing it while there is a copy in
> > > > rq_private_buf.
> > > > 
> > > > IOW: a v4 is forthcoming.
> > > 
> > > FWIW, I see the soft IRQ warnings with NFS/RDMA too.
> > > 
> > > I would like to turn those into a pr_warn_ratelimited(). I don't
> > > see much point in the backtrace blather.
> > > 
> > 
> > Your initial email mentioned these soft IRQ warnings, but didn't
> > provide an example. Which warnings are these exactly, and where are
> > they coming from?
> 
> The WARN_ON in call_decode that fires when rq_rcv_buf does not
> exactly match rq_private_buf.
> 

Oh that warning... That might actually be triggering on the bvec value
clobber.

Patch

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index 43106ffa6788..2ec128060239 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -72,7 +72,6 @@  xdr_buf_init(struct xdr_buf *buf, void *start, size_t len)
 	buf->head[0].iov_base = start;
 	buf->head[0].iov_len = len;
 	buf->tail[0].iov_len = 0;
-	buf->bvec = NULL;
 	buf->pages = NULL;
 	buf->page_len = 0;
 	buf->flags = 0;
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 86bea4520c4d..122c91c28e7c 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -1623,6 +1623,8 @@  xprt_request_init(struct rpc_task *task)
 	req->rq_snd_buf.buflen = 0;
 	req->rq_rcv_buf.len = 0;
 	req->rq_rcv_buf.buflen = 0;
+	req->rq_snd_buf.bvec = NULL;
+	req->rq_rcv_buf.bvec = NULL;
 	req->rq_release_snd_buf = NULL;
 	xprt_reset_majortimeo(req);
 	dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ae77c71c1f64..615ef2397fc5 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -843,6 +843,7 @@  static int xs_nospace(struct rpc_rqst *req)
 static void
 xs_stream_prepare_request(struct rpc_rqst *req)
 {
+	xdr_free_bvec(&req->rq_rcv_buf);
 	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_NOIO);
 }