
[v2] xprtrdma: treat all calls not a bcall when bc_serv is NULL

Message ID 906dd00c-7999-4d36-0cf1-155ceb595ba9@gmail.com (mailing list archive)
State New, archived
Series [v2] xprtrdma: treat all calls not a bcall when bc_serv is NULL

Commit Message

Kinglong Mee May 22, 2022, 12:36 p.m. UTC
When an RDMA server returns a fault-format reply, the NFSv3 client may
treat it as a bcall even though no bc service exists.

The debug messages at rpcrdma_bc_receive_call are:

[56579.837169] RPC:       rpcrdma_bc_receive_call: callback XID 00000001, length=20
[56579.837174] RPC:       rpcrdma_bc_receive_call: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04

After that, rpcrdma_bc_receive_call hits a NULL pointer dereference:

[  226.057890] BUG: unable to handle kernel NULL pointer dereference at 00000000000000c8
...
[  226.058704] RIP: 0010:_raw_spin_lock+0xc/0x20
...
[  226.059732] Call Trace:
[  226.059878]  rpcrdma_bc_receive_call+0x138/0x327 [rpcrdma]
[  226.060011]  __ib_process_cq+0x89/0x170 [ib_core]
[  226.060092]  ib_cq_poll_work+0x26/0x80 [ib_core]
[  226.060257]  process_one_work+0x1a7/0x360
[  226.060367]  ? create_worker+0x1a0/0x1a0
[  226.060440]  worker_thread+0x30/0x390
[  226.060500]  ? create_worker+0x1a0/0x1a0
[  226.060574]  kthread+0x116/0x130
[  226.060661]  ? kthread_flush_work_fn+0x10/0x10
[  226.060724]  ret_from_fork+0x35/0x40
...

Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
---
  net/sunrpc/xprtrdma/rpc_rdma.c | 5 +++++
  1 file changed, 5 insertions(+)
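
To illustrate the failure mode: the ONC RPC msg_type value for a call is 0, so a
word of zeroes in a truncated or fault-format reply (note the second word of the
hex dump above) can satisfy the RPC_CALL test in rpcrdma_is_bcall. Below is a
minimal userspace model of that decision and of the new guard; it is a sketch
under assumptions, not the kernel implementation, and looks_like_bcall,
have_bc_serv, and the fixed header offsets are illustrative names only.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

#define RPC_CALL 0	/* ONC RPC msg_type value for a forward call */

static bool looks_like_bcall(const uint8_t *msg, size_t len, bool have_bc_serv)
{
	uint32_t msg_type;

	if (len < 8)			/* need at least xid + msg_type */
		return false;
	memcpy(&msg_type, msg + 4, sizeof(msg_type));
	if (ntohl(msg_type) != RPC_CALL)	/* a genuine reply carries RPC_REPLY here */
		return false;
	if (!have_bc_serv)		/* the guard this patch adds */
		return false;
	return true;
}

int main(void)
{
	/* First 8 bytes of the hex dump above: xid 00000001 followed by a
	 * zero word, which happens to equal RPC_CALL. */
	const uint8_t bad_reply[8] = { 0x00, 0x00, 0x00, 0x01,
				       0x00, 0x00, 0x00, 0x00 };

	printf("bc service configured:   %d\n",
	       looks_like_bcall(bad_reply, sizeof(bad_reply), true));
	printf("no bc service (guarded): %d\n",
	       looks_like_bcall(bad_reply, sizeof(bad_reply), false));
	return 0;
}

Running it prints 1 for the case with a backchannel service configured and 0
once the guard applies, mirroring how rpcrdma_is_bcall now returns false when
xprt->bc_serv is NULL.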


Comments

Chuck Lever May 22, 2022, 3:52 p.m. UTC | #1
> On May 22, 2022, at 8:36 AM, Kinglong Mee <kinglongmee@gmail.com> wrote:
> 
> When an RDMA server returns a fault-format reply, the NFSv3 client may
> treat it as a bcall even though no bc service exists.
> 
> The debug messages at rpcrdma_bc_receive_call are:
> 
> [56579.837169] RPC:       rpcrdma_bc_receive_call: callback XID 00000001, length=20
> [56579.837174] RPC:       rpcrdma_bc_receive_call: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04
> 
> After that, rpcrdma_bc_receive_call hits a NULL pointer dereference:
> 
> [  226.057890] BUG: unable to handle kernel NULL pointer dereference at 00000000000000c8
> ...
> [  226.058704] RIP: 0010:_raw_spin_lock+0xc/0x20
> ...
> [  226.059732] Call Trace:
> [  226.059878]  rpcrdma_bc_receive_call+0x138/0x327 [rpcrdma]
> [  226.060011]  __ib_process_cq+0x89/0x170 [ib_core]
> [  226.060092]  ib_cq_poll_work+0x26/0x80 [ib_core]
> [  226.060257]  process_one_work+0x1a7/0x360
> [  226.060367]  ? create_worker+0x1a0/0x1a0
> [  226.060440]  worker_thread+0x30/0x390
> [  226.060500]  ? create_worker+0x1a0/0x1a0
> [  226.060574]  kthread+0x116/0x130
> [  226.060661]  ? kthread_flush_work_fn+0x10/0x10
> [  226.060724]  ret_from_fork+0x35/0x40
> ...
> 
> Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>

Patch is good, Reviewed-by: Chuck Lever <chuck.lever@oracle.com>

Commentary (no changes to v2 needed):

The description still suggests we are adapting the client to
work around a broken server. I don't feel this is the right
reason to accept this change. Instead: we really don't want
the client to be vulnerable to bad input for any reason.
After this fix is applied, bad RPC messages are eventually
dropped rather than processed, which is the desired behavior.

Anna, I recommend adding Cc: stable for this fix.

Kinglong, please ensure that client Receive resources are not
leaked in this case. If they are, send along an additional
patch; if not, let us know and we can close this issue. Thank
you!
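
For reference, a rough model of the disposition described above (a sketch only,
not the kernel's reply handler; the enum and parameter names are illustrative):
a message is handled as a backchannel call only when its header says CALL and a
backchannel service exists; anything else falls through to ordinary XID
matching, and an unmatched message is dropped.

#include <stdbool.h>

/* Sketch only: the enum, function, and parameter names below are
 * illustrative, not kernel identifiers. */
enum disposition { HANDLE_BCALL, MATCH_REPLY, DROP };

static enum disposition classify(bool hdr_says_call, bool have_bc_serv,
				 bool xid_matches_outstanding_rqst)
{
	if (hdr_says_call && have_bc_serv)
		return HANDLE_BCALL;	/* hand the call to the backchannel */
	if (xid_matches_outstanding_rqst)
		return MATCH_REPLY;	/* normal forward-channel reply */
	return DROP;			/* bad input is discarded, not processed */
}

With the v2 check in place, the backchannel branch is unreachable when
xprt->bc_serv is NULL, so a bogus message ends up in the DROP case.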


> ---
> net/sunrpc/xprtrdma/rpc_rdma.c | 5 +++++
> 1 file changed, 5 insertions(+)
> 
> diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
> index 281ddb87ac8d..190a4de239c8 100644
> --- a/net/sunrpc/xprtrdma/rpc_rdma.c
> +++ b/net/sunrpc/xprtrdma/rpc_rdma.c
> @@ -1121,6 +1121,7 @@ static bool
> rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
> #if defined(CONFIG_SUNRPC_BACKCHANNEL)
> {
> +	struct rpc_xprt *xprt = &r_xprt->rx_xprt;
> 	struct xdr_stream *xdr = &rep->rr_stream;
> 	__be32 *p;
> 
> @@ -1144,6 +1145,10 @@ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
> 	if (*p != cpu_to_be32(RPC_CALL))
> 		return false;
> 
> +	/* No bc service. */
> +	if (xprt->bc_serv == NULL)
> +		return false;
> +
> 	/* Now that we are sure this is a backchannel call,
> 	 * advance to the RPC header.
> 	 */
> -- 
> 2.27.0
> 

--
Chuck Lever
Kinglong Mee May 23, 2022, 1:39 a.m. UTC | #2
On 2022/5/22 11:52 PM, Chuck Lever III wrote:
> 
> 
>> On May 22, 2022, at 8:36 AM, Kinglong Mee <kinglongmee@gmail.com> wrote:
>>
>> When an RDMA server returns a fault-format reply, the NFSv3 client may
>> treat it as a bcall even though no bc service exists.
>>
>> The debug messages at rpcrdma_bc_receive_call are:
>>
>> [56579.837169] RPC:       rpcrdma_bc_receive_call: callback XID 00000001, length=20
>> [56579.837174] RPC:       rpcrdma_bc_receive_call: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04
>>
>> After that, rpcrdma_bc_receive_call hits a NULL pointer dereference:
>>
>> [  226.057890] BUG: unable to handle kernel NULL pointer dereference at 00000000000000c8
>> ...
>> [  226.058704] RIP: 0010:_raw_spin_lock+0xc/0x20
>> ...
>> [  226.059732] Call Trace:
>> [  226.059878]  rpcrdma_bc_receive_call+0x138/0x327 [rpcrdma]
>> [  226.060011]  __ib_process_cq+0x89/0x170 [ib_core]
>> [  226.060092]  ib_cq_poll_work+0x26/0x80 [ib_core]
>> [  226.060257]  process_one_work+0x1a7/0x360
>> [  226.060367]  ? create_worker+0x1a0/0x1a0
>> [  226.060440]  worker_thread+0x30/0x390
>> [  226.060500]  ? create_worker+0x1a0/0x1a0
>> [  226.060574]  kthread+0x116/0x130
>> [  226.060661]  ? kthread_flush_work_fn+0x10/0x10
>> [  226.060724]  ret_from_fork+0x35/0x40
>> ...
>>
>> Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
> 
> Patch is good, Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
> 
> Commentary (no changes to v2 needed):
> 
> The description still suggests we are adapting the client to
> work around a broken server. I don't feel this is the right
> reason to accept this change. Instead: we really don't want
> the client to be vulnerable to bad input for any reason.
> After this fix is applied, bad RPC messages are eventually
> dropped rather than processed, which is the desired behavior.
> 
> Anna, I recommend adding Cc: stable for this fix.
> 
> Kinglong, please ensure that client Receive resources are not
> leaked in this case. If they are, send along an additional
> patch; if not, let us know and we can close this issue. 

I have double-checked the code.
I don't think client Receive resources are leaked here.
We can close it.

Thanks,
Kinglong Mee

> 
> 
>> ---
>> net/sunrpc/xprtrdma/rpc_rdma.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
>> index 281ddb87ac8d..190a4de239c8 100644
>> --- a/net/sunrpc/xprtrdma/rpc_rdma.c
>> +++ b/net/sunrpc/xprtrdma/rpc_rdma.c
>> @@ -1121,6 +1121,7 @@ static bool
>> rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
>> #if defined(CONFIG_SUNRPC_BACKCHANNEL)
>> {
>> +	struct rpc_xprt *xprt = &r_xprt->rx_xprt;
>> 	struct xdr_stream *xdr = &rep->rr_stream;
>> 	__be32 *p;
>>
>> @@ -1144,6 +1145,10 @@ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
>> 	if (*p != cpu_to_be32(RPC_CALL))
>> 		return false;
>>
>> +	/* No bc service. */
>> +	if (xprt->bc_serv == NULL)
>> +		return false;
>> +
>> 	/* Now that we are sure this is a backchannel call,
>> 	 * advance to the RPC header.
>> 	 */
>> -- 
>> 2.27.0
>>
> 
> --
> Chuck Lever
> 
> 
>

Patch

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 281ddb87ac8d..190a4de239c8 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1121,6 +1121,7 @@  static bool
  rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
  #if defined(CONFIG_SUNRPC_BACKCHANNEL)
  {
+	struct rpc_xprt *xprt = &r_xprt->rx_xprt;
  	struct xdr_stream *xdr = &rep->rr_stream;
  	__be32 *p;

@@ -1144,6 +1145,10 @@  rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
  	if (*p != cpu_to_be32(RPC_CALL))
  		return false;

+	/* No bc service. */
+	if (xprt->bc_serv == NULL)
+		return false;
+
  	/* Now that we are sure this is a backchannel call,
  	 * advance to the RPC header.