[net] rxrpc: Fix a race between socket set up and I/O thread creation

Message ID 1210177.1727215681@warthog.procyon.org.uk (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series [net] rxrpc: Fix a race between socket set up and I/O thread creation

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 16 this patch: 16
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang success Errors and warnings before: 16 this patch: 16
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 19 this patch: 19
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 42 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-09-26--21-00 (tests: 768)

Commit Message

David Howells Sept. 24, 2024, 10:08 p.m. UTC
rxrpc_open_socket() sets up the socket and then sets up the I/O thread
that will handle it.  This is a problem, however, as there's a gap
between the two phases in which a packet may come into rxrpc_encap_rcv()
from the UDP socket, but we oops when trying to wake the not-yet-created
I/O thread.

As a quick fix, just make rxrpc_encap_rcv() discard the packet if there's
no I/O thread yet.

A better, but more intrusive, fix would perhaps be to rearrange things such
that the socket creation is done by the I/O thread.
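
To illustrate, here is a much-condensed sketch of the two code paths
involved (simplified from net/rxrpc/local_object.c and
net/rxrpc/io_thread.c; names, arguments and error handling are
abbreviated):

	/* rxrpc_open_socket(), condensed: the UDP encap hook is live
	 * before local->io_thread is assigned.
	 */
	setup_udp_tunnel_sock(net, local->socket, &tuncfg); /* rxrpc_encap_rcv() now reachable */
	io_thread = kthread_run(rxrpc_io_thread, local, "krxrpcio/%u", nr);
	wait_for_completion(&local->io_thread_ready);
	local->io_thread = io_thread;	/* a packet arriving before this line... */

	/* rxrpc_encap_rcv(), condensed: */
	skb_queue_tail(rx_queue, skb);
	rxrpc_wake_up_io_thread(local);	/* ...makes this wake_up_process(NULL): oops */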

Fixes: a275da62e8c1 ("rxrpc: Create a per-local endpoint receive queue and I/O thread")
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
cc: yuxuanzhe@outlook.com
cc: Marc Dionne <marc.dionne@auristor.com>
cc: "David S. Miller" <davem@davemloft.net>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: linux-afs@lists.infradead.org
cc: netdev@vger.kernel.org
---
 net/rxrpc/ar-internal.h  |    2 +-
 net/rxrpc/io_thread.c    |    7 ++++---
 net/rxrpc/local_object.c |    2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

Comments

Simon Horman Sept. 25, 2024, 6:33 p.m. UTC | #1
On Tue, Sep 24, 2024 at 11:08:01PM +0100, David Howells wrote:
> rxrpc_open_socket() sets up the socket and then sets up the I/O thread
> that will handle it.  This is a problem, however, as there's a gap
> between the two phases in which a packet may come into rxrpc_encap_rcv()
> from the UDP socket, but we oops when trying to wake the not-yet-created
> I/O thread.
> 
> As a quick fix, just make rxrpc_encap_rcv() discard the packet if there's
> no I/O thread yet.
> 
> A better, but more intrusive, fix would perhaps be to rearrange things such
> that the socket creation is done by the I/O thread.
> 
> Fixes: a275da62e8c1 ("rxrpc: Create a per-local endpoint receive queue and I/O thread")
> Signed-off-by: David Howells <dhowells@redhat.com>
> Reviewed-by: Eric Dumazet <edumazet@google.com>

...

> diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
> index 0300baa9afcd..5c0a5374d51a 100644
> --- a/net/rxrpc/io_thread.c
> +++ b/net/rxrpc/io_thread.c
> @@ -27,8 +27,9 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
>  {
>  	struct sk_buff_head *rx_queue;
>  	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
> +	struct task_struct *io_thread = READ_ONCE(local->io_thread);

Hi David,

The line above dereferences local.
But the line below assumes that it may be NULL.
This seems inconsistent.

Flagged by Smatch.

>  
> -	if (unlikely(!local)) {
> +	if (unlikely(!local || !io_thread)) {
>  		kfree_skb(skb);
>  		return 0;
>  	}
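
A minimal sketch of one way to avoid that ordering problem, checking
'local' before it is dereferenced (illustrative only, not a patch from
this thread):

	int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
	{
		struct sk_buff_head *rx_queue;
		struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
		struct task_struct *io_thread;

		if (unlikely(!local)) {
			kfree_skb(skb);
			return 0;
		}

		/* Only dereference local once we know it is non-NULL. */
		io_thread = READ_ONCE(local->io_thread);
		if (unlikely(!io_thread)) {
			kfree_skb(skb);
			return 0;
		}

		rx_queue = &local->rx_queue;	/* queue selection simplified here */
		skb_queue_tail(rx_queue, skb);
		wake_up_process(io_thread);
		return 0;
	}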
Paolo Abeni Oct. 1, 2024, 10:53 a.m. UTC | #2
On 9/25/24 20:33, Simon Horman wrote:
> On Tue, Sep 24, 2024 at 11:08:01PM +0100, David Howells wrote:
>> rxrpc_open_socket() sets up the socket and then sets up the I/O thread
>> that will handle it.  This is a problem, however, as there's a gap
>> between the two phases in which a packet may come into rxrpc_encap_rcv()
>> from the UDP socket, but we oops when trying to wake the not-yet-created
>> I/O thread.
>>
>> As a quick fix, just make rxrpc_encap_rcv() discard the packet if there's
>> no I/O thread yet.
>>
>> A better, but more intrusive, fix would perhaps be to rearrange things such
>> that the socket creation is done by the I/O thread.
>>
>> Fixes: a275da62e8c1 ("rxrpc: Create a per-local endpoint receive queue and I/O thread")
>> Signed-off-by: David Howells <dhowells@redhat.com>
>> Reviewed-by: Eric Dumazet <edumazet@google.com>
> 
> ...
> 
>> diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
>> index 0300baa9afcd..5c0a5374d51a 100644
>> --- a/net/rxrpc/io_thread.c
>> +++ b/net/rxrpc/io_thread.c
>> @@ -27,8 +27,9 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
>>   {
>>   	struct sk_buff_head *rx_queue;
>>   	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
>> +	struct task_struct *io_thread = READ_ONCE(local->io_thread);
> 
> Hi David,
> 
> The line above dereferences local.
> But the line below assumes that it may be NULL.
> This seems inconsistent.

sk->sk_user_data is cleared just before io_thread by rxrpc_io_thread(),
so I think it should be possible to observe a NULL 'local' here.

@David, could you please respin addressing the above?

Thanks!

Paolo
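
Spelled out, the interleaving described above looks roughly like this
(ordering per the rxrpc_io_thread() hunk below; it is assumed here that
rxrpc_destroy_local() is where sk->sk_user_data gets cleared):

	/*
	 * I/O thread exiting                   rxrpc_encap_rcv() on another CPU
	 * ==================                   ================================
	 * rxrpc_destroy_local(local);
	 *   clears sk->sk_user_data
	 *                                      local = rcu_dereference_sk_user_data(udp_sk);
	 *                                        => local == NULL
	 *                                      io_thread = READ_ONCE(local->io_thread);
	 *                                        => NULL pointer dereference
	 * WRITE_ONCE(local->io_thread, NULL);
	 */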

Patch

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 80d682f89b23..d0fd37bdcfe9 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -1056,7 +1056,7 @@ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
 int rxrpc_io_thread(void *data);
 static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
 {
-	wake_up_process(local->io_thread);
+	wake_up_process(READ_ONCE(local->io_thread));
 }
 
 static inline bool rxrpc_protocol_error(struct sk_buff *skb, enum rxrpc_abort_reason why)
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 0300baa9afcd..5c0a5374d51a 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -27,8 +27,9 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
 {
 	struct sk_buff_head *rx_queue;
 	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
+	struct task_struct *io_thread = READ_ONCE(local->io_thread);
 
-	if (unlikely(!local)) {
+	if (unlikely(!local || !io_thread)) {
 		kfree_skb(skb);
 		return 0;
 	}
@@ -47,7 +48,7 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
 #endif
 
 	skb_queue_tail(rx_queue, skb);
-	rxrpc_wake_up_io_thread(local);
+	wake_up_process(io_thread);
 	return 0;
 }
 
@@ -565,7 +566,7 @@ int rxrpc_io_thread(void *data)
 	__set_current_state(TASK_RUNNING);
 	rxrpc_see_local(local, rxrpc_local_stop);
 	rxrpc_destroy_local(local);
-	local->io_thread = NULL;
+	WRITE_ONCE(local->io_thread, NULL);
 	rxrpc_see_local(local, rxrpc_local_stopped);
 	return 0;
 }
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 504453c688d7..f9623ace2201 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -232,7 +232,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
 	}
 
 	wait_for_completion(&local->io_thread_ready);
-	local->io_thread = io_thread;
+	WRITE_ONCE(local->io_thread, io_thread);
 	_leave(" = 0");
 	return 0;