
[v2,net] tcp/dccp: Don't use timer_pending() in reqsk_queue_unlink().

Message ID 20241009174226.7738-1-kuniyu@amazon.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Headers show
Series [v2,net] tcp/dccp: Don't use timer_pending() in reqsk_queue_unlink().

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 6 this patch: 6
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 1 maintainers not CCed: bpf@vger.kernel.org
netdev/build_clang success Errors and warnings before: 6 this patch: 6
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 8 this patch: 8
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 50 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 2 this patch: 2
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-10-10--09-00 (tests: 775)

Commit Message

Kuniyuki Iwashima Oct. 9, 2024, 5:42 p.m. UTC
Martin KaFai Lau reported a use-after-free [0] in reqsk_timer_handler().

  """
  We are seeing a use-after-free from a bpf prog attached to
  trace_tcp_retransmit_synack. The program passes the req->sk to the
  bpf_sk_storage_get_tracing kernel helper which does check for null
  before using it.
  """

Commit 83fccfc3940c ("inet: fix potential deadlock in
reqsk_queue_unlink()") added a timer_pending() check in reqsk_queue_unlink()
so that del_timer_sync() is not called from reqsk_timer_handler(), but it
introduced a small race window.

Before the timer handler is invoked, expire_timers() calls
detach_timer(timer, true), which clears timer->entry.pprev and marks the
timer as not pending.

If reqsk_queue_unlink() checks timer_pending() just after expire_timers()
has called detach_timer(), TCP will miss del_timer_sync(); the reqsk timer
will keep running and sending SYN+ACKs until it expires.
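
For reference, this is the racy check removed by this patch (quoted from
reqsk_queue_unlink(); see the diff below):

  /* Once detach_timer() has cleared the pending state, timer_pending()
   * returns false here, del_timer_sync() is skipped, and the handler
   * keeps its reference and keeps re-arming the timer.
   */
  if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
          reqsk_put(req);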

The reported UAF could happen if req->sk is close()d earlier than the timer
expiration, which is 63s by default (with the default
net.ipv4.tcp_synack_retries of 5 and an initial 1s timeout, the exponential
backoff sums to 1 + 2 + 4 + 8 + 16 + 32 = 63s).

The scenario would be

  1. inet_csk_complete_hashdance() calls inet_csk_reqsk_queue_drop(),
     but del_timer_sync() is missed

  2. reqsk timer is executed and scheduled again

  3. req->sk is accept()ed and reqsk_put() decrements rsk_refcnt, but
     the reqsk timer still holds another reference, and inet_csk_accept()
     does not clear req->sk for non-TFO sockets

  4. sk is close()d

  5. reqsk timer is executed again, and BPF touches req->sk

Instead of relying on timer_pending(), let's pass the caller context to
__inet_csk_reqsk_queue_drop().
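
The resulting helper, as in the diff below, drops the timer_pending()
check entirely and lets the timer path skip the synchronous delete:

  static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
                                          struct request_sock *req,
                                          bool from_timer)
  {
          bool unlinked = reqsk_queue_unlink(req);

          if (unlinked) {
                  reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
                  reqsk_put(req);
          }

          /* Only non-timer callers stop the timer synchronously;
           * reqsk_timer_handler() passes from_timer == true and skips this.
           */
          if (!from_timer && timer_delete_sync(&req->rsk_timer))
                  reqsk_put(req);

          return unlinked;
  }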

Note that the reqsk timer is pinned, so del_timer_sync() should not be
missed in most use cases.

[0]
BUG: KFENCE: use-after-free read in bpf_sk_storage_get_tracing+0x2e/0x1b0

Use-after-free read at 0x00000000a891fb3a (in kfence-#1):
bpf_sk_storage_get_tracing+0x2e/0x1b0
bpf_prog_5ea3e95db6da0438_tcp_retransmit_synack+0x1d20/0x1dda
bpf_trace_run2+0x4c/0xc0
tcp_rtx_synack+0xf9/0x100
reqsk_timer_handler+0xda/0x3d0
run_timer_softirq+0x292/0x8a0
irq_exit_rcu+0xf5/0x320
sysvec_apic_timer_interrupt+0x6d/0x80
asm_sysvec_apic_timer_interrupt+0x16/0x20
intel_idle_irq+0x5a/0xa0
cpuidle_enter_state+0x94/0x273
cpu_startup_entry+0x15e/0x260
start_secondary+0x8a/0x90
secondary_startup_64_no_verify+0xfa/0xfb

kfence-#1: 0x00000000a72cc7b6-0x00000000d97616d9, size=2376, cache=TCPv6

allocated by task 0 on cpu 9 at 260507.901592s:
sk_prot_alloc+0x35/0x140
sk_clone_lock+0x1f/0x3f0
inet_csk_clone_lock+0x15/0x160
tcp_create_openreq_child+0x1f/0x410
tcp_v6_syn_recv_sock+0x1da/0x700
tcp_check_req+0x1fb/0x510
tcp_v6_rcv+0x98b/0x1420
ipv6_list_rcv+0x2258/0x26e0
napi_complete_done+0x5b1/0x2990
mlx5e_napi_poll+0x2ae/0x8d0
net_rx_action+0x13e/0x590
irq_exit_rcu+0xf5/0x320
common_interrupt+0x80/0x90
asm_common_interrupt+0x22/0x40
cpuidle_enter_state+0xfb/0x273
cpu_startup_entry+0x15e/0x260
start_secondary+0x8a/0x90
secondary_startup_64_no_verify+0xfa/0xfb

freed by task 0 on cpu 9 at 260507.927527s:
rcu_core_si+0x4ff/0xf10
irq_exit_rcu+0xf5/0x320
sysvec_apic_timer_interrupt+0x6d/0x80
asm_sysvec_apic_timer_interrupt+0x16/0x20
cpuidle_enter_state+0xfb/0x273
cpu_startup_entry+0x15e/0x260
start_secondary+0x8a/0x90
secondary_startup_64_no_verify+0xfa/0xfb

Fixes: 83fccfc3940c ("inet: fix potential deadlock in reqsk_queue_unlink()")
Reported-by: Martin KaFai Lau <martin.lau@kernel.org>
Closes: https://lore.kernel.org/netdev/eb6684d0-ffd9-4bdc-9196-33f690c25824@linux.dev/
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
v2:
  * Added issue scenario in changelog
  * Pass the correct reqsk to __inet_csk_reqsk_queue_drop()

v1: https://lore.kernel.org/netdev/20241007141557.14424-1-kuniyu@amazon.com/
---
 net/ipv4/inet_connection_sock.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

Comments

Martin KaFai Lau Oct. 10, 2024, 5:46 a.m. UTC | #1
On 10/9/24 10:42 AM, Kuniyuki Iwashima wrote:
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index 2c5632d4fddb..23cff5278a64 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -1045,12 +1045,13 @@ static bool reqsk_queue_unlink(struct request_sock *req)
>   		found = __sk_nulls_del_node_init_rcu(sk);
>   		spin_unlock(lock);
>   	}
> -	if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
> -		reqsk_put(req);
> +
>   	return found;
>   }
>   
> -bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
> +static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
> +					struct request_sock *req,
> +					bool from_timer)
>   {
>   	bool unlinked = reqsk_queue_unlink(req);
>   
> @@ -1058,8 +1059,17 @@ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
>   		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
>   		reqsk_put(req);
>   	}
> +
> +	if (!from_timer && timer_delete_sync(&req->rsk_timer))

timer_delete_sync() is now done after the above reqsk_queue_removed().
The reqsk_timer_handler() may do the "req->num_timeout++" while the above 
reqsk_queue_removed() needs to check for req->num_timeout. Would it race?
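
For context, reqsk_queue_removed() keys off req->num_timeout (roughly, from
include/net/request_sock.h):

  static inline void reqsk_queue_removed(struct request_sock_queue *queue,
                                         const struct request_sock *req)
  {
          /* The "young" accounting assumes num_timeout has not been bumped
           * yet; a still-running reqsk_timer_handler() could increment it
           * before this point.
           */
          if (req->num_timeout == 0)
                  atomic_dec(&queue->young);
          atomic_dec(&queue->qlen);
  }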

Others lgtm. Thanks for the patch.

> +		reqsk_put(req);
> +
>   	return unlinked;
>   }
Kuniyuki Iwashima Oct. 10, 2024, 5:36 p.m. UTC | #2
From: Martin KaFai Lau <martin.lau@linux.dev>
Date: Wed, 9 Oct 2024 22:46:57 -0700
> On 10/9/24 10:42 AM, Kuniyuki Iwashima wrote:
> > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > index 2c5632d4fddb..23cff5278a64 100644
> > --- a/net/ipv4/inet_connection_sock.c
> > +++ b/net/ipv4/inet_connection_sock.c
> > @@ -1045,12 +1045,13 @@ static bool reqsk_queue_unlink(struct request_sock *req)
> >   		found = __sk_nulls_del_node_init_rcu(sk);
> >   		spin_unlock(lock);
> >   	}
> > -	if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
> > -		reqsk_put(req);
> > +
> >   	return found;
> >   }
> >   
> > -bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
> > +static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
> > +					struct request_sock *req,
> > +					bool from_timer)
> >   {
> >   	bool unlinked = reqsk_queue_unlink(req);
> >   
> > @@ -1058,8 +1059,17 @@ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
> >   		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
> >   		reqsk_put(req);
> >   	}
> > +
> > +	if (!from_timer && timer_delete_sync(&req->rsk_timer))
> 
> timer_delete_sync() is now done after the above reqsk_queue_removed().
> The reqsk_timer_handler() may do the "req->num_timeout++" while the above 
> reqsk_queue_removed() needs to check for req->num_timeout. Would it race?

Ah thanks!
I moved it for better @unlinked access, but will move it above.

Btw, do you have any hint why the connection was processed on a different
CPU, not the one where the reqsk timer was pinned?
Martin KaFai Lau Oct. 12, 2024, 4:13 a.m. UTC | #3
On 10/10/24 10:36 AM, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau <martin.lau@linux.dev>
> Date: Wed, 9 Oct 2024 22:46:57 -0700
>> On 10/9/24 10:42 AM, Kuniyuki Iwashima wrote:
>>> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
>>> index 2c5632d4fddb..23cff5278a64 100644
>>> --- a/net/ipv4/inet_connection_sock.c
>>> +++ b/net/ipv4/inet_connection_sock.c
>>> @@ -1045,12 +1045,13 @@ static bool reqsk_queue_unlink(struct request_sock *req)
>>>    		found = __sk_nulls_del_node_init_rcu(sk);
>>>    		spin_unlock(lock);
>>>    	}
>>> -	if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
>>> -		reqsk_put(req);
>>> +
>>>    	return found;
>>>    }
>>>    
>>> -bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
>>> +static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
>>> +					struct request_sock *req,
>>> +					bool from_timer)
>>>    {
>>>    	bool unlinked = reqsk_queue_unlink(req);
>>>    
>>> @@ -1058,8 +1059,17 @@ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
>>>    		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
>>>    		reqsk_put(req);
>>>    	}
>>> +
>>> +	if (!from_timer && timer_delete_sync(&req->rsk_timer))
>>
>> timer_delete_sync() is now done after the above reqsk_queue_removed().
>> The reqsk_timer_handler() may do the "req->num_timeout++" while the above
>> reqsk_queue_removed() needs to check for req->num_timeout. Would it race?
> 
> Ah thanks!
> I moved it for better @unlinked access, but will move it above.
> 
> Btw, do you have any hint why the connection was processed on a different
> CPU, not the one where the reqsk timer was pinned?

Just saw this after replying on v1. I don't know what exactly caused this. I am
only aware that we have a recent steering test that exercises different packet
steering setups.

[ I had some email client issues, so the reply ordering has been wrong :( ]

Patch

diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 2c5632d4fddb..23cff5278a64 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1045,12 +1045,13 @@  static bool reqsk_queue_unlink(struct request_sock *req)
 		found = __sk_nulls_del_node_init_rcu(sk);
 		spin_unlock(lock);
 	}
-	if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
-		reqsk_put(req);
+
 	return found;
 }
 
-bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
+static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
+					struct request_sock *req,
+					bool from_timer)
 {
 	bool unlinked = reqsk_queue_unlink(req);
 
@@ -1058,8 +1059,17 @@  bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
 		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
 		reqsk_put(req);
 	}
+
+	if (!from_timer && timer_delete_sync(&req->rsk_timer))
+		reqsk_put(req);
+
 	return unlinked;
 }
+
+bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
+{
+	return __inet_csk_reqsk_queue_drop(sk, req, false);
+}
 EXPORT_SYMBOL(inet_csk_reqsk_queue_drop);
 
 void inet_csk_reqsk_queue_drop_and_put(struct sock *sk, struct request_sock *req)
@@ -1152,7 +1162,7 @@  static void reqsk_timer_handler(struct timer_list *t)
 
 		if (!inet_ehash_insert(req_to_sk(nreq), req_to_sk(oreq), NULL)) {
 			/* delete timer */
-			inet_csk_reqsk_queue_drop(sk_listener, nreq);
+			__inet_csk_reqsk_queue_drop(sk_listener, nreq, true);
 			goto no_ownership;
 		}
 
@@ -1178,7 +1188,8 @@  static void reqsk_timer_handler(struct timer_list *t)
 	}
 
 drop:
-	inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq);
+	__inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
+	reqsk_put(req);
 }
 
 static bool reqsk_queue_hash_req(struct request_sock *req,