
[v2,net] tcp: avoid the lookup process failing to get sk in ehash table

Message ID 20230114132705.78400-1-kerneljasonxing@gmail.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [v2,net] tcp: avoid the lookup process failing to get sk in ehash table

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net
netdev/fixes_present success Fixes tag present in non-next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2 this patch: 2
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang success Errors and warnings before: 1 this patch: 1
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 2 this patch: 2
netdev/checkpatch warning CHECK: Alignment should match open parenthesis WARNING: line length of 81 exceeds 80 columns WARNING: line length of 82 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Jason Xing Jan. 14, 2023, 1:27 p.m. UTC
From: Jason Xing <kernelxing@tencent.com>

While one CPU is looking up the right socket in the ehash table,
another CPU may have already deleted the request socket and be about
to add (or be adding) the full socket to the table. This means the
lookup could miss both of them, even though the chance is small.

Let me draw a call trace map of the server side.
   CPU 0                           CPU 1
   -----                           -----
tcp_v4_rcv()                  syn_recv_sock()
                            inet_ehash_insert()
                            -> sk_nulls_del_node_init_rcu(osk)
__inet_lookup_established()
                            -> __sk_nulls_add_node_rcu(sk, list)

Notice that CPU 0 is receiving the data sent after the final ACK of
the 3-way handshake, while CPU 1 is still handling that final ACK.

Why could this be a real problem?
It happens only when the final ACK and the first data segment are
received by different CPUs. The server, receiving the data segment
with the ACK flag set, tries to find the established socket in the
ehash table, but fails as the map above shows. It then falls back to
a listener socket and sends a RST, because the skb (data) carries an
ACK flag, per the RST rules of RFC 793.
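
To make the window concrete, here is a stripped-down sketch of the
reader side, modeled on __inet_lookup_established(). The function name
is only illustrative, and the real code also matches addresses/ports
(inet_match()) and takes a reference on the socket:

#include <net/inet_hashtables.h>
#include <net/sock.h>

/* Sketch only: if osk has already been unlinked and sk is not yet
 * linked, the walk below ends on this slot's nulls marker and the
 * lookup returns NULL, so the caller falls back to the listener.
 */
static struct sock *ehash_lookup_sketch(struct inet_hashinfo *hashinfo,
					unsigned int hash)
{
	unsigned int slot = hash & hashinfo->ehash_mask;
	struct inet_ehash_bucket *head = &hashinfo->ehash[slot];
	const struct hlist_nulls_node *node;
	struct sock *sk;

begin:
	sk_nulls_for_each_rcu(sk, node, &head->chain) {
		if (sk->sk_hash == hash)	/* real code also checks inet_match() */
			return sk;
	}
	/* Restart only if the walk drifted onto another chain; a nulls
	 * value matching this slot means the bucket was scanned in full.
	 */
	if (get_nulls_value(node) != slot)
		goto begin;
	return NULL;
}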

Besides, Eric pointed out there is one more race condition, in the
time-wait socket hashdance. Only by adding the new node to the tail
of the list before deleting the old one can we avoid the race: a
reader that has already begun traversing the bucket could miss a node
added at the head.
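
The intended writer-side ordering, as a sketch (the function name is
illustrative, and it assumes a tail-insert wrapper around
hlist_nulls_add_tail_rcu()):

#include <linux/rculist_nulls.h>
#include <net/sock.h>

/* Sketch only: replace osk with sk in the same ehash bucket without
 * ever leaving the bucket empty for a concurrent RCU reader.
 */
static void ehash_replace_sketch(struct sock *sk, struct sock *osk,
				 struct hlist_nulls_head *list)
{
	/* 1. Link the new node first, at the tail, so a reader that has
	 *    already walked past the head still reaches it before the
	 *    nulls marker.
	 */
	hlist_nulls_add_tail_rcu(&sk->sk_nulls_node, list);
	/* 2. Only then unlink the old node. */
	__sk_nulls_del_node_init_rcu(osk);
}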

Many thanks to Eric for great help from beginning to end.

Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jason Xing <kernelxing@tencent.com>
Link: https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/
---
v2:
1) add the sk node to the tail of the list to prevent the race.
2) fix the race condition when handling the time-wait socket hashdance.
---
 net/ipv4/inet_hashtables.c    | 10 ++++++++++
 net/ipv4/inet_timewait_sock.c |  6 +++---
 2 files changed, 13 insertions(+), 3 deletions(-)
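
Note: __sk_nulls_add_node_tail_rcu() is not defined in the two files
above; presumably it is the tail-insert counterpart of the existing
__sk_nulls_add_node_rcu() in include/net/sock.h, along these lines
(a sketch, assuming it simply wraps hlist_nulls_add_tail_rcu()):

static inline void __sk_nulls_add_node_tail_rcu(struct sock *sk,
						struct hlist_nulls_head *list)
{
	hlist_nulls_add_tail_rcu(&sk->sk_nulls_node, list);
}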

Comments

Eric Dumazet Jan. 15, 2023, 4:11 p.m. UTC | #1
On Sat, Jan 14, 2023 at 2:27 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> From: Jason Xing <kernelxing@tencent.com>
>
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> Suggested-by: Eric Dumazet <edumazet@google.com>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> Link: https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/
> ---
> v2:
> 1) adding the sk node into the tail of list to prevent the race.
> 2) fix the race condition when handling time-wait socket hashdance.
> ---
>  net/ipv4/inet_hashtables.c    | 10 ++++++++++
>  net/ipv4/inet_timewait_sock.c |  6 +++---
>  2 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> index 24a38b56fab9..b0b54ad55507 100644
> --- a/net/ipv4/inet_hashtables.c
> +++ b/net/ipv4/inet_hashtables.c
> @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         spin_lock(lock);
>         if (osk) {
>                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> +               if (sk_hashed(osk))


nit: this should be:

if (sk_hashed(osk)) {  [1]
    /* multi-line ....
     * .... comment.
     */
   ret = sk_nulls_del_node_init_rcu(osk);
   goto unlock;
}
if (found_dup_sk) {  [2]

1) parentheses needed in [1]
2) No else if in [2], since you added a "goto unlock;"

> +                       /* Before deleting the node, we insert a new one to make
> +                        * sure that the look-up-sk process would not miss either
> +                        * of them and that at least one node would exist in ehash
> +                        * table all the time. Otherwise there's a tiny chance
> +                        * that lookup process could find nothing in ehash table.
> +                        */
> +                       __sk_nulls_add_node_tail_rcu(sk, list);
>                 ret = sk_nulls_del_node_init_rcu(osk);
> +               goto unlock;
>         } else if (found_dup_sk) {
>                 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
>                 if (*found_dup_sk)
> @@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         if (ret)
>                 __sk_nulls_add_node_rcu(sk, list);
>
> +unlock:
>         spin_unlock(lock);

Thanks.
Jason Xing Jan. 16, 2023, 12:36 a.m. UTC | #2
On Mon, Jan 16, 2023 at 12:12 AM Eric Dumazet <edumazet@google.com> wrote:
>
> On Sat, Jan 14, 2023 at 2:27 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
> >
> > From: Jason Xing <kernelxing@tencent.com>
> >
> >
> > Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> > Suggested-by: Eric Dumazet <edumazet@google.com>
> > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > Link: https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/
> > ---
> > v2:
> > 1) adding the sk node into the tail of list to prevent the race.
> > 2) fix the race condition when handling time-wait socket hashdance.
> > ---
> >  net/ipv4/inet_hashtables.c    | 10 ++++++++++
> >  net/ipv4/inet_timewait_sock.c |  6 +++---
> >  2 files changed, 13 insertions(+), 3 deletions(-)
> >
> > diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> > index 24a38b56fab9..b0b54ad55507 100644
> > --- a/net/ipv4/inet_hashtables.c
> > +++ b/net/ipv4/inet_hashtables.c
> > @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> >         spin_lock(lock);
> >         if (osk) {
> >                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> > +               if (sk_hashed(osk))
>
>
> nit: this should be:
>
> if (sk_hashed(osk)) {  [1]
>     /* multi-line ....
>      * .... comment.
>      */
>    ret = sk_nulls_del_node_init_rcu(osk);
>    goto unlock;
> }
> if (found_dup_sk) {  [2]
>
> 1) parentheses needed in [1]
> 2) No else if in [2], since you added a "goto unlock;"
>

I'll do that. It looks much better.

Thanks,
Jason

> > +                       /* Before deleting the node, we insert a new one to make
> > +                        * sure that the look-up-sk process would not miss either
> > +                        * of them and that at least one node would exist in ehash
> > +                        * table all the time. Otherwise there's a tiny chance
> > +                        * that lookup process could find nothing in ehash table.
> > +                        */
> > +                       __sk_nulls_add_node_tail_rcu(sk, list);
> >                 ret = sk_nulls_del_node_init_rcu(osk);
> > +               goto unlock;
> >         } else if (found_dup_sk) {
> >                 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
> >                 if (*found_dup_sk)
> > @@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> >         if (ret)
> >                 __sk_nulls_add_node_rcu(sk, list);
> >
> > +unlock:
> >         spin_unlock(lock);
>
> Thanks.
Jason Xing Jan. 16, 2023, 2:24 a.m. UTC | #3
On Mon, Jan 16, 2023 at 8:36 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> On Mon, Jan 16, 2023 at 12:12 AM Eric Dumazet <edumazet@google.com> wrote:
> >
> > On Sat, Jan 14, 2023 at 2:27 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
> > >
> > > From: Jason Xing <kernelxing@tencent.com>
> > >
> > >
> > > Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> > > Suggested-by: Eric Dumazet <edumazet@google.com>
> > > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > > Link: https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/
> > > ---
> > > v2:
> > > 1) adding the sk node into the tail of list to prevent the race.
> > > 2) fix the race condition when handling time-wait socket hashdance.
> > > ---
> > >  net/ipv4/inet_hashtables.c    | 10 ++++++++++
> > >  net/ipv4/inet_timewait_sock.c |  6 +++---
> > >  2 files changed, 13 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> > > index 24a38b56fab9..b0b54ad55507 100644
> > > --- a/net/ipv4/inet_hashtables.c
> > > +++ b/net/ipv4/inet_hashtables.c
> > > @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> > >         spin_lock(lock);
> > >         if (osk) {
> > >                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> > > +               if (sk_hashed(osk))
> >
> >
> > nit: this should be:
> >
> > if (sk_hashed(osk)) {  [1]
> >     /* multi-line ....
> >      * .... comment.
> >      */
> >    ret = sk_nulls_del_node_init_rcu(osk);
> >    goto unlock;
> > }

Well, after I dug into this part, I found something as below.
If we enter the 'if (osk) {' branch, we should always skip the next
if-statement, 'if (found_dup_sk) {', and return a proper value
depending on whether osk is hashed.
However, the code above would leave @ret set to true when
sk_hashed(osk) returns false; it would then not reach the unlock
label, would add the node to the list anyway, and would finally
return true, which is unexpected.
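
Something along these lines would keep the suggested structure while
still returning false when osk is not hashed (just a sketch, not
necessarily what v3 will look like):

if (osk) {
	WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
	ret = sk_hashed(osk);
	if (ret) {
		/* Insert the new node before deleting the old one so the
		 * lookup never finds the bucket empty.
		 */
		__sk_nulls_add_node_tail_rcu(sk, list);
		sk_nulls_del_node_init_rcu(osk);
	}
	goto unlock;
}
if (found_dup_sk) {
	*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
	if (*found_dup_sk)
		ret = false;
}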

> > if (found_dup_sk) {  [2]
> >
> > 1) parentheses needed in [1]

> > 2) No else if in [2], since you added a "goto unlock;"

I think this modification is fine and makes the code clearer.

Thanks,
Jason

> >
>
> I'll do that. It looks much better.
>
> Thanks,
> Jason
>
> > > +                       /* Before deleting the node, we insert a new one to make
> > > +                        * sure that the look-up-sk process would not miss either
> > > +                        * of them and that at least one node would exist in ehash
> > > +                        * table all the time. Otherwise there's a tiny chance
> > > +                        * that lookup process could find nothing in ehash table.
> > > +                        */
> > > +                       __sk_nulls_add_node_tail_rcu(sk, list);
> > >                 ret = sk_nulls_del_node_init_rcu(osk);
> > > +               goto unlock;
> > >         } else if (found_dup_sk) {
> > >                 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
> > >                 if (*found_dup_sk)
> > > @@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> > >         if (ret)
> > >                 __sk_nulls_add_node_rcu(sk, list);
> > >
> > > +unlock:
> > >         spin_unlock(lock);
> >
> > Thanks.

Patch

diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 24a38b56fab9..b0b54ad55507 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -650,7 +650,16 @@  bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+		if (sk_hashed(osk))
+			/* Before deleting the node, we insert a new one to make
+			 * sure that the look-up-sk process would not miss either
+			 * of them and that at least one node would exist in ehash
+			 * table all the time. Otherwise there's a tiny chance
+			 * that lookup process could find nothing in ehash table.
+			 */
+			__sk_nulls_add_node_tail_rcu(sk, list);
 		ret = sk_nulls_del_node_init_rcu(osk);
+		goto unlock;
 	} else if (found_dup_sk) {
 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
 		if (*found_dup_sk)
@@ -660,6 +669,7 @@  bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
 
+unlock:
 	spin_unlock(lock);
 
 	return ret;
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 1d77d992e6e7..6d681ef52bb2 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -91,10 +91,10 @@  void inet_twsk_put(struct inet_timewait_sock *tw)
 }
 EXPORT_SYMBOL_GPL(inet_twsk_put);
 
-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
+static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
 				   struct hlist_nulls_head *list)
 {
-	hlist_nulls_add_head_rcu(&tw->tw_node, list);
+	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
 }
 
 static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
@@ -147,7 +147,7 @@  void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
 
 	spin_lock(lock);
 
-	inet_twsk_add_node_rcu(tw, &ehead->chain);
+	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
 
 	/* Step 3: Remove SK from hash chain */
 	if (__sk_nulls_del_node_init_rcu(sk))