Message ID | 20230116103341.70956-1-kerneljasonxing@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | [v5,net] tcp: avoid the lookup process failing to get sk in ehash table |
On Mon, Jan 16, 2023 at 11:33 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> From: Jason Xing <kernelxing@tencent.com>
> ...
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> Suggested-by: Eric Dumazet <edumazet@google.com>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> Link: https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/
> ---

Reviewed-by: Eric Dumazet <edumazet@google.com>

Thanks !
From: Jason Xing <kerneljasonxing@gmail.com>
Date: Mon, 16 Jan 2023 18:33:41 +0800
> From: Jason Xing <kernelxing@tencent.com>
>
> While one cpu is looking up the right socket in the ehash table,
> another cpu has finished deleting the request socket and is about
> to add (or is adding) the full socket to the table. This means the
> lookup could miss both of them, even though the chance is small.
>
> Let me draw a call trace map of the server side.
>    CPU 0                            CPU 1
>    -----                            -----
> tcp_v4_rcv()                   syn_recv_sock()
>                                inet_ehash_insert()
>                                -> sk_nulls_del_node_init_rcu(osk)
> __inet_lookup_established()
>                                -> __sk_nulls_add_node_rcu(sk, list)
>
> Notice that CPU 0 is receiving the data after the final ACK of the
> 3-way handshake while CPU 1 is still handling that final ACK.
>
> Why could this be a real problem?
> This case happens only when the final ACK and the first data segment
> are received by different CPUs. The server, receiving data with the
> ACK flag set, tries to find the proper established socket in the
> ehash table, but fails as the map above shows. It then falls back to
> a listener socket and sends a RST because it sees an ACK flag in the
> skb (data), which obeys the RST definition in RFC 793.
>
> Besides, Eric pointed out there is one more race condition, in the
> tw socket hashdance. Only by adding the new node to the tail of the
> list before deleting the old one can we avoid the race in which a
> reader that has already begun the bucket traversal would miss a node
> newly added at the head.
>
> Many thanks to Eric for great help from beginning to end.
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> Suggested-by: Eric Dumazet <edumazet@google.com>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> Link: https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/

I guess there could be a regression if a workload has many long-lived
connections, but the change itself looks good. I left a minor comment
below.

Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>

> ---
> v5:
> 1) adjust the style once more.
>
> v4:
> 1) adjust the code style and make it easier to read.
>
> v3:
> 1) get rid of the else-if statement.
>
> v2:
> 1) add the sk node to the tail of the list to prevent the race.
> 2) fix the race condition when handling the time-wait socket hashdance.
> ---
>  net/ipv4/inet_hashtables.c    | 17 +++++++++++++++--
>  net/ipv4/inet_timewait_sock.c |  6 +++---
>  2 files changed, 18 insertions(+), 5 deletions(-)
>
> [... inet_hashtables.c hunk snipped; the full diff appears at the end of this page ...]
>
> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
> index 1d77d992e6e7..6d681ef52bb2 100644
> --- a/net/ipv4/inet_timewait_sock.c
> +++ b/net/ipv4/inet_timewait_sock.c
> @@ -91,10 +91,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
>  }
>  EXPORT_SYMBOL_GPL(inet_twsk_put);
>
> -static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
> +static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
>                            struct hlist_nulls_head *list)

nit: Please indent here.

>  {
> -       hlist_nulls_add_head_rcu(&tw->tw_node, list);
> +       hlist_nulls_add_tail_rcu(&tw->tw_node, list);
>  }
>
> [... remaining hunk snipped; the full diff appears at the end of this page ...]
> --
> 2.37.3
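To make the window concrete, here is a minimal single-threaded C sketch of the bucket states (illustrative names only; this stands in for the kernel's RCU hlist_nulls machinery and is not the kernel API):

```c
/* Toy model of the ehash race: a bucket holds at most the request
 * socket (osk) and the full socket (sk).  With the old order
 * (delete osk, then insert sk) there is a window where a concurrent
 * lookup sees neither of them. */
#include <stdbool.h>
#include <stdio.h>

struct node { int hash; struct node *next; };

static struct node *bucket;             /* one hash chain */

static bool lookup(int hash)
{
        for (struct node *n = bucket; n; n = n->next)
                if (n->hash == hash)
                        return true;
        return false;                   /* caller falls back to listener */
}

int main(void)
{
        struct node osk = { .hash = 42, .next = NULL };
        struct node sk  = { .hash = 42, .next = NULL };

        bucket = &osk;

        /* Old order: delete first ... */
        bucket = NULL;                  /* sk_nulls_del_node_init_rcu(osk) */
        /* ... a lookup running right now misses both sockets ... */
        printf("during old-order window: %d\n", lookup(42));   /* 0 */
        bucket = &sk;                   /* __sk_nulls_add_node_rcu(sk) */

        /* Fixed order: insert sk (at the tail) first, then unlink osk,
         * so the chain never holds zero nodes with this hash. */
        bucket = &osk;
        osk.next = &sk;                 /* __sk_nulls_add_node_tail_rcu(sk) */
        printf("after tail insert: %d\n", lookup(42));          /* 1 */
        bucket = &sk;                   /* sk_nulls_del_node_init_rcu(osk) */
        printf("after unlink: %d\n", lookup(42));               /* 1 */
        return 0;
}
```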
On Tue, Jan 17, 2023 at 12:54 AM Kuniyuki Iwashima <kuniyu@amazon.com> wrote:
>
> From: Jason Xing <kerneljasonxing@gmail.com>
> Date: Mon, 16 Jan 2023 18:33:41 +0800
> > From: Jason Xing <kernelxing@tencent.com>
> > [... commit message snipped ...]
>
> I guess there could be a regression if a workload has many long-lived

Sorry, I don't understand. This patch does not keep both kinds of
sockets in the ehash table all the time; it only reverses the order of
deleting and adding the sockets. The final result is the same as
before. I'm wondering why it could cause a regression when there are
loads of long-lived connections.

> connections, but the change itself looks good. I left a minor comment
> below.
>
> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>

Thanks for reviewing :)

> > [... changelog and inet_hashtables.c hunk snipped ...]
> >
> > -static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
> > +static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
> >                            struct hlist_nulls_head *list)
>
> nit: Please indent here.

Before I submitted the patch, I ran it through ./scripts/checkpatch.pl,
which printed no warning or error about this line. The reason I didn't
change it is that I wanted to leave it untouched, as it used to be. I
have no clue whether I should send a v7 patch to adjust the format if
necessary.

Thanks,
Jason

> > [... rest of the diff snipped ...]
> > --
> > 2.37.3
From: Jason Xing <kerneljasonxing@gmail.com>
Date: Tue, 17 Jan 2023 10:15:45 +0800
> On Tue, Jan 17, 2023 at 12:54 AM Kuniyuki Iwashima <kuniyu@amazon.com> wrote:
> >
> > [... commit message snipped ...]
> >
> > I guess there could be a regression if a workload has many long-lived
>
> Sorry, I don't understand. This patch does not keep both kinds of
> sockets in the ehash table all the time; it only reverses the order of
> deleting and adding the sockets.

Not really. It also reverses the order of sockets in ehash. We were
able to find newer sockets faster than older ones. If a workload has
many long-lived sockets, they would add constant time to a newer
socket's lookup.

> The final result is the same as
> before. I'm wondering why it could cause a regression when there are
> loads of long-lived connections.
>
> > connections, but the change itself looks good. I left a minor comment
> > below.
> >
> > Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
>
> Thanks for reviewing :)
>
> > > [... diff snipped ...]
> > >
> > > -static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
> > > +static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
> > >                            struct hlist_nulls_head *list)
> >
> > nit: Please indent here.
>
> Before I submitted the patch, I ran it through ./scripts/checkpatch.pl,
> which printed no warning or error about this line. The reason I didn't
> change it is that I wanted to leave it untouched, as it used to be. I
> have no clue whether I should send a v7 patch to adjust the format if
> necessary.

checkpatch.pl does not check everything. You will find that most
functions under net/ipv4/*.c have the same indentation in their
arguments. I would recommend enforcing such a style in your editor,
e.g.:

$ cat ~/.emacs.d/init.el
(setq-default c-default-style "linux")

Thanks,
Kuniyuki
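To visualize Kuniyuki's point: with head insertion the newest socket is the first node a lookup touches, while with tail insertion it sits behind every long-lived socket already in the bucket. A toy singly linked list (plain C, not the kernel's hlist_nulls) shows the resulting chain order:

```c
/* Illustrative only: head vs tail insertion and the resulting
 * traversal order of a hash chain. */
#include <stdio.h>
#include <stdlib.h>

struct node { int id; struct node *next; };

static struct node *add_head(struct node *head, int id)
{
        struct node *n = malloc(sizeof(*n));
        n->id = id;
        n->next = head;
        return n;                       /* newest node becomes the head */
}

static struct node *add_tail(struct node *head, int id)
{
        struct node *n = malloc(sizeof(*n)), **pp = &head;
        n->id = id;
        n->next = NULL;
        while (*pp)                     /* walk to the end of the chain */
                pp = &(*pp)->next;
        *pp = n;                        /* newest node becomes the tail */
        return head;
}

static void show(const char *what, struct node *head)
{
        printf("%s:", what);
        for (; head; head = head->next)
                printf(" %d", head->id);
        printf("\n");
}

int main(void)
{
        struct node *h = NULL, *t = NULL;

        for (int id = 1; id <= 4; id++) {       /* 1 is the oldest */
                h = add_head(h, id);
                t = add_tail(t, id);
        }
        show("head insert (old behaviour)", h); /* 4 3 2 1 */
        show("tail insert (this patch)", t);    /* 1 2 3 4 */
        return 0;
}
```

On a real ehash bucket the chain is short on average, so the extra cost is bounded by the number of long-lived sockets hashing into the same bucket, which is the constant per-lookup overhead Kuniyuki describes.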
On Wed, Jan 18, 2023 at 12:36 AM Kuniyuki Iwashima <kuniyu@amazon.com> wrote:
>
> From: Jason Xing <kerneljasonxing@gmail.com>
> Date: Tue, 17 Jan 2023 10:15:45 +0800
> > [... earlier discussion snipped ...]
>
> Not really. It also reverses the order of sockets in ehash. We were
> able to find newer sockets faster than older ones. If a workload has
> many long-lived sockets, they would add constant time to a newer
> socket's lookup.
>
> [...]
>
> checkpatch.pl does not check everything. You will find that most
> functions under net/ipv4/*.c have the same indentation in their
> arguments. I would recommend enforcing such a style in your editor,
> e.g.:

Well, there are two other lines that have the same indentation problem.
I'm going to clean them both up as well:
1) inet_twsk_add_bind_node()
2) inet_twsk_add_bind2_node()

Thanks,
Jason

> $ cat ~/.emacs.d/init.el
> (setq-default c-default-style "linux")
>
> Thanks,
> Kuniyuki
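For readers unfamiliar with the nit: kernel style aligns a continuation line of a declaration under the first argument of the opening line. Applied to the helper from this patch, the cleaned-up declaration would look like this (a sketch of the follow-up cleanup, not a quote from a posted patch):

```c
/* Continuation line aligned under the first argument, as requested
 * in the review (net/ipv4/inet_timewait_sock.c). */
static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
					struct hlist_nulls_head *list)
{
	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
}
```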
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 24a38b56fab9..f58d73888638 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -650,8 +650,20 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
-		ret = sk_nulls_del_node_init_rcu(osk);
-	} else if (found_dup_sk) {
+		ret = sk_hashed(osk);
+		if (ret) {
+			/* Before deleting the node, we insert a new one to make
+			 * sure that the look-up-sk process would not miss either
+			 * of them and that at least one node would exist in ehash
+			 * table all the time. Otherwise there's a tiny chance
+			 * that lookup process could find nothing in ehash table.
+			 */
+			__sk_nulls_add_node_tail_rcu(sk, list);
+			sk_nulls_del_node_init_rcu(osk);
+		}
+		goto unlock;
+	}
+	if (found_dup_sk) {
 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
 		if (*found_dup_sk)
 			ret = false;
@@ -660,6 +672,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
 
+unlock:
 	spin_unlock(lock);
 
 	return ret;
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 1d77d992e6e7..6d681ef52bb2 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -91,10 +91,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
 }
 EXPORT_SYMBOL_GPL(inet_twsk_put);
 
-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
+static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
 			  struct hlist_nulls_head *list)
 {
-	hlist_nulls_add_head_rcu(&tw->tw_node, list);
+	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
 }
 
 static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
@@ -147,7 +147,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
 
 	spin_lock(lock);
 
-	inet_twsk_add_node_rcu(tw, &ehead->chain);
+	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
 
 	/* Step 3: Remove SK from hash chain */
 	if (__sk_nulls_del_node_init_rcu(sk))
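Background for readers of the diff (an editor's paraphrase, not part of the submission): the lookup this patch protects runs locklessly under RCU, and an ehash chain ends in a "nulls" marker encoding the bucket number, so a reader that is moved to another chain by a concurrent delete-and-reinsert detects the mismatch and restarts. What the marker cannot detect is a moment when the bucket legitimately contains neither osk nor sk, which is why the writer now inserts before deleting. Roughly, the reader looks like this (simplified; see __inet_lookup_established() in net/ipv4/inet_hashtables.c for the real code):

```c
/* Simplified paraphrase of the lockless ehash reader; match checks
 * and refcounting are elided.  Uses the real sk_nulls_for_each_rcu()
 * and get_nulls_value() helpers, but this is not the kernel source. */
struct sock *lookup(struct inet_ehash_bucket *head, unsigned int hash,
		    unsigned int slot)
{
	struct sock *sk;
	const struct hlist_nulls_node *node;
begin:
	sk_nulls_for_each_rcu(sk, node, &head->chain) {
		if (sk->sk_hash != hash)
			continue;
		/* ... address/port match checks elided ... */
		return sk;
	}
	/* Walk ended on a nulls marker for a different bucket: the
	 * reader was moved mid-traversal, so start over. */
	if (get_nulls_value(node) != slot)
		goto begin;
	return NULL;	/* caller falls back to the listener */
}
```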