| Message ID | 20210719091218.2969611-1-eric.dumazet@gmail.com (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 6f20c8adb1813467ea52c1296d52c4e95978cb2f |
| Delegated to: | Netdev Maintainers |
| Series | [net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp |
| Context | Check | Description |
|---|---|---|
| netdev/cover_letter | success | |
| netdev/fixes_present | success | |
| netdev/patch_count | success | |
| netdev/tree_selection | success | Clearly marked for net |
| netdev/subject_prefix | success | |
| netdev/cc_maintainers | warning | 2 maintainers not CCed: yoshfuji@linux-ipv6.org dsahern@kernel.org |
| netdev/source_inline | success | Was 0 now: 0 |
| netdev/verify_signedoff | success | |
| netdev/module_param | success | Was 0 now: 0 |
| netdev/build_32bit | success | Errors and warnings before: 1 this patch: 1 |
| netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
| netdev/verify_fixes | success | |
| netdev/checkpatch | warning | WARNING: 'wont' may be misspelled - perhaps 'won't'? WARNING: line length of 82 exceeds 80 columns |
| netdev/build_allmodconfig_warn | success | Errors and warnings before: 1 this patch: 1 |
| netdev/header_inline | success | |
On Mon, Jul 19, 2021 at 2:12 AM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> tfo_active_disable_stamp is read and written locklessly.
> We need to annotate these accesses appropriately.
>
> Then, we need to perform the atomic_inc(tfo_active_disable_times)
> after the timestamp has been updated, and thus add barriers
> to make sure tcp_fastopen_active_should_disable() wont read
> a stale timestamp.
>
> Fixes: cf1ef3f0719b ("net/tcp_fastopen: Disable active side TFO in certain scenarios")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Wei Wang <weiwan@google.com>
> Cc: Yuchung Cheng <ycheng@google.com>
> Cc: Neal Cardwell <ncardwell@google.com>
> ---

Thanks Eric!
Acked-by: Wei Wang <weiwan@google.com>

>  net/ipv4/tcp_fastopen.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
> index 47c32604d38fca960d2cd56f3588bfd2e390b789..b32af76e21325373126b51423496e3b8d47d97ff 100644
> --- a/net/ipv4/tcp_fastopen.c
> +++ b/net/ipv4/tcp_fastopen.c
> @@ -507,8 +507,15 @@ void tcp_fastopen_active_disable(struct sock *sk)
>  {
>  	struct net *net = sock_net(sk);
>
> +	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
> +	WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
> +
> +	/* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
> +	 * We want net->ipv4.tfo_active_disable_stamp to be updated first.
> +	 */
> +	smp_mb__before_atomic();
>  	atomic_inc(&net->ipv4.tfo_active_disable_times);
> -	net->ipv4.tfo_active_disable_stamp = jiffies;
> +
>  	NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
>  }
>
> @@ -526,10 +533,16 @@ bool tcp_fastopen_active_should_disable(struct sock *sk)
>  	if (!tfo_da_times)
>  		return false;
>
> +	/* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
> +	smp_rmb();
> +
>  	/* Limit timeout to max: 2^6 * initial timeout */
>  	multiplier = 1 << min(tfo_da_times - 1, 6);
> -	timeout = multiplier * tfo_bh_timeout * HZ;
> -	if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
> +
> +	/* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
> +	timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
> +		  multiplier * tfo_bh_timeout * HZ;
> +	if (time_before(jiffies, timeout))
>  		return true;
>
>  	/* Mark check bit so we can check for successful active TFO
> --
> 2.32.0.402.g57bb445576-goog
>
Hello:

This patch was applied to netdev/net.git (refs/heads/master):

On Mon, 19 Jul 2021 02:12:18 -0700 you wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> tfo_active_disable_stamp is read and written locklessly.
> We need to annotate these accesses appropriately.
>
> Then, we need to perform the atomic_inc(tfo_active_disable_times)
> after the timestamp has been updated, and thus add barriers
> to make sure tcp_fastopen_active_should_disable() wont read
> a stale timestamp.
>
> [...]

Here is the summary with links:
  - [net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp
    https://git.kernel.org/netdev/net/c/6f20c8adb181

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
index 47c32604d38fca960d2cd56f3588bfd2e390b789..b32af76e21325373126b51423496e3b8d47d97ff 100644
--- a/net/ipv4/tcp_fastopen.c
+++ b/net/ipv4/tcp_fastopen.c
@@ -507,8 +507,15 @@ void tcp_fastopen_active_disable(struct sock *sk)
 {
 	struct net *net = sock_net(sk);
 
+	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
+	WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
+
+	/* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
+	 * We want net->ipv4.tfo_active_disable_stamp to be updated first.
+	 */
+	smp_mb__before_atomic();
 	atomic_inc(&net->ipv4.tfo_active_disable_times);
-	net->ipv4.tfo_active_disable_stamp = jiffies;
+
 	NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
 }
 
@@ -526,10 +533,16 @@ bool tcp_fastopen_active_should_disable(struct sock *sk)
 	if (!tfo_da_times)
 		return false;
 
+	/* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
+	smp_rmb();
+
 	/* Limit timeout to max: 2^6 * initial timeout */
 	multiplier = 1 << min(tfo_da_times - 1, 6);
-	timeout = multiplier * tfo_bh_timeout * HZ;
-	if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
+
+	/* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
+	timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
+		  multiplier * tfo_bh_timeout * HZ;
+	if (time_before(jiffies, timeout))
 		return true;
 
 	/* Mark check bit so we can check for successful active TFO