Message ID | 20240308112504.29099-3-kerneljasonxing@gmail.com |
---|---|
State | Accepted |
Commit | 683a67da95616c91a85b98e41dc8eefe9f2b29e7 |
Delegated to: | Netdev Maintainers |
Series | annotate data-races around sysctl_tcp_wmem[0] |
On Fri, Mar 8, 2024 at 12:25 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> From: Jason Xing <kernelxing@tencent.com>
>
> When reading wmem[0], it could be changed concurrently without
> READ_ONCE() protection. So add one annotation here.
>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
>  net/ipv4/tcp.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index c5b83875411a..e3904c006e63 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -975,7 +975,7 @@ int tcp_wmem_schedule(struct sock *sk, int copy)
>  	 * Use whatever is left in sk->sk_forward_alloc and tcp_wmem[0]
>  	 * to guarantee some progress.
>  	 */
> -	left = sock_net(sk)->ipv4.sysctl_tcp_wmem[0] - sk->sk_wmem_queued;
> +	left = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[0]) - sk->sk_wmem_queued;

SGTM, you could have split the long line...

Reviewed-by: Eric Dumazet <edumazet@google.com>
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c5b83875411a..e3904c006e63 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -975,7 +975,7 @@ int tcp_wmem_schedule(struct sock *sk, int copy)
	 * Use whatever is left in sk->sk_forward_alloc and tcp_wmem[0]
	 * to guarantee some progress.
	 */
-	left = sock_net(sk)->ipv4.sysctl_tcp_wmem[0] - sk->sk_wmem_queued;
+	left = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[0]) - sk->sk_wmem_queued;
	if (left > 0)
		sk_forced_mem_schedule(sk, min(left, copy));
	return min(copy, sk->sk_forward_alloc);