| Message ID | 20230920232706.498747-3-john.fastabend@gmail.com |
|---|---|
| State | Superseded |
| Delegated to | BPF |
| Series | bpf, sockmap complete fixes for avail bytes |
On Wed, Sep 20, 2023 at 04:27 PM -07, John Fastabend wrote:

> When data is peeked off the receive queue we shouldn't consider it
> copied from the tcp_sock side. When we increment copied_seq this
> confuses tcp_data_ready() because copied_seq can be arbitrarily
> increased. From the application side it results in poll() operations
> not waking up when expected.
>
> Notice the tcp stack without BPF recvmsg programs also does not
> increment copied_seq on a peek.
>
> We broke this when we moved copied_seq into recvmsg to only update it
> when an actual copy was happening. But it wasn't working correctly
> before that either, because tcp_data_ready() tried to use the
> copied_seq value to see if data had been read by the user yet. See
> the fixes tags.
>
> Fixes: e5c6de5fa0258 ("bpf, sockmap: Incorrectly handling copied_seq")
> Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> ---
>  net/ipv4/tcp_bpf.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
> index 81f0dff69e0b..327268203001 100644
> --- a/net/ipv4/tcp_bpf.c
> +++ b/net/ipv4/tcp_bpf.c
> @@ -222,6 +222,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
>  				   int *addr_len)
>  {
>  	struct tcp_sock *tcp = tcp_sk(sk);
> +	int peek = flags & MSG_PEEK;
>  	u32 seq = tcp->copied_seq;
>  	struct sk_psock *psock;
>  	int copied = 0;
> @@ -311,7 +312,8 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
>  		copied = -EAGAIN;
>  	}
>  out:
> -	WRITE_ONCE(tcp->copied_seq, seq);
> +	if (!peek)
> +		WRITE_ONCE(tcp->copied_seq, seq);
>  	tcp_rcv_space_adjust(sk);
>  	if (copied > 0)
>  		__tcp_cleanup_rbuf(sk, copied);

I was surprised to see that we recalculate TCP buffer space and ACK
frames when peeking at the receive queue. But tcp_recvmsg seems to do
the same.

Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>