Message ID | 20211123202535.1843771-1-eric.dumazet@gmail.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 4e1fddc98d2585ddd4792b5e44433dcee7ece001 |
Delegated to: | Netdev Maintainers |
Series | [net] tcp_cubic: fix spurious Hystart ACK train detections for not-cwnd-limited flows |
Hello:

This patch was applied to netdev/net.git (master)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 23 Nov 2021 12:25:35 -0800 you wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> While testing BIG TCP patch series, I was expecting that TCP_RR workloads
> with 80KB requests/answers would send one 80KB TSO packet,
> then being received as a single GRO packet.
>
> It turns out this was not happening, and the root cause was that
> cubic Hystart ACK train was triggering after a few (2 or 3) rounds of RPC.
>
> [...]

Here is the summary with links:
  - [net] tcp_cubic: fix spurious Hystart ACK train detections for not-cwnd-limited flows
    https://git.kernel.org/netdev/net/c/4e1fddc98d25

You are awesome, thank you!
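As the title and the diff below indicate, the problem is specific to flows that are not cwnd-limited. The removed lines sit in cubictcp_cong_avoid() below its early return (taken when the flow is not cwnd-limited), so for request/response traffic the HyStart round was rarely re-armed; the ACK-train clock then kept running across several RPC exchanges and tripped the threshold even though the RTT never grew. The userspace sketch below only illustrates that effect; it is not kernel code, all names and values are hypothetical, and the threshold model (roughly delay_min/2) is a simplification of what tcp_cubic.c actually computes.

```c
/*
 * Minimal userspace sketch of HyStart's ACK-train idea (NOT kernel code).
 * Detection fires once ACKs are still arriving long after the start of
 * the current "round" of data. If the round is never reset, the elapsed
 * time spans several RPC exchanges and the check fires spuriously.
 */
#include <stdio.h>
#include <stdbool.h>

typedef unsigned int u32;

struct toy_hystart {
	u32 round_start_us;	/* timestamp of the start of the current round */
	u32 delay_min_us;	/* smallest RTT observed so far */
};

/* Simplified ACK-train check: elapsed time in this round vs ~delay_min/2. */
static bool ack_train_detected(const struct toy_hystart *ca, u32 now_us)
{
	u32 threshold_us = ca->delay_min_us / 2;

	return (now_us - ca->round_start_us) > threshold_us;
}

int main(void)
{
	struct toy_hystart ca = { .round_start_us = 0, .delay_min_us = 20000 };

	/* Fresh round: 3 ms elapsed, 10 ms threshold -> no detection. */
	printf("fresh round: %s\n",
	       ack_train_detected(&ca, 3000) ? "detected" : "ok");

	/*
	 * Stale round_start (the bug for not-cwnd-limited RPC flows):
	 * 60 ms have passed across several request/response exchanges,
	 * so the check fires even though the RTT never increased.
	 */
	printf("stale round: %s\n",
	       ack_train_detected(&ca, 60000) ? "SPURIOUS detection" : "ok");

	return 0;
}
```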
diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 5e9d9c51164c4d23a90ebd2be0d7bf85098b47dc..e07837e23b3fd2435c87320945528abdee9817cc 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -330,8 +330,6 @@ static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
 		return;
 
 	if (tcp_in_slow_start(tp)) {
-		if (hystart && after(ack, ca->end_seq))
-			bictcp_hystart_reset(sk);
 		acked = tcp_slow_start(tp, acked);
 		if (!acked)
 			return;
@@ -391,6 +389,9 @@ static void hystart_update(struct sock *sk, u32 delay)
 	struct bictcp *ca = inet_csk_ca(sk);
 	u32 threshold;
 
+	if (after(tp->snd_una, ca->end_seq))
+		bictcp_hystart_reset(sk);
+
 	if (hystart_detect & HYSTART_ACK_TRAIN) {
 		u32 now = bictcp_clock_us(sk);
 
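The new reset is keyed on tp->snd_una inside hystart_update(), which is driven from the ACK/RTT sampling path rather than from the congestion-avoidance hook, so a fresh round is armed as soon as the previous round's end sequence is fully acked, even when the sender is not cwnd-limited. For context, bictcp_hystart_reset() (referenced but not shown in the hunks) looks roughly like the sketch below; this is paraphrased from memory of net/ipv4/tcp_cubic.c of this era and may not match the upstream source exactly:

```c
/*
 * Paraphrase of bictcp_hystart_reset() for context only; not part of
 * this patch and possibly not word-for-word identical to upstream.
 */
static inline void bictcp_hystart_reset(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);
	struct bictcp *ca = inet_csk_ca(sk);

	/* Stamp the start of a new round and remember where it ends. */
	ca->round_start = ca->last_ack = bictcp_clock_us(sk);
	ca->end_seq = tp->snd_nxt;
	ca->curr_rtt = ~0U;
	ca->sample_cnt = 0;
}
```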