Message ID | 1ca08110195a81b95a38449f1a12b5593dfad864.1721921695.git.pabeni@redhat.com (mailing list archive)
---|---
State | Accepted, archived
Commit | 2e6def94f2129c60ee24541749ef37b0323ee834
Delegated to | Matthieu Baerts
Series | [mptcp-net,1/2] mptcp: fix bad RCVPRUNED mib accounting
Context | Check | Description |
---|---|---
matttbe/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 26 lines checked |
matttbe/shellcheck | success | MPTCP selftests files have not been modified |
matttbe/build | success | Build and static analysis OK |
matttbe/KVM_Validation__normal | success | Success! ✅ |
matttbe/KVM_Validation__debug | success | Success! ✅ |
matttbe/KVM_Validation__btf__only_bpftest_all_ | success | Success! ✅ |
On Thu, 25 Jul 2024, Paolo Abeni wrote:

> When a subflow receives and discards duplicate data, the mptcp
> stack assumes that the consumed offset inside the current skb is
> zero.
>
> With multiple subflows receiving data simultaneously such assertion
> does not held true. As a result the subflow-level copied_seq will
> be incorrectly increased and later on the same subflow will observe
> a bad mapping, leading to subflow reset.
>
> Address the issue tacking in account the skb consumed offset in

Just one fix but Matthieu can adjust: "Address the issue taking into
account..."

Reviewed-by: Mat Martineau <martineau@kernel.org>

> mptcp_subflow_discard_data().
>
> Fixes: 04e4cd4f7ca4 ("mptcp: cleanup mptcp_subflow_discard_data()")
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  net/mptcp/subflow.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
> index 0e4b5bfbeaa1..a21c712350c3 100644
> --- a/net/mptcp/subflow.c
> +++ b/net/mptcp/subflow.c
> @@ -1230,14 +1230,22 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
>  {
>          struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
>          bool fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
> -        u32 incr;
> +        struct tcp_sock *tp = tcp_sk(ssk);
> +        u32 offset, incr, avail_len;
>
> -        incr = limit >= skb->len ? skb->len + fin : limit;
> +        offset = tp->copied_seq - TCP_SKB_CB(skb)->seq;
> +        if (WARN_ON_ONCE(offset > skb->len))
> +                goto out;
> +
> +        avail_len = skb->len - offset;
> +        incr = limit >= avail_len ? avail_len + fin : limit;
>
> -        pr_debug("discarding=%d len=%d seq=%d", incr, skb->len,
> -                 subflow->map_subflow_seq);
> +        pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
> +                 offset, subflow->map_subflow_seq);
>          MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
>          tcp_sk(ssk)->copied_seq += incr;
> +
> +out:
>          if (!before(tcp_sk(ssk)->copied_seq, TCP_SKB_CB(skb)->end_seq))
>                  sk_eat_skb(ssk, skb);
>          if (mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len)
> --
Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/10108418141

Initiator: Matthieu Baerts (NGI0)
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/9d038da097ea
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=873877

If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details: https://github.com/multipath-tcp/mptcp-upstream-virtme-docker

Please note that despite all the efforts that have already been done to have a
stable test suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
Hi Paolo, Mat,

On 26/07/2024 02:42, Mat Martineau wrote:
> On Thu, 25 Jul 2024, Paolo Abeni wrote:
>
>> When a subflow receives and discards duplicate data, the mptcp
>> stack assumes that the consumed offset inside the current skb is
>> zero.
>>
>> With multiple subflows receiving data simultaneously such assertion
>> does not held true. As a result the subflow-level copied_seq will
>> be incorrectly increased and later on the same subflow will observe
>> a bad mapping, leading to subflow reset.

Thank you for the patches and the reviews!

>> Address the issue tacking in account the skb consumed offset in
>
> Just one fix but Matthieu can adjust: "Address the issue taking into
> account..."

Just did, thanks!

Now in our tree (fixes for -net)

New patches for t/upstream-net and t/upstream:
- 6d60c2cf352d: mptcp: fix bad RCVPRUNED mib accounting
- 2e6def94f212: mptcp: fix duplicate data handling
- Results: a1f9b538328a..0e547869a341 (export-net)
- Results: 8149d851361f..9a749f3dd928 (export)

Tests are now in progress:
- export-net: https://github.com/multipath-tcp/mptcp_net-next/commit/865fc5f5109629347844c647fa71a50484d02dec/checks
- export: https://github.com/multipath-tcp/mptcp_net-next/commit/166ef45101617d160bacd4c3330e32f8f71fcf17/checks

Cheers,
Matt
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 0e4b5bfbeaa1..a21c712350c3 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1230,14 +1230,22 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
 {
         struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
         bool fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
-        u32 incr;
+        struct tcp_sock *tp = tcp_sk(ssk);
+        u32 offset, incr, avail_len;

-        incr = limit >= skb->len ? skb->len + fin : limit;
+        offset = tp->copied_seq - TCP_SKB_CB(skb)->seq;
+        if (WARN_ON_ONCE(offset > skb->len))
+                goto out;
+
+        avail_len = skb->len - offset;
+        incr = limit >= avail_len ? avail_len + fin : limit;

-        pr_debug("discarding=%d len=%d seq=%d", incr, skb->len,
-                 subflow->map_subflow_seq);
+        pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
+                 offset, subflow->map_subflow_seq);
         MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
         tcp_sk(ssk)->copied_seq += incr;
+
+out:
         if (!before(tcp_sk(ssk)->copied_seq, TCP_SKB_CB(skb)->end_seq))
                 sk_eat_skb(ssk, skb);
         if (mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len)
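For illustration only, here is a minimal standalone sketch of the arithmetic the patch introduces. It is plain userspace C, not kernel code: the names `skb_model` and `discard_incr` are hypothetical stand-ins for `struct sk_buff`/`tcp_sock` state, and the unsigned subtraction models the kernel's u32 sequence-number math. It shows how accounting for the already-consumed offset keeps `copied_seq` from being advanced past the end of the skb.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of the skb fields the kernel code reads. */
struct skb_model {
	uint32_t seq;   /* sequence number of the first byte in the skb */
	uint32_t len;   /* payload length */
	uint32_t fin;   /* FIN flag: consumes one extra sequence unit */
};

/* How much copied_seq may advance when discarding duplicate data. */
static uint32_t discard_incr(uint32_t copied_seq, const struct skb_model *skb,
			     uint32_t limit)
{
	/* Bytes of this skb already consumed by the subflow. */
	uint32_t offset = copied_seq - skb->seq;
	uint32_t avail_len;

	if (offset > skb->len)          /* mirrors the WARN_ON_ONCE() bail-out */
		return 0;

	avail_len = skb->len - offset;  /* only the unconsumed part counts */
	return limit >= avail_len ? avail_len + skb->fin : limit;
}

int main(void)
{
	/* 1000 bytes starting at seq 5000; 300 of them were already consumed. */
	struct skb_model skb = { .seq = 5000, .len = 1000, .fin = 0 };
	uint32_t copied_seq = 5300;

	/* Old formula (limit covering the whole skb) would use skb.len = 1000,
	 * overshooting end_seq (6000) by 300; the fix advances by 700 only. */
	printf("old incr: %" PRIu32 ", new incr: %" PRIu32 "\n",
	       skb.len, discard_incr(copied_seq, &skb, UINT32_MAX));
	return 0;
}
```

With the values in main(), the old computation would leave the subflow-level copied_seq 300 bytes past the skb's end_seq, which is exactly the state that later triggers the bad-mapping check and the subflow reset described in the commit message.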
When a subflow receives and discards duplicate data, the mptcp
stack assumes that the consumed offset inside the current skb is
zero.

With multiple subflows receiving data simultaneously such assertion
does not held true. As a result the subflow-level copied_seq will
be incorrectly increased and later on the same subflow will observe
a bad mapping, leading to subflow reset.

Address the issue tacking in account the skb consumed offset in
mptcp_subflow_discard_data().

Fixes: 04e4cd4f7ca4 ("mptcp: cleanup mptcp_subflow_discard_data()")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/subflow.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)