Message ID | 627c15b052e2ecf4a2c6a897a83da7e25db55f32.1622132917.git.pabeni@redhat.com
---|---
State | Accepted, archived |
Commit | 61f6d7b1a174d10fac2d27ca4e941dfe5e8a6f7c |
Delegated to: | Matthieu Baerts |
Series | mptcp: some smaller cleanup
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index a08ea8867716..4ac55e696f52 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -686,9 +686,6 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
 	struct sock *sk = (struct sock *)msk;
 	unsigned int moved = 0;
 
-	if (inet_sk_state_load(sk) == TCP_CLOSE)
-		return false;
-
 	mptcp_data_lock(sk);
 	__mptcp_move_skbs_from_subflow(msk, ssk, &moved);
Currently we check the msk state to avoid enqueuing new skbs at msk
shutdown time.

This check is racy - we can't acquire the msk socket lock here - and
useless, as the caller has already checked the subflow field
'disposable', which covers the same scenario in a race-free manner:
that field is read and updated under the ssk socket lock.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 3 ---
 1 file changed, 3 deletions(-)
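For context, the race-free guard the commit message refers to sits in the caller, on the subflow's data-ready path, before move_skbs_to_msk() is invoked. The sketch below is an illustrative reconstruction of that pattern, not the verbatim upstream function: the signature of move_skbs_to_msk() and the 'disposable' field come from the patch and commit message above, while the rest of the body is simplified and omits receive-buffer accounting.

/* Simplified sketch of the caller-side check that makes the removed
 * TCP_CLOSE test redundant. Not the exact upstream mptcp_data_ready();
 * wake-up and buffer handling details are left out.
 */
void mptcp_data_ready(struct sock *sk, struct sock *ssk)
{
	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
	struct mptcp_sock *msk = mptcp_sk(sk);

	/* 'disposable' is read and updated under the ssk socket lock,
	 * which this path holds, so the test is race free: once the
	 * subflow is being disposed of at msk shutdown time, no new
	 * skbs reach the msk receive queue.
	 */
	if (unlikely(subflow->disposable))
		return;

	if (move_skbs_to_msk(msk, ssk))
		sk->sk_data_ready(sk);
}

Because the ssk-lock-protected 'disposable' check already stops data from being forwarded during shutdown, dropping the lockless inet_sk_state_load(sk) == TCP_CLOSE test inside move_skbs_to_msk() removes a racy read without changing behaviour.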