
[net-next,00/13] mptcp: improve multiple xmit streams support

Message ID cover.1605175834.git.pabeni@redhat.com

Message

Paolo Abeni Nov. 12, 2020, 10:47 a.m. UTC
This series improves MPTCP handling of multiple concurrent
xmit streams.

The to-be-transmitted data is enqueued to a subflow only when
the send window is open, keeping the subflows' xmit queues shorter
and allowing for faster switch-over.
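
As a rough illustration of this policy (a minimal userspace sketch,
not the kernel code - all names below are made up), data is handed to
a subflow only while the peer's advertised window has room, and the
remainder stays in the msk-level queue:

#include <stdint.h>
#include <stdio.h>

/* Toy MPTCP-level sender state; illustrative fields, not kernel ones.
 * The real code also needs wrap-around safe sequence comparisons. */
struct toy_msk {
        uint64_t snd_nxt;   /* next MPTCP-level sequence to hand to a subflow */
        uint64_t wnd_end;   /* right edge of the window advertised by the peer */
};

/* Bytes that can still be pushed without overrunning the send window. */
static uint64_t toy_send_room(const struct toy_msk *msk)
{
        return msk->wnd_end > msk->snd_nxt ? msk->wnd_end - msk->snd_nxt : 0;
}

/* Move at most @len bytes to a subflow; the rest stays queued at msk level. */
static uint64_t toy_push_to_subflow(struct toy_msk *msk, uint64_t len)
{
        uint64_t room = toy_send_room(msk);
        uint64_t pushed = len < room ? len : room;

        msk->snd_nxt += pushed;
        return pushed;
}

int main(void)
{
        struct toy_msk msk = { .snd_nxt = 1000, .wnd_end = 1500 };

        /* only 500 of the 800 bytes fit in the currently open window */
        printf("pushed %llu bytes\n",
               (unsigned long long)toy_push_to_subflow(&msk, 800));
        return 0;
}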

The above requires more accurate msk socket state tracking
and some additional infrastructure to allow pushing the data
pending in the msk xmit queue as soon as the MPTCP-level send window
opens (patches 6-10).
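
Along the same lines (again purely illustrative, the helpers below are
invented for this sketch), once the pending data is accounted at the
msk level, a DATA_ACK that opens the window can immediately push
whatever is still queued:

#include <stdint.h>
#include <stdio.h>

/* Toy msk state with explicit accounting of not-yet-pushed data. */
struct toy_msk {
        uint64_t snd_nxt;   /* next sequence to hand to a subflow */
        uint64_t wnd_end;   /* right edge of the peer's receive window */
        uint64_t pending;   /* bytes sitting in the msk-level xmit queue */
};

/* Called when an incoming DATA_ACK moves the window: push as much
 * pending data as the newly opened window allows. */
static void toy_window_update(struct toy_msk *msk, uint64_t new_wnd_end)
{
        uint64_t room, pushed;

        if (new_wnd_end > msk->wnd_end)
                msk->wnd_end = new_wnd_end;

        room = msk->wnd_end > msk->snd_nxt ? msk->wnd_end - msk->snd_nxt : 0;
        pushed = msk->pending < room ? msk->pending : room;
        msk->snd_nxt += pushed;
        msk->pending -= pushed;
}

int main(void)
{
        struct toy_msk msk = { .snd_nxt = 2000, .wnd_end = 2000, .pending = 300 };

        toy_window_update(&msk, 2500);  /* the window opens by 500 bytes */
        printf("pending after update: %llu\n", (unsigned long long)msk.pending);
        return 0;
}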

As a side effect, the MPTCP socket could enqueue data to subflows
after close() time, to completely spool the data sitting in the
msk xmit queue. Dealing with this requires some infrastructure and
core TCP changes (patches 1-5).
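
The shape of the core TCP change is the usual split of a function into
a helper that expects the caller to hold the socket lock plus a locking
wrapper, so the MPTCP layer can run the close logic while it already
holds the lock and still has data to spool. A generic userspace sketch
of that pattern (not the actual tcp_close()/__tcp_close() code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;
static int queued_bytes = 300;

/* Body of the close logic; the caller must already hold sk_lock. */
static void toy_close_locked(void)
{
        /* flush whatever is still queued before tearing down */
        printf("flushing %d queued bytes, then closing\n", queued_bytes);
        queued_bytes = 0;
}

/* Public entry point: take the lock, run the body, drop the lock. */
static void toy_close(void)
{
        pthread_mutex_lock(&sk_lock);
        toy_close_locked();
        pthread_mutex_unlock(&sk_lock);
}

int main(void)
{
        /* a caller already holding the lock uses the "locked" variant directly */
        pthread_mutex_lock(&sk_lock);
        toy_close_locked();
        pthread_mutex_unlock(&sk_lock);

        toy_close();    /* everyone else goes through the plain wrapper */
        return 0;
}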

Finally, patches 11-12 introduce more accurate tracking of the other
end's receive window.
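
A minimal sketch of the right-edge tracking idea (illustrative names
only): take the window edge as the maximum ever advertised, so a stale
ack carrying a smaller window does not make the sender believe the
window shrank:

#include <stdint.h>
#include <stdio.h>

/* Right edge of the peer's advertised window, in MPTCP sequence space;
 * the real code needs wrap-around safe comparisons. */
static uint64_t wnd_end;

/* Update on each incoming ack: ack sequence plus advertised window,
 * only ever moving the edge forward. */
static void toy_update_wnd_end(uint64_t ack_seq, uint32_t window)
{
        uint64_t new_end = ack_seq + window;

        if (new_end > wnd_end)
                wnd_end = new_end;
}

int main(void)
{
        toy_update_wnd_end(1000, 500);  /* edge moves to 1500 */
        toy_update_wnd_end(900, 400);   /* stale ack: edge stays at 1500 */
        printf("window right edge: %llu\n", (unsigned long long)wnd_end);
        return 0;
}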

Overall this refactors the MPTCP xmit path without introducing
new features - the new code is covered by the existing self-tests.

Florian Westphal (2):
  mptcp: rework poll+nospace handling
  mptcp: keep track of advertised windows right edge

Paolo Abeni (11):
  tcp: factor out tcp_build_frag()
  mptcp: use tcp_build_frag()
  tcp: factor out __tcp_close() helper
  mptcp: introduce mptcp_schedule_work
  mptcp: reduce the arguments of mptcp_sendmsg_frag
  mptcp: add accounting for pending data
  mptcp: introduce MPTCP snd_nxt
  mptcp: refactor shutdown and close
  mptcp: move page frag allocation in mptcp_sendmsg()
  mptcp: try to push pending data on snd una updates
  mptcp: send explicit ack on delayed ack_seq incr

 include/net/tcp.h      |   4 +
 net/ipv4/tcp.c         | 128 +++---
 net/mptcp/options.c    |  30 +-
 net/mptcp/pm.c         |   3 +-
 net/mptcp/pm_netlink.c |   6 +-
 net/mptcp/protocol.c   | 969 ++++++++++++++++++++++++-----------------
 net/mptcp/protocol.h   |  72 ++-
 net/mptcp/subflow.c    |  33 +-
 8 files changed, 758 insertions(+), 487 deletions(-)

Comments

Jakub Kicinski Nov. 12, 2020, 3:40 p.m. UTC | #1
On Thu, 12 Nov 2020 11:47:58 +0100 Paolo Abeni wrote:
> This series improves MPTCP handling of multiple concurrent
> xmit streams.
> 
> [...]

Hi Paolo!

Would you mind resending? Looks like patchwork got confused about patch
6 not belonging to the series.
Paolo Abeni Nov. 12, 2020, 4:49 p.m. UTC | #2
On Thu, 2020-11-12 at 07:40 -0800, Jakub Kicinski wrote:
> On Thu, 12 Nov 2020 11:47:58 +0100 Paolo Abeni wrote:
> > [...]
> 
> Hi Paolo!
> 
> Would you mind resending? Looks like patchwork got confused about patch
> 6 not belonging to the series.

Sure, no problem.

AFAICS, the headers look correct?!? In 6/13:

Message-Id: <653b54ab33745d31c601ca0cd0754d181170838f.1605175834.git.pabeni@redhat.com>
In-Reply-To: <cover.1605175834.git.pabeni@redhat.com>

In 0/13:
Message-Id: <cover.1605175834.git.pabeni@redhat.com>

Cheers,

Paolo