[mptcp-next,v11,0/9] use bpf_iter in bpf schedulers

Message ID cover.1733800334.git.tanggeliang@kylinos.cn

Message

Geliang Tang Dec. 10, 2024, 3:31 a.m. UTC
From: Geliang Tang <tanggeliang@kylinos.cn>

v11:
v10 fails to run if another squash-to patchset currently under review
(Squash to "Add mptcp_subflow bpf_iter support") is merged before this
set. v11 fixes this issue and runs regardless of whether that squash-to
patchset is merged before or after this set.

Compared with v10, only patches 3, 5, and 8 have been modified:
 - use mptcp_subflow_tcp_sock instead of bpf_mptcp_subflow_tcp_sock in
   patch 3 and patch 5.
 - drop bpf_mptcp_sched_kfunc_set, use bpf_mptcp_common_kfunc_set instead
   in patch 8.

v10:
 - drop mptcp_subflow_set_scheduled() helper and WRITE_ONCE() in BPF.
 - add new bpf helper bpf_mptcp_send_info_to_ssk() for burst scheduler.

v9:
 - merge 'Fixes for "use bpf_iter in bpf schedulers" v8' into this set.
 - rebased on "add netns helpers" v4

v8:
 - address Mat's comments in v7.
 - move sk_stream_memory_free check inside bpf_for_each() loop (see the
   sketch after this list).
 - implement mptcp_subflow_set_scheduled helper in BPF.
 - add cleanup patches into this set again.
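
For illustration only (not a quote from the patches), and assuming
sk_stream_memory_free() is usable from the BPF program (e.g. as a kfunc
or a local helper), the v8 change places that check on each subflow's
socket inside the loop, roughly:

        bpf_for_each(mptcp_subflow, subflow, msk) {
                struct sock *ssk = mptcp_subflow_tcp_sock(subflow);

                /* Skip subflows whose send buffer has no room left. */
                if (!sk_stream_memory_free(ssk))
                        continue;
                ... ...
        }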

v7:
 - move cleanup patches out of this set.
 - rebased.

v6:
 - rebased to "add mptcp_subflow bpf_iter" v10

v5:
 - patch 2, drop mptcp_sock_type and mptcp_subflow_type.
 - patch 3, revert "bpf: Export more bpf_burst related functions"
 - patch 4, merge "bpf: Export more bpf_burst related functions" into it.

v4:
 - patch 2, a new cleanup for "bpf: Add bpf_mptcp_sched_ops".
 - patch 3 should be reverted.
 - patch 8, register kfunc_set.

v3:
 - rebased.
 - put the "drop has_bytes_sent" squash-to patch into this set.

v2:
 - update bpf_rr and bpf_burst

With the newly added mptcp_subflow bpf_iter, we can get rid of the
subflows array "contexts" in struct mptcp_sched_data. This set uses
the bpf_for_each(mptcp_subflow) helper to update all the BPF
schedulers:

        bpf_for_each(mptcp_subflow, subflow, msk) {
                ... ...
                mptcp_subflow_set_scheduled(subflow, true);
        }
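
As an illustration only (not part of the patches above), a "pick the
first subflow" scheduler built on this pattern might look roughly like
the sketch below; the include, SEC() annotations, program name and
get_subflow signature are assumptions modeled on the existing bpf_first
selftest, and struct_ops registration (init/release/name) is omitted
for brevity:

        /* Illustrative sketch only: assumes the selftests' mptcp_bpf.h
         * header declares bpf_for_each(mptcp_subflow) and the MPTCP
         * kfuncs used below.
         */
        #include "mptcp_bpf.h"
        #include <bpf/bpf_tracing.h>

        char _license[] SEC("license") = "GPL";

        SEC("struct_ops")
        int BPF_PROG(bpf_first_get_subflow, struct mptcp_sock *msk,
                     struct mptcp_sched_data *data)
        {
                struct mptcp_subflow_context *subflow;

                /* Schedule the first subflow and stop iterating. */
                bpf_for_each(mptcp_subflow, subflow, msk) {
                        mptcp_subflow_set_scheduled(subflow, true);
                        break;
                }

                return 0;
        }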

Geliang Tang (9):
  bpf: Add bpf_mptcp_send_info_to_ssk
  Squash to "selftests/bpf: Add bpf_bkup scheduler & test"
  Squash to "selftests/bpf: Add bpf_rr scheduler & test"
  Squash to "selftests/bpf: Add bpf_red scheduler & test"
  Squash to "selftests/bpf: Add bpf_burst scheduler & test"
  Squash to "selftests/bpf: Add bpf_first scheduler & test"
  Revert "mptcp: add sched_data helpers"
  Squash to "bpf: Export mptcp packet scheduler helpers"
  mptcp: drop subflow contexts in mptcp_sched_data

 include/net/mptcp.h                           |  2 -
 include/uapi/linux/bpf.h                      |  7 +++
 net/mptcp/bpf.c                               | 47 ++++++++--------
 net/mptcp/protocol.c                          |  5 --
 net/mptcp/protocol.h                          |  7 ++-
 net/mptcp/sched.c                             | 22 --------
 tools/include/uapi/linux/bpf.h                |  7 +++
 tools/testing/selftests/bpf/progs/mptcp_bpf.h |  3 --
 .../selftests/bpf/progs/mptcp_bpf_bkup.c      | 16 ++----
 .../selftests/bpf/progs/mptcp_bpf_burst.c     | 54 ++++++++-----------
 .../selftests/bpf/progs/mptcp_bpf_first.c     |  8 ++-
 .../selftests/bpf/progs/mptcp_bpf_red.c       |  8 ++-
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 31 +++++------
 13 files changed, 93 insertions(+), 124 deletions(-)

Comments

MPTCP CI Dec. 10, 2024, 4:34 a.m. UTC | #1
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/12248989279

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/58a73001abca
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=916204


If there are any issues, you can reproduce them using the same environment as
the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable test
suite when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)