[mptcp-next,v4,0/4] BPF path manager, part 4

Message ID: cover.1738919954.git.tanggeliang@kylinos.cn

Geliang Tang Feb. 7, 2025, 9:29 a.m. UTC
From: Geliang Tang <tanggeliang@kylinos.cn>

v4:
 - include a new patch "define BPF path manager type".

 - add new interfaces:
	created established closed
	listener_created listener_closed

 - rename interfaces as:
	address_announced address_removed
	subflow_established subflow_closed
	get_priority set_priority

 - rename functions as:
	mptcp_pm_validate
	mptcp_pm_register
	mptcp_pm_unregister
	mptcp_pm_initialize
	mptcp_pm_release

v3:
 - rename the 2nd parameter of get_local_id() from 'local' to 'skc'.
 - keep the 'msk_sport' check in mptcp_userspace_pm_get_local_id().
 - return 'err' instead of '0' in userspace_pm_subflow_create().
 - drop 'ret' variable in mptcp_pm_data_reset().
 - fix typos in commit log.

v2:
 - update get_local_id interface in patch 2.

The get_addr() and dump_addr() interfaces of the BPF userspace PM have been
dropped, as Matt suggested.

In order to implement a BPF userspace path manager, it is necessary to
unify the interfaces of the path manager. This set contains some cleanups
and refactoring that unify those interfaces in kernel space. Finally, it
defines a struct mptcp_pm_ops for a userspace path manager like this:

struct mptcp_pm_ops {
        int (*created)(struct mptcp_sock *msk);
        int (*established)(struct mptcp_sock *msk);
        int (*closed)(struct mptcp_sock *msk);
        int (*address_announced)(struct mptcp_sock *msk,
                                 struct mptcp_pm_addr_entry *local);
        int (*address_removed)(struct mptcp_sock *msk, u8 id);
        int (*subflow_established)(struct mptcp_sock *msk,
                                   struct mptcp_pm_addr_entry *local,
                                   struct mptcp_addr_info *remote);
        int (*subflow_closed)(struct mptcp_sock *msk,
                              struct mptcp_pm_addr_entry *local,
                              struct mptcp_addr_info *remote);
        int (*get_local_id)(struct mptcp_sock *msk,
                            struct mptcp_pm_addr_entry *skc);
        bool (*get_priority)(struct mptcp_sock *msk,
                             struct mptcp_addr_info *skc);
        int (*set_priority)(struct mptcp_sock *msk,
                            struct mptcp_pm_addr_entry *local,
                            struct mptcp_addr_info *remote);
        int (*listener_created)(struct mptcp_sock *msk);
        int (*listener_closed)(struct mptcp_sock *msk);

        u8                      type;
        struct module           *owner;
        struct list_head        list;

        void (*init)(struct mptcp_sock *msk);
        void (*release)(struct mptcp_sock *msk);
} ____cacheline_aligned_in_smp;
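
To make the intended usage concrete, here is a minimal sketch of a module
registering its own path manager through this structure, built around the
mptcp_pm_register() / mptcp_pm_unregister() helpers named in the v4
changelog above. The example_* names, the callback bodies, the type value,
and the assumption that mptcp_pm_register() returns 0 on success are all
illustrative, not part of the series:

#include <linux/module.h>
#include <net/mptcp.h>

/* Hypothetical callbacks; a real PM would create or close subflows here. */
static int example_pm_created(struct mptcp_sock *msk)
{
        /* A new MPTCP connection exists; nothing to set up in this sketch. */
        return 0;
}

static int example_pm_get_local_id(struct mptcp_sock *msk,
                                   struct mptcp_pm_addr_entry *skc)
{
        /* Map the subflow source address ('skc', renamed in v3) to a local ID. */
        return 0;
}

static struct mptcp_pm_ops example_pm_ops = {
        .created      = example_pm_created,
        .get_local_id = example_pm_get_local_id,
        .type         = 2,      /* assumed value beyond the built-in PM types */
        .owner        = THIS_MODULE,
};

static int __init example_pm_init(void)
{
        /* Registration presumably runs mptcp_pm_validate() on the ops first. */
        return mptcp_pm_register(&example_pm_ops);
}

static void __exit example_pm_exit(void)
{
        mptcp_pm_unregister(&example_pm_ops);
}

module_init(example_pm_init);
module_exit(example_pm_exit);
MODULE_LICENSE("GPL");

How unset callbacks are handled (rejected by mptcp_pm_validate() or treated
as no-ops) is defined by the series itself; the sketch above only shows the
registration shape.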

Geliang Tang (4):
  mptcp: define struct mptcp_pm_ops
  mptcp: define BPF path manager type
  mptcp: register default userspace pm
  mptcp: initialize and release mptcp_pm_ops

 include/net/mptcp.h      |  32 +++++
 net/mptcp/pm.c           | 109 ++++++++++++++-
 net/mptcp/pm_netlink.c   |  11 +-
 net/mptcp/pm_userspace.c | 294 ++++++++++++++++++++++++---------------
 net/mptcp/protocol.c     |  10 +-
 net/mptcp/protocol.h     |  15 +-
 6 files changed, 355 insertions(+), 116 deletions(-)

Comments

MPTCP CI Feb. 7, 2025, 10:45 a.m. UTC | #1
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/13197339964

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/2bfd65fd00d3
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=931513


If there are any issues, you can reproduce them using the same environment as
the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable test
suite when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)