
[mptcp-next,v5,0/7] BPF path manager, part 6

Message ID cover.1743054942.git.tanggeliang@kylinos.cn (mailing list archive)

Message

Geliang Tang March 27, 2025, 6:04 a.m. UTC
From: Geliang Tang <tanggeliang@kylinos.cn>

v5:
 - add comment "call from the subflow/msk context" for mptcp_sched_ops.
 - add new helper mptcp_pm_accept_new_subflow (sketched below).
 - add "bool allow" parameter for mptcp_pm_accept_new_subflow, and drop
   .allow_new_subflow interface.
 - use a copy of pm->status in mptcp_pm_worker.
 - rename mptcp_pm_create_subflow_or_signal_addr with "__" prefix.
 - drop "!update_subflows" in mptcp_pm_subflow_check_next.
 - add_addr_received/rm_addr_received interfaces will be added in the
   next series.
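
For illustration only, here is a minimal sketch of how the new
mptcp_pm_accept_new_subflow helper mentioned above could tie the accept
decision to the subflow accounting. The signature, the locking and the use
of msk->pm.subflows are assumptions and may not match the actual patch:

    /* Hypothetical sketch only -- not the code from this series: a single
     * helper both the in-kernel and the userspace PM can call once a new
     * subflow request has been evaluated, so the accounting is done in one
     * place, under the PM lock.
     */
    static void mptcp_pm_accept_new_subflow(struct mptcp_sock *msk, bool allow)
    {
            spin_lock_bh(&msk->pm.lock);
            if (allow)
                    msk->pm.subflows++;     /* account the accepted subflow */
            spin_unlock_bh(&msk->pm.lock);
    }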

v4:
 - address Matt's comments in v3. 
 - update pm locks in mptcp_pm_worker (see the sketch after this list).
 - move the lock inside mptcp_pm_create_subflow_or_signal_addr.
 - move the lock inside mptcp_pm_nl_add_addr_received.
 - invoke add_addr_received interface from mptcp_pm_worker.
 - invoke rm_addr_received interface from mptcp_pm_rm_addr_or_subflow.
 - simply call mptcp_pm_close_subflow() in mptcp_pm_subflow_check_next.
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1742804266.git.tanggeliang@kylinos.cn/
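
Purely as an illustration of the lock rework mentioned above (combined with
the pm->status copy from v5): mptcp_pm_worker could snapshot the status bits
under the PM lock, then call helpers that take the lock internally. The
status bit names come from the existing enum mptcp_pm_status; the helper
names and everything else in this sketch are assumptions:

    /* Hypothetical sketch only -- the actual rework may differ: work on a
     * copy of pm->status so that the per-event helpers can take the PM lock
     * themselves instead of relying on the caller holding it.
     */
    void mptcp_pm_worker(struct mptcp_sock *msk)
    {
            struct mptcp_pm_data *pm = &msk->pm;
            u8 status;

            spin_lock_bh(&pm->lock);
            status = pm->status;
            /* only clear the events handled below, other bits persist */
            pm->status &= ~(BIT(MPTCP_PM_ESTABLISHED) |
                            BIT(MPTCP_PM_SUBFLOW_ESTABLISHED));
            spin_unlock_bh(&pm->lock);

            if (status & BIT(MPTCP_PM_ESTABLISHED))
                    __mptcp_pm_established(msk);            /* hypothetical */
            if (status & BIT(MPTCP_PM_SUBFLOW_ESTABLISHED))
                    __mptcp_pm_subflow_established(msk);    /* hypothetical */
    }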

v3:
 - merge 'bugfixes for "BPF path manager, part 6, v2"' into this set.
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1742521397.git.tanggeliang@kylinos.cn/

v2:
 - address Matt's comments in v1. 
 - add add_addr_received and rm_addr_received interfaces.
 - drop subflow_check_next interface.
 - add a "required" or "optional" comment for a group of interfaces in
   struct mptcp_pm_ops.

v1:
- https://patchwork.kernel.org/project/mptcp/cover/cover.1741685260.git.tanggeliang@kylinos.cn/

New interfaces for struct mptcp_pm_ops.
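
Purely as an illustration, the extended struct could look roughly like the
sketch below. The hook names follow the patch titles; the exact signatures,
the required/optional grouping and the trailing fields are assumptions
borrowed from the existing ops structs:

    /* Hypothetical sketch only -- see the individual patches for the real
     * definition.
     */
    struct mptcp_pm_ops {
            /* required */
            int     (*get_local_id)(struct mptcp_sock *msk,
                                    struct mptcp_pm_addr_entry *local);

            /* optional */
            bool    (*accept_new_subflow)(const struct mptcp_sock *msk);
            void    (*established)(struct mptcp_sock *msk);
            void    (*subflow_established)(struct mptcp_sock *msk);

            char                    name[MPTCP_PM_NAME_MAX];
            struct module           *owner;
            struct list_head        list;
    };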

Geliang Tang (7):
  Squash to "mptcp: pm: add get_local_id() interface"
  mptcp: pm: add accept_new_subflow() interface
  mptcp: pm: use accept_new_subflow in allow_new_subflow
  mptcp: pm: update pm lock order in mptcp_pm_worker
  mptcp: pm: add established() interface
  mptcp: pm: add subflow_established() interface
  mptcp: pm: drop is_userspace in subflow_check_next

 include/net/mptcp.h      |  7 +++-
 net/mptcp/pm.c           | 88 ++++++++++++++++++++--------------------
 net/mptcp/pm_kernel.c    | 52 ++++++++++++++++++------
 net/mptcp/pm_userspace.c |  7 ++++
 net/mptcp/protocol.h     |  1 +
 net/mptcp/subflow.c      |  6 +--
 6 files changed, 99 insertions(+), 62 deletions(-)

Comments

MPTCP CI March 27, 2025, 6:30 a.m. UTC | #1
Hi Geliang,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/14100178407

Status: failure
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/55c1c9f50291
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=947653

Feel free to reply to this email if you cannot access the logs, if you need
some support to fix the error, if this does not seem to be caused by your
modifications, or if the error is a false positive.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
MPTCP CI March 27, 2025, 7:15 a.m. UTC | #2
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/14100178411

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/55c1c9f50291
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=947653


If there are any issues, you can reproduce them using the same environment as
the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that, despite all the efforts already made to have a stable test
suite when it is executed on a public CI like this one, it is possible that
some reported issues are not due to your modifications. Still, do not hesitate
to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
Matthieu Baerts March 27, 2025, 10:26 a.m. UTC | #3
Hi Geliang,

On 27/03/2025 07:30, MPTCP CI wrote:
> Hi Geliang,
> 
> Thank you for your modifications, that's great!
> 
> But sadly, our CI spotted some issues with it when trying to build it.
> 
> You can find more details there:
> 
>   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/14100178407

It looks like a false positive because one function has been renamed; we can
ignore it for the moment.

Cheers,
Matt
Matthieu Baerts March 27, 2025, 11:27 a.m. UTC | #4
Hi Geliang,

On 27/03/2025 07:04, Geliang Tang wrote:
> From: Geliang Tang <tanggeliang@kylinos.cn>
> 
> v5:
>  - add comment "call from the subflow/msk context" for mptcp_sched_ops.
>  - add new helper mptcp_pm_accept_new_subflow (sketched below).
>  - add "bool allow" parameter for mptcp_pm_accept_new_subflow, and drop
>    .allow_new_subflow interface.
>  - use a copy of pm->status in mptcp_pm_worker.
>  - rename mptcp_pm_create_subflow_or_signal_addr with "__" prefix.
>  - drop "!update_subflows" in mptcp_pm_subflow_check_next.
>  - add_addr_received/rm_addr_received interfaces will be added in the
>    next series.

Thank you for the update. I think more changes are needed around the PM
lock. Please see my individual comments.

Do not hesitate to tell me whether you see what I mean about the modifications
around the PM lock, or if you would prefer that I (or someone else) take over
this part.

Cheers,
Matt