
[mptcp-next,0/2] mptcp: sched: reduce size for unused data

Message ID 20250221-mptcp-sched-data-ptr-v1-0-dbaec476fa6b@kernel.org (mailing list archive)

Message

Matthieu Baerts Feb. 21, 2025, 3:08 p.m. UTC
I was about to send the "mptcp: sched: split get_subflow interface into
two" commit when I noticed, once again, that the "data" structure was no
longer used.

Because it will be removed later, when the "use bpf_iter in bpf
schedulers" series is applied, a first step is to save 64 bytes of stack
for each scheduling operation.

The first patch can be placed after "mptcp: sched: split get_subflow
interface into two", and the second one before, or squashed into,
"mptcp: add sched_data helpers". That commit can be dropped when the
"use bpf_iter in bpf schedulers" series is applied; there is no need to
keep it for future use. The same goes for the struct mptcp_sched_data.

Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
---
Matthieu Baerts (NGI0) (2):
      mptcp: sched: reduce size for unused data
      Squash to "mptcp: add sched_data helpers"


---
base-commit: 98f1e7131283309b4684ce977f28c9703d16b516
change-id: 20250221-mptcp-sched-data-ptr-8740fecdbbae

Best regards,

Comments

Matthieu Baerts Feb. 21, 2025, 3:33 p.m. UTC | #1
Hello,

On 21/02/2025 16:08, Matthieu Baerts (NGI0) wrote:
> I was about to send the "mptcp: sched: split get_subflow interface
> into two" commit when I noticed, once again, that the "data"
> structure was no longer used.
> 
> Because it will be removed later, when the "use bpf_iter in bpf
> schedulers" series is applied, a first step is to save 64 bytes of
> stack for each scheduling operation.

I suggest applying this now so that it does not block the other patch:
this small modification is trivial, and it only needs to be undone in
our tree.

Cheers,
Matt
Matthieu Baerts Feb. 21, 2025, 3:56 p.m. UTC | #2
On 21/02/2025 16:33, Matthieu Baerts wrote:
> Hello,
> 
> On 21/02/2025 16:08, Matthieu Baerts (NGI0) wrote:
>> I was about to send the "mptcp: sched: split get_subflow interface
>> into two" commit when I noticed, once again, that the "data"
>> structure was no longer used.
>>
>> Because it will be removed later, when the "use bpf_iter in bpf
>> schedulers" series is applied, a first step is to save 64 bytes of
>> stack for each scheduling operation.
> 
> I suggest applying this now so that it does not block the other
> patch: this small modification is trivial, and it only needs to be
> undone in our tree.

Just did:

New patches for t/upstream:
- 70c366a0a40e: mptcp: sched: reduce size for unused data
- 46de2641147f: conflict in t/mptcp-add-sched_data-helpers-2
- f3c111c797d2: "squashed" patch 2/2 in "mptcp: add sched_data helpers"
- Results: 9afc9d6ddf8b..b5dbbb68e432 (export)

Tests are now in progress:

- export:
https://github.com/multipath-tcp/mptcp_net-next/commit/e62e41e330fd89c538f76910906c828684a9ce44/checks

Cheers,
Matt
MPTCP CI Feb. 21, 2025, 4:23 p.m. UTC | #3
Hi Matthieu,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/13459912721

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/4f91f5c000d9
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=936464


If there are some issues, you can reproduce them using the same
environment as the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when it is executed on a public CI like this one, it is
possible that some reported issues are not due to your modifications.
Still, do not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)