[RFC,bpf-next,v2,2/2,no_merge] selftests/bpf: Benchmark runtime performance with private stack

Message ID 20240711164209.1658101-1-yonghong.song@linux.dev (mailing list archive)
State Superseded
Delegated to: BPF
Headers show
Series [RFC,bpf-next,v2,1/2] bpf: Support private stack for bpf progs | expand

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-13 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-6 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-10 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-18 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-8 success Logs for s390x-gcc / test (test_maps, false, 360) / test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-4 pending Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-7 success Logs for s390x-gcc / build-release
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1054 this patch: 1054
netdev/build_tools success Errors and warnings before: 1 this patch: 1
netdev/cc_maintainers warning 11 maintainers not CCed: kpsingh@kernel.org shuah@kernel.org haoluo@google.com john.fastabend@gmail.com jolsa@kernel.org linux-kselftest@vger.kernel.org martin.lau@linux.dev mykolal@fb.com song@kernel.org eddyz87@gmail.com sdf@fomichev.me
netdev/build_clang success Errors and warnings before: 1128 this patch: 1128
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success net selftest script(s) already in Makefile
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 7785 this patch: 7785
netdev/checkpatch fail ERROR: code indent should use tabs where possible ERROR: space required before the open parenthesis '(' WARNING: added, moved or deleted file(s), does MAINTAINERS need updating? WARNING: externs should be avoided in .c files WARNING: line length of 100 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns WARNING: line length of 89 exceeds 80 columns WARNING: please, no spaces at the start of a line
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 6 this patch: 6
netdev/source_inline success Was 0 now: 0

Commit Message

Yonghong Song July 11, 2024, 4:42 p.m. UTC
This patch shows benchmark results comparing a bpf program running
with vs. without a private stack. The patch is not intended to land
since it hacks an existing kernel interface in order to make a proper
comparison possible.
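
The no-private-stack variant is produced by setting the new
BPF_F_DISABLE_PRIVATE_STACK prog flag from userspace before load. A
minimal sketch of the toggle (the helper name is illustrative; the
actual code is in bench_private_stack.c below):

/* Sketch only: OR the new flag into the program's load flags before the
 * skeleton is loaded, forcing the no-private-stack code path.
 */
static void force_no_private_stack(struct private_stack *skel)
{
	__u32 old_flags = bpf_program__flags(skel->progs.stack0);

	bpf_program__set_flags(skel->progs.stack0,
			       old_flags | BPF_F_DISABLE_PRIVATE_STACK);
}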

The jited code without private stack:

0:  f3 0f 1e fa             endbr64
4:  0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
9:  66 90                   xchg   ax,ax
b:  55                      push   rbp
c:  48 89 e5                mov    rbp,rsp
f:  f3 0f 1e fa             endbr64
13: 48 81 ec 60 00 00 00    sub    rsp,0x60
1a: 48 bf 00 20 1a 00 00    movabs rdi,0xffffc900001a2000
21: c9 ff ff
24: 48 8b 77 00             mov    rsi,QWORD PTR [rdi+0x0]
28: 48 83 c6 01             add    rsi,0x1
2c: 48 89 77 00             mov    QWORD PTR [rdi+0x0],rsi
30: 31 ff                   xor    edi,edi
32: 48 89 7d f8             mov    QWORD PTR [rbp-0x8],rdi
36: be 05 00 00 00          mov    esi,0x5
3b: 89 75 f8                mov    DWORD PTR [rbp-0x8],esi
3e: 48 89 7d f0             mov    QWORD PTR [rbp-0x10],rdi
42: 48 89 7d e8             mov    QWORD PTR [rbp-0x18],rdi
46: 48 89 7d e0             mov    QWORD PTR [rbp-0x20],rdi
4a: 48 89 7d d8             mov    QWORD PTR [rbp-0x28],rdi
4e: 48 89 7d d0             mov    QWORD PTR [rbp-0x30],rdi
52: 48 89 7d c0             mov    QWORD PTR [rbp-0x40],rdi
56: 48 89 7d c8             mov    QWORD PTR [rbp-0x38],rdi
5a: 48 89 7d b8             mov    QWORD PTR [rbp-0x48],rdi
5e: 48 89 7d b0             mov    QWORD PTR [rbp-0x50],rdi
62: 48 89 7d a8             mov    QWORD PTR [rbp-0x58],rdi
66: 48 89 7d a0             mov    QWORD PTR [rbp-0x60],rdi
6a: bf 0a 00 00 00          mov    edi,0xa
6f: 89 7d c0                mov    DWORD PTR [rbp-0x40],edi
72: 48 89 ee                mov    rsi,rbp
75: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
79: 48 bf 00 f8 4b 0a 81    movabs rdi,0xffff88810a4bf800
80: 88 ff ff
83: e8 e0 1d 5f e1          call   0xffffffffe15f1e68
88: 48 85 c0                test   rax,rax
8b: 74 04                   je     0x91
8d: 48 83 c0 60             add    rax,0x60
91: 48 85 c0                test   rax,rax
94: 75 1f                   jne    0xb5
96: 48 89 ee                mov    rsi,rbp
99: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
9d: 48 89 ea                mov    rdx,rbp
a0: 48 83 c2 a0             add    rdx,0xffffffffffffffa0
a4: 48 bf 00 f8 4b 0a 81    movabs rdi,0xffff88810a4bf800
ab: 88 ff ff
ae: 31 c9                   xor    ecx,ecx
b0: e8 73 d7 5e e1          call   0xffffffffe15ed828
b5: 48 89 ee                mov    rsi,rbp
b8: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
bc: 48 bf 00 f8 4b 0a 81    movabs rdi,0xffff88810a4bf800
c3: 88 ff ff
c6: e8 0d e0 5e e1          call   0xffffffffe15ee0d8
cb: 31 c0                   xor    eax,eax
cd: c9                      leave
ce: c3                      ret

The jited code with private stack:

0:  f3 0f 1e fa             endbr64
4:  0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
9:  66 90                   xchg   ax,ax
b:  55                      push   rbp
c:  48 89 e5                mov    rbp,rsp
f:  f3 0f 1e fa             endbr64
13: 49 b9 40 af c1 08 7e    movabs r9,0x607e08c1af40
1a: 60 00 00
1d: 65 4c 03 0c 25 00 1a    add    r9,QWORD PTR gs:0x21a00
24: 02 00
26: 48 bf 00 60 68 00 00    movabs rdi,0xffffc90000686000
2d: c9 ff ff
30: 48 8b 77 00             mov    rsi,QWORD PTR [rdi+0x0]
34: 48 83 c6 01             add    rsi,0x1
38: 48 89 77 00             mov    QWORD PTR [rdi+0x0],rsi
3c: 31 ff                   xor    edi,edi
3e: 49 89 79 f8             mov    QWORD PTR [r9-0x8],rdi
42: be 05 00 00 00          mov    esi,0x5
47: 41 89 71 f8             mov    DWORD PTR [r9-0x8],esi
4b: 49 89 79 f0             mov    QWORD PTR [r9-0x10],rdi
4f: 49 89 79 e8             mov    QWORD PTR [r9-0x18],rdi
53: 49 89 79 e0             mov    QWORD PTR [r9-0x20],rdi
57: 49 89 79 d8             mov    QWORD PTR [r9-0x28],rdi
5b: 49 89 79 d0             mov    QWORD PTR [r9-0x30],rdi
5f: 49 89 79 c0             mov    QWORD PTR [r9-0x40],rdi
63: 49 89 79 c8             mov    QWORD PTR [r9-0x38],rdi
67: 49 89 79 b8             mov    QWORD PTR [r9-0x48],rdi
6b: 49 89 79 b0             mov    QWORD PTR [r9-0x50],rdi
6f: 49 89 79 a8             mov    QWORD PTR [r9-0x58],rdi
73: 49 89 79 a0             mov    QWORD PTR [r9-0x60],rdi
77: bf 0a 00 00 00          mov    edi,0xa
7c: 41 89 79 c0             mov    DWORD PTR [r9-0x40],edi
80: 4c 89 ce                mov    rsi,r9
83: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
87: 48 bf 00 e0 22 0c 81    movabs rdi,0xffff88810c22e000
8e: 88 ff ff
91: 41 51                   push   r9
93: e8 10 1d 5f e1          call   0xffffffffe15f1da8
98: 41 59                   pop    r9
9a: 48 85 c0                test   rax,rax
9d: 74 04                   je     0xa3
9f: 48 83 c0 60             add    rax,0x60
a3: 48 85 c0                test   rax,rax
a6: 75 23                   jne    0xcb
a8: 4c 89 ce                mov    rsi,r9
ab: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
af: 4c 89 ca                mov    rdx,r9
b2: 48 83 c2 a0             add    rdx,0xffffffffffffffa0
b6: 48 bf 00 e0 22 0c 81    movabs rdi,0xffff88810c22e000
bd: 88 ff ff
c0: 31 c9                   xor    ecx,ecx
c2: 41 51                   push   r9
c4: e8 9f d6 5e e1          call   0xffffffffe15ed768
c9: 41 59                   pop    r9
cb: 4c 89 ce                mov    rsi,r9
ce: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
d2: 48 bf 00 e0 22 0c 81    movabs rdi,0xffff88810c22e000
d9: 88 ff ff
dc: 41 51                   push   r9
de: e8 35 df 5e e1          call   0xffffffffe15ee018
e3: 41 59                   pop    r9
e5: 31 c0                   xor    eax,eax
e7: c9                      leave
e8: c3                      ret

It is clear that the main overhead is the push/pop of r9 around the
three helper calls: r9 is a caller-saved register in the x86-64
calling convention, so the private stack pointer has to be saved and
restored across each call.

Five runs of the benchmarks:

[root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
no-private-stack:    0.662 ± 0.019M/s (drops 0.000 ± 0.000M/s)
private-stack:       0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
[root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
no-private-stack:    0.684 ± 0.005M/s (drops 0.000 ± 0.000M/s)
private-stack:       0.676 ± 0.008M/s (drops 0.000 ± 0.000M/s)
[root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
no-private-stack:    0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
private-stack:       0.683 ± 0.006M/s (drops 0.000 ± 0.000M/s)
[root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
no-private-stack:    0.680 ± 0.011M/s (drops 0.000 ± 0.000M/s)
private-stack:       0.626 ± 0.050M/s (drops 0.000 ± 0.000M/s)
[root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
no-private-stack:    0.686 ± 0.007M/s (drops 0.000 ± 0.000M/s)
private-stack:       0.683 ± 0.003M/s (drops 0.000 ± 0.000M/s)

The performance is very similar between private-stack and no-private-stack.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/bpf.h                           |   3 +-
 include/uapi/linux/bpf.h                      |   3 +
 kernel/bpf/core.c                             |   2 +-
 kernel/bpf/syscall.c                          |   4 +-
 tools/include/uapi/linux/bpf.h                |   3 +
 tools/testing/selftests/bpf/Makefile          |   2 +
 tools/testing/selftests/bpf/bench.c           |   6 +
 .../bpf/benchs/bench_private_stack.c          | 142 ++++++++++++++++++
 .../bpf/benchs/run_bench_private_stack.sh     |   9 ++
 .../selftests/bpf/progs/private_stack.c       |  40 +++++
 10 files changed, 211 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/benchs/bench_private_stack.c
 create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh
 create mode 100644 tools/testing/selftests/bpf/progs/private_stack.c

Comments

Alexei Starovoitov July 12, 2024, 8:16 p.m. UTC | #1
On Thu, Jul 11, 2024 at 9:42 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
> It is clear that the main overhead is the push/pop r9 for
> three calls.
>
> Five runs of the benchmarks:
>
> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> no-private-stack:    0.662 ± 0.019M/s (drops 0.000 ± 0.000M/s)
> private-stack:       0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> no-private-stack:    0.684 ± 0.005M/s (drops 0.000 ± 0.000M/s)
> private-stack:       0.676 ± 0.008M/s (drops 0.000 ± 0.000M/s)
> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> no-private-stack:    0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
> private-stack:       0.683 ± 0.006M/s (drops 0.000 ± 0.000M/s)
> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> no-private-stack:    0.680 ± 0.011M/s (drops 0.000 ± 0.000M/s)
> private-stack:       0.626 ± 0.050M/s (drops 0.000 ± 0.000M/s)
> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> no-private-stack:    0.686 ± 0.007M/s (drops 0.000 ± 0.000M/s)
> private-stack:       0.683 ± 0.003M/s (drops 0.000 ± 0.000M/s)
>
> The performance is very similar between private-stack and no-private-stack.

I'm not so sure.
What is the "perf report" before/after?
Are you sure that bench spends enough time inside the program itself?
By the look of it it seems that most of the time will be in hashmap
and syscall overhead.

You need the batched one that uses a for loop and is attached to a helper.
See commit 7df4e597ea2c ("selftests/bpf: add batched, mostly in-kernel
BPF triggering benchmarks")

I think the next version doesn't need RFC tag. patch 1 lgtm.
Yonghong Song July 12, 2024, 8:48 p.m. UTC | #2
On 7/12/24 1:16 PM, Alexei Starovoitov wrote:
> On Thu, Jul 11, 2024 at 9:42 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>> It is clear that the main overhead is the push/pop r9 for
>> three calls.
>>
>> Five runs of the benchmarks:
>>
>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>> no-private-stack:    0.662 ± 0.019M/s (drops 0.000 ± 0.000M/s)
>> private-stack:       0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>> no-private-stack:    0.684 ± 0.005M/s (drops 0.000 ± 0.000M/s)
>> private-stack:       0.676 ± 0.008M/s (drops 0.000 ± 0.000M/s)
>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>> no-private-stack:    0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
>> private-stack:       0.683 ± 0.006M/s (drops 0.000 ± 0.000M/s)
>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>> no-private-stack:    0.680 ± 0.011M/s (drops 0.000 ± 0.000M/s)
>> private-stack:       0.626 ± 0.050M/s (drops 0.000 ± 0.000M/s)
>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>> no-private-stack:    0.686 ± 0.007M/s (drops 0.000 ± 0.000M/s)
>> private-stack:       0.683 ± 0.003M/s (drops 0.000 ± 0.000M/s)
>>
>> The performance is very similar between private-stack and no-private-stack.
> I'm not so sure.
> What is the "perf report" before/after?
> Are you sure that bench spends enough time inside the program itself?
> By the look of it it seems that most of the time will be in hashmap
> and syscall overhead.
>
> You need the batched one that uses a for loop and is attached to a helper.
> See commit 7df4e597ea2c ("selftests/bpf: add batched, mostly in-kernel
> BPF triggering benchmarks")

Okay, I see. The current approach is one trigger, one prog run, where
each prog run exercises 3 syscalls. I should add a loop to the bpf
program so that the bpf program itself accounts for the majority of
the runtime. Will do this in the next revision, plus running 'perf report'.
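
Roughly something like the sketch below, reusing the htab map and
struct data_t from progs/private_stack.c; the loop bound and counting
hits per iteration are only illustrative:

/* Hypothetical sketch of the planned change: run the map operations in
 * a bounded loop so the program itself dominates the runtime. The loop
 * count (1024) and per-iteration hits accounting are illustrative.
 */
SEC("tp/syscalls/sys_enter_getpgid")
int stack0(void *ctx)
{
	struct data_t key = {}, value = {};
	int i;

	for (i = 0; i < 1024; i++) {
		hits++;
		key.d[10] = 5;
		value.d[8] = 10;
		if (!bpf_map_lookup_elem(&htab, &key))
			bpf_map_update_elem(&htab, &key, &value, 0);
		bpf_map_delete_elem(&htab, &key);
	}

	return 0;
}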

>
> I think the next version doesn't need RFC tag. patch 1 lgtm.
Andrii Nakryiko July 12, 2024, 9:47 p.m. UTC | #3
On Fri, Jul 12, 2024 at 1:48 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
> On 7/12/24 1:16 PM, Alexei Starovoitov wrote:
> > On Thu, Jul 11, 2024 at 9:42 AM Yonghong Song <yonghong.song@linux.dev> wrote:
> >>
> >> It is clear that the main overhead is the push/pop r9 for
> >> three calls.
> >>
> >> Five runs of the benchmarks:
> >>
> >> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> >> no-private-stack:    0.662 ± 0.019M/s (drops 0.000 ± 0.000M/s)
> >> private-stack:       0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
> >> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> >> no-private-stack:    0.684 ± 0.005M/s (drops 0.000 ± 0.000M/s)
> >> private-stack:       0.676 ± 0.008M/s (drops 0.000 ± 0.000M/s)
> >> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> >> no-private-stack:    0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
> >> private-stack:       0.683 ± 0.006M/s (drops 0.000 ± 0.000M/s)
> >> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> >> no-private-stack:    0.680 ± 0.011M/s (drops 0.000 ± 0.000M/s)
> >> private-stack:       0.626 ± 0.050M/s (drops 0.000 ± 0.000M/s)
> >> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
> >> no-private-stack:    0.686 ± 0.007M/s (drops 0.000 ± 0.000M/s)
> >> private-stack:       0.683 ± 0.003M/s (drops 0.000 ± 0.000M/s)
> >>
> >> The performance is very similar between private-stack and no-private-stack.
> > I'm not so sure.
> > What is the "perf report" before/after?
> > Are you sure that bench spends enough time inside the program itself?
> > By the look of it it seems that most of the time will be in hashmap
> > and syscall overhead.
> >
> > You need the batched one that uses a for loop and is attached to a helper.
> > See commit 7df4e597ea2c ("selftests/bpf: add batched, mostly in-kernel
> > BPF triggering benchmarks")
>
> Okay, I see. The current approach is one trigger, one prog run, where
> each prog run exercises 3 syscalls. I should add a loop to the bpf
> program so that the bpf program itself accounts for the majority of
> the runtime. Will do this in the next revision, plus running 'perf report'.

please also benchmark on real hardware, VM will not give reliable results

>
> >
> > I think the next version doesn't need RFC tag. patch 1 lgtm.
>
Yonghong Song July 12, 2024, 11:42 p.m. UTC | #4
On 7/12/24 2:47 PM, Andrii Nakryiko wrote:
> On Fri, Jul 12, 2024 at 1:48 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>> On 7/12/24 1:16 PM, Alexei Starovoitov wrote:
>>> On Thu, Jul 11, 2024 at 9:42 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>>>> It is clear that the main overhead is the push/pop r9 for
>>>> three calls.
>>>>
>>>> Five runs of the benchmarks:
>>>>
>>>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>>>> no-private-stack:    0.662 ± 0.019M/s (drops 0.000 ± 0.000M/s)
>>>> private-stack:       0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
>>>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>>>> no-private-stack:    0.684 ± 0.005M/s (drops 0.000 ± 0.000M/s)
>>>> private-stack:       0.676 ± 0.008M/s (drops 0.000 ± 0.000M/s)
>>>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>>>> no-private-stack:    0.673 ± 0.017M/s (drops 0.000 ± 0.000M/s)
>>>> private-stack:       0.683 ± 0.006M/s (drops 0.000 ± 0.000M/s)
>>>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>>>> no-private-stack:    0.680 ± 0.011M/s (drops 0.000 ± 0.000M/s)
>>>> private-stack:       0.626 ± 0.050M/s (drops 0.000 ± 0.000M/s)
>>>> [root@arch-fb-vm1 bpf]# ./benchs/run_bench_private_stack.sh
>>>> no-private-stack:    0.686 ± 0.007M/s (drops 0.000 ± 0.000M/s)
>>>> private-stack:       0.683 ± 0.003M/s (drops 0.000 ± 0.000M/s)
>>>>
>>>> The performance is very similar between private-stack and no-private-stack.
>>> I'm not so sure.
>>> What is the "perf report" before/after?
>>> Are you sure that bench spends enough time inside the program itself?
>>> By the look of it it seems that most of the time will be in hashmap
>>> and syscall overhead.
>>>
>>> You need the batched one that uses a for loop and is attached to a helper.
>>> See commit 7df4e597ea2c ("selftests/bpf: add batched, mostly in-kernel
>>> BPF triggering benchmarks")
>> Okay, I see. The current approach is one trigger, one prog run, where
>> each prog run exercises 3 syscalls. I should add a loop to the bpf
>> program so that the bpf program itself accounts for the majority of
>> the runtime. Will do this in the next revision, plus running 'perf report'.
> please also benchmark on real hardware, VM will not give reliable results

Sure. Will do.

>
>>> I think the next version doesn't need RFC tag. patch 1 lgtm.

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 19a3f5355363..2f8708465c19 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1551,7 +1551,8 @@  struct bpf_prog {
 				call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
 				call_get_func_ip:1, /* Do we call get_func_ip() */
 				tstamp_type_access:1, /* Accessed __sk_buff->tstamp_type */
-				sleepable:1;	/* BPF program is sleepable */
+				sleepable:1,	/* BPF program is sleepable */
+				disable_private_stack:1; /* Disable private stack */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 35bcf52dbc65..98af8ea8a4d6 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1409,6 +1409,9 @@  enum {
 
 /* Do not translate kernel bpf_arena pointers to user pointers */
 	BPF_F_NO_USER_CONV	= (1U << 18),
+
+/* Disable private stack */
+	BPF_F_DISABLE_PRIVATE_STACK	= (1U << 19),
 };
 
 /* Flags for BPF_PROG_QUERY. */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index f69eb0c5fe03..297e76a8f463 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2815,7 +2815,7 @@  EXPORT_SYMBOL_GPL(bpf_prog_free);
 
 bool bpf_enable_private_stack(struct bpf_prog *prog)
 {
-	if (prog->aux->stack_depth <= 64)
+	if (prog->disable_private_stack || prog->aux->stack_depth <= 64)
 		return false;
 
 	switch (prog->aux->prog->type) {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 89162ddb4747..bb2b632c9c2c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2715,7 +2715,8 @@  static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 				 BPF_F_XDP_HAS_FRAGS |
 				 BPF_F_XDP_DEV_BOUND_ONLY |
 				 BPF_F_TEST_REG_INVARIANTS |
-				 BPF_F_TOKEN_FD))
+				 BPF_F_TOKEN_FD |
+				 BPF_F_DISABLE_PRIVATE_STACK))
 		return -EINVAL;
 
 	bpf_prog_load_fixup_attach_type(attr);
@@ -2828,6 +2829,7 @@  static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
+	prog->disable_private_stack = !!(attr->prog_flags & BPF_F_DISABLE_PRIVATE_STACK);
 	prog->aux->attach_btf = attach_btf;
 	prog->aux->attach_btf_id = attr->attach_btf_id;
 	prog->aux->dst_prog = dst_prog;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 35bcf52dbc65..98af8ea8a4d6 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1409,6 +1409,9 @@  enum {
 
 /* Do not translate kernel bpf_arena pointers to user pointers */
 	BPF_F_NO_USER_CONV	= (1U << 18),
+
+/* Disable private stack */
+	BPF_F_DISABLE_PRIVATE_STACK	= (1U << 19),
 };
 
 /* Flags for BPF_PROG_QUERY. */
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index dd49c1d23a60..44a6a43da71c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -733,6 +733,7 @@  $(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.ske
 $(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h
 $(OUTPUT)/bench_htab_mem.o: $(OUTPUT)/htab_mem_bench.skel.h
 $(OUTPUT)/bench_bpf_crypto.o: $(OUTPUT)/crypto_bench.skel.h
+$(OUTPUT)/bench_private_stack.o: $(OUTPUT)/private_stack.skel.h
 $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
 $(OUTPUT)/bench: LDLIBS += -lm
 $(OUTPUT)/bench: $(OUTPUT)/bench.o \
@@ -753,6 +754,7 @@  $(OUTPUT)/bench: $(OUTPUT)/bench.o \
 		 $(OUTPUT)/bench_local_storage_create.o \
 		 $(OUTPUT)/bench_htab_mem.o \
 		 $(OUTPUT)/bench_bpf_crypto.o \
+		 $(OUTPUT)/bench_private_stack.o \
 		 #
 	$(call msg,BINARY,,$@)
 	$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index 627b74ae041b..4f4867cd80f9 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -282,6 +282,7 @@  extern struct argp bench_local_storage_create_argp;
 extern struct argp bench_htab_mem_argp;
 extern struct argp bench_trigger_batch_argp;
 extern struct argp bench_crypto_argp;
+extern struct argp bench_private_stack_argp;
 
 static const struct argp_child bench_parsers[] = {
 	{ &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 },
@@ -296,6 +297,7 @@  static const struct argp_child bench_parsers[] = {
 	{ &bench_htab_mem_argp, 0, "hash map memory benchmark", 0 },
 	{ &bench_trigger_batch_argp, 0, "BPF triggering benchmark", 0 },
 	{ &bench_crypto_argp, 0, "bpf crypto benchmark", 0 },
+	{ &bench_private_stack_argp, 0, "bpf private stack benchmark", 0 },
 	{},
 };
 
@@ -542,6 +544,8 @@  extern const struct bench bench_local_storage_create;
 extern const struct bench bench_htab_mem;
 extern const struct bench bench_crypto_encrypt;
 extern const struct bench bench_crypto_decrypt;
+extern const struct bench bench_no_private_stack;
+extern const struct bench bench_private_stack;
 
 static const struct bench *benchs[] = {
 	&bench_count_global,
@@ -596,6 +600,8 @@  static const struct bench *benchs[] = {
 	&bench_htab_mem,
 	&bench_crypto_encrypt,
 	&bench_crypto_decrypt,
+	&bench_no_private_stack,
+	&bench_private_stack,
 };
 
 static void find_benchmark(void)
diff --git a/tools/testing/selftests/bpf/benchs/bench_private_stack.c b/tools/testing/selftests/bpf/benchs/bench_private_stack.c
new file mode 100644
index 000000000000..3d8aa695823e
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/bench_private_stack.c
@@ -0,0 +1,142 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#include <argp.h>
+#include "bench.h"
+#include "private_stack.skel.h"
+
+static struct ctx {
+	struct private_stack *skel;
+} ctx;
+
+static struct {
+	bool disable_private_stack;
+} args = {
+	.disable_private_stack = 0,
+};
+
+enum {
+	ARG_DISABLE_PRIVATE_STACK = 3000,
+};
+
+static const struct argp_option opts[] = {
+	{ "disable-private-stack", ARG_DISABLE_PRIVATE_STACK, "DISABLE_PRIVATE_STACK",
+		0, "Disable private stack" },
+	{},
+};
+
+static error_t private_stack_parse_arg(int key, char *arg, struct argp_state *state)
+{
+	long ret;
+
+	switch (key) {
+	case ARG_DISABLE_PRIVATE_STACK:
+		ret = strtoul(arg, NULL, 10);
+		if (ret != 1)
+			argp_usage(state);
+		args.disable_private_stack = 1;
+		break;
+	default:
+		return ARGP_ERR_UNKNOWN;
+	}
+
+	return 0;
+}
+
+const struct argp bench_private_stack_argp = {
+	.options = opts,
+	.parser = private_stack_parse_arg,
+};
+
+static void private_stack_validate(void)
+{
+	if (env.consumer_cnt != 0) {
+		fprintf(stderr,
+			"The private stack benchmarks do not support consumer\n");
+		exit(1);
+	}
+}
+
+static void common_setup(bool disable_private_stack)
+{
+	struct private_stack *skel;
+	struct bpf_link *link;
+	__u32 old_flags;
+	int err;
+
+	skel = private_stack__open();
+	if (!skel) {
+		fprintf(stderr, "failed to open skeleton\n");
+		exit(1);
+	}
+	ctx.skel = skel;
+
+	if (disable_private_stack) {
+		old_flags = bpf_program__flags(skel->progs.stack0);
+		bpf_program__set_flags(skel->progs.stack0, old_flags | BPF_F_DISABLE_PRIVATE_STACK);
+	}
+
+	err = private_stack__load(skel);
+	if (err) {
+		fprintf(stderr, "failed to load program\n");
+		exit(1);
+	}
+
+	link = bpf_program__attach(skel->progs.stack0);
+	if (!link) {
+		fprintf(stderr, "failed to attach program\n");
+		exit(1);
+	}
+}
+
+static void no_private_stack_setup(void)
+{
+	common_setup(true);
+}
+
+static void private_stack_setup(void)
+{
+	common_setup(false);
+}
+
+static void private_stack_measure(struct bench_res *res)
+{
+	struct private_stack *skel = ctx.skel;
+	unsigned long total_hits = 0;
+	static unsigned long last_hits;
+
+	total_hits = skel->bss->hits;
+	res->hits = total_hits - last_hits;
+	res->drops = 0;
+	res->false_hits = 0;
+	last_hits = total_hits;
+}
+
+static void *private_stack_producer(void *unused)
+{
+	while (true)
+		syscall(__NR_getpgid);
+	return NULL;
+}
+
+const struct bench bench_no_private_stack = {
+	.name = "no-private-stack",
+	.argp = &bench_private_stack_argp,
+	.validate = private_stack_validate,
+	.setup = no_private_stack_setup,
+	.producer_thread = private_stack_producer,
+	.measure = private_stack_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
+
+const struct bench bench_private_stack = {
+	.name = "private-stack",
+	.argp = &bench_private_stack_argp,
+	.validate = private_stack_validate,
+	.setup = private_stack_setup,
+	.producer_thread = private_stack_producer,
+	.measure = private_stack_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
diff --git a/tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh b/tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh
new file mode 100755
index 000000000000..e032353f7fa6
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh
@@ -0,0 +1,9 @@ 
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source ./benchs/run_common.sh
+
+set -eufo pipefail
+
+summarize "no-private-stack: " "$($RUN_BENCH --disable-private-stack 1 no-private-stack)"
+summarize "private-stack: " "$($RUN_BENCH private-stack)"
diff --git a/tools/testing/selftests/bpf/progs/private_stack.c b/tools/testing/selftests/bpf/progs/private_stack.c
new file mode 100644
index 000000000000..3b062e5b27e9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/private_stack.c
@@ -0,0 +1,40 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#include <linux/types.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct data_t {
+	unsigned int d[12];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 10);
+	__type(key, struct data_t);
+	__type(value, struct data_t);
+} htab SEC(".maps");
+
+unsigned long hits = 0;
+
+SEC("tp/syscalls/sys_enter_getpgid")
+int stack0(void *ctx)
+{
+	struct data_t key = {}, value = {};
+	struct data_t *pvalue;
+
+	hits++;
+	key.d[10] = 5;
+	value.d[8] = 10;
+
+	pvalue = bpf_map_lookup_elem(&htab, &key);
+	if (!pvalue)
+		bpf_map_update_elem(&htab, &key, &value, 0);
+	bpf_map_delete_elem(&htab, &key);
+
+	return 0;
+}
+