[bpf-next,v1,2/2,no_merge] selftests/bpf: Benchmark runtime performance with private stack

Message ID 20240716011652.811985-1-yonghong.song@linux.dev (mailing list archive)
State Superseded
Delegated to: BPF
Series [bpf-next,v1,1/2] bpf: Support private stack for bpf progs

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-42 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-17 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 fail Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_maps, false, 360) / test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-PR fail PR summary
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1052 this patch: 1052
netdev/build_tools success Errors and warnings before: 1 this patch: 1
netdev/cc_maintainers warning 11 maintainers not CCed: kpsingh@kernel.org shuah@kernel.org haoluo@google.com john.fastabend@gmail.com jolsa@kernel.org linux-kselftest@vger.kernel.org martin.lau@linux.dev mykolal@fb.com song@kernel.org eddyz87@gmail.com sdf@fomichev.me
netdev/build_clang success Errors and warnings before: 1128 this patch: 1128
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success net selftest script(s) already in Makefile
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 7783 this patch: 7783
netdev/checkpatch fail ERROR: code indent should use tabs where possible ERROR: space required before the open parenthesis '(' WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst WARNING: added, moved or deleted file(s), does MAINTAINERS need updating? WARNING: externs should be avoided in .c files WARNING: line length of 100 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 88 exceeds 80 columns WARNING: line length of 89 exceeds 80 columns WARNING: line length of 94 exceeds 80 columns WARNING: please, no spaces at the start of a line
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 6 this patch: 6
netdev/source_inline success Was 0 now: 0

Commit Message

Yonghong Song July 16, 2024, 1:16 a.m. UTC
This patch intends to show some benchmark results comparing a bpf
program with vs. without the private stack. The patch is not intended
to land since it hacks an existing kernel interface in order to
allow a proper comparison.
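
Concretely, the hack adds a BPF_F_DISABLE_PRIVATE_STACK prog_flags bit and
the benchmark flips it per program before load. A condensed sketch of the
userspace side, taken from common_setup() in bench_private_stack.c below
(error handling trimmed):

  struct private_stack *skel = private_stack__open();
  __u32 flags;

  if (disable_private_stack) {
          /* opt this prog out of the private stack via the new flag */
          flags = bpf_program__flags(skel->progs.stack0);
          bpf_program__set_flags(skel->progs.stack0,
                                 flags | BPF_F_DISABLE_PRIVATE_STACK);
  }
  /* number of map lookup/update/delete iterations per prog run */
  skel->rodata->batch_iters = args.nr_batch_iters;
  if (private_stack__load(skel))
          exit(1);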

The following is the jited code for the bpf prog in progs/private_stack.c
without a private stack. The number of batch iterations is 4096.

0:  f3 0f 1e fa             endbr64
4:  0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
9:  66 90                   xchg   ax,ax
b:  55                      push   rbp
c:  48 89 e5                mov    rbp,rsp
f:  f3 0f 1e fa             endbr64
13: 48 81 ec 60 00 00 00    sub    rsp,0x60
1a: 53                      push   rbx
1b: 41 55                   push   r13
1d: 48 bf 00 50 5d 00 00    movabs rdi,0xffffc900005d5000
24: c9 ff ff
27: 48 8b 77 00             mov    rsi,QWORD PTR [rdi+0x0]
2b: 48 83 c6 01             add    rsi,0x1
2f: 48 89 77 00             mov    QWORD PTR [rdi+0x0],rsi
33: 31 ff                   xor    edi,edi
35: 48 89 7d f8             mov    QWORD PTR [rbp-0x8],rdi
39: be 05 00 00 00          mov    esi,0x5
3e: 89 75 f8                mov    DWORD PTR [rbp-0x8],esi
41: 48 89 7d f0             mov    QWORD PTR [rbp-0x10],rdi
45: 48 89 7d e8             mov    QWORD PTR [rbp-0x18],rdi
49: 48 89 7d e0             mov    QWORD PTR [rbp-0x20],rdi
4d: 48 89 7d d8             mov    QWORD PTR [rbp-0x28],rdi
51: 48 89 7d d0             mov    QWORD PTR [rbp-0x30],rdi
55: 48 89 7d c0             mov    QWORD PTR [rbp-0x40],rdi
59: 48 89 7d c8             mov    QWORD PTR [rbp-0x38],rdi
5d: 48 89 7d b8             mov    QWORD PTR [rbp-0x48],rdi
61: 48 89 7d b0             mov    QWORD PTR [rbp-0x50],rdi
65: 48 89 7d a8             mov    QWORD PTR [rbp-0x58],rdi
69: 48 89 7d a0             mov    QWORD PTR [rbp-0x60],rdi
6d: bf 0a 00 00 00          mov    edi,0xa
72: 89 7d c0                mov    DWORD PTR [rbp-0x40],edi
75: 48 bb 00 80 5d 00 00    movabs rbx,0xffffc900005d8000
7c: c9 ff ff
7f: 8b 7b 00                mov    edi,DWORD PTR [rbx+0x0]
82: 83 ff 01                cmp    edi,0x1
85: 7c 27                   jl     0xae
87: 45 31 ed                xor    r13d,r13d
8a: eb 29                   jmp    0xb5
8c: 48 89 ee                mov    rsi,rbp
8f: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
93: 48 bf 00 38 f9 09 81    movabs rdi,0xffff888109f93800
9a: 88 ff ff
9d: e8 e2 e1 5e e1          call   0xffffffffe15ee284
a2: 41 83 c5 01             add    r13d,0x1
a6: 8b 7b 00                mov    edi,DWORD PTR [rbx+0x0]
a9: 41 39 fd                cmp    r13d,edi
ac: 7c 07                   jl     0xb5
ae: 31 c0                   xor    eax,eax
b0: 41 5d                   pop    r13
b2: 5b                      pop    rbx
b3: c9                      leave
b4: c3                      ret
b5: 48 89 ee                mov    rsi,rbp
b8: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
bc: 48 bf 00 38 f9 09 81    movabs rdi,0xffff888109f93800
c3: 88 ff ff
c6: e8 49 1f 5f e1          call   0xffffffffe15f2014
cb: 48 85 c0                test   rax,rax
ce: 74 04                   je     0xd4
d0: 48 83 c0 60             add    rax,0x60
d4: 48 85 c0                test   rax,rax
d7: 75 b3                   jne    0x8c
d9: 48 89 ee                mov    rsi,rbp
dc: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
e0: 48 89 ea                mov    rdx,rbp
e3: 48 83 c2 a0             add    rdx,0xffffffffffffffa0
e7: 48 bf 00 38 f9 09 81    movabs rdi,0xffff888109f93800
ee: 88 ff ff
f1: 31 c9                   xor    ecx,ecx
f3: e8 dc d8 5e e1          call   0xffffffffe15ed9d4
f8: eb 92                   jmp    0x8c

The following is the corresponding jited code with private stack:

0:  f3 0f 1e fa             endbr64
4:  0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
9:  66 90                   xchg   ax,ax
b:  55                      push   rbp
c:  48 89 e5                mov    rbp,rsp
f:  f3 0f 1e fa             endbr64
13: 53                      push   rbx
14: 41 55                   push   r13
16: 49 b9 c0 a8 c1 08 7e    movabs r9,0x607e08c1a8c0
1d: 60 00 00
20: 65 4c 03 0c 25 00 1a    add    r9,QWORD PTR gs:0x21a00
27: 02 00
29: 48 bf 00 80 61 00 00    movabs rdi,0xffffc90000618000
30: c9 ff ff
33: 48 8b 77 00             mov    rsi,QWORD PTR [rdi+0x0]
37: 48 83 c6 01             add    rsi,0x1
3b: 48 89 77 00             mov    QWORD PTR [rdi+0x0],rsi
3f: 31 ff                   xor    edi,edi
41: 49 89 79 f8             mov    QWORD PTR [r9-0x8],rdi
45: be 05 00 00 00          mov    esi,0x5
4a: 41 89 71 f8             mov    DWORD PTR [r9-0x8],esi
4e: 49 89 79 f0             mov    QWORD PTR [r9-0x10],rdi
52: 49 89 79 e8             mov    QWORD PTR [r9-0x18],rdi
56: 49 89 79 e0             mov    QWORD PTR [r9-0x20],rdi
5a: 49 89 79 d8             mov    QWORD PTR [r9-0x28],rdi
5e: 49 89 79 d0             mov    QWORD PTR [r9-0x30],rdi
62: 49 89 79 c0             mov    QWORD PTR [r9-0x40],rdi
66: 49 89 79 c8             mov    QWORD PTR [r9-0x38],rdi
6a: 49 89 79 b8             mov    QWORD PTR [r9-0x48],rdi
6e: 49 89 79 b0             mov    QWORD PTR [r9-0x50],rdi
72: 49 89 79 a8             mov    QWORD PTR [r9-0x58],rdi
76: 49 89 79 a0             mov    QWORD PTR [r9-0x60],rdi
7a: bf 0a 00 00 00          mov    edi,0xa
7f: 41 89 79 c0             mov    DWORD PTR [r9-0x40],edi
83: 48 bb 00 b0 61 00 00    movabs rbx,0xffffc9000061b000
8a: c9 ff ff
8d: 8b 7b 00                mov    edi,DWORD PTR [rbx+0x0]
90: 83 ff 01                cmp    edi,0x1
93: 7c 2b                   jl     0xc0
95: 45 31 ed                xor    r13d,r13d
98: eb 2d                   jmp    0xc7
9a: 4c 89 ce                mov    rsi,r9
9d: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
a1: 48 bf 00 40 f9 09 81    movabs rdi,0xffff888109f94000
a8: 88 ff ff
ab: 41 51                   push   r9
ad: e8 fa e1 5e e1          call   0xffffffffe15ee2ac
b2: 41 59                   pop    r9
b4: 41 83 c5 01             add    r13d,0x1
b8: 8b 7b 00                mov    edi,DWORD PTR [rbx+0x0]
bb: 41 39 fd                cmp    r13d,edi
be: 7c 07                   jl     0xc7
c0: 31 c0                   xor    eax,eax
c2: 41 5d                   pop    r13
c4: 5b                      pop    rbx
c5: c9                      leave
c6: c3                      ret
c7: 4c 89 ce                mov    rsi,r9
ca: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
ce: 48 bf 00 40 f9 09 81    movabs rdi,0xffff888109f94000
d5: 88 ff ff
d8: 41 51                   push   r9
da: e8 5d 1f 5f e1          call   0xffffffffe15f203c
df: 41 59                   pop    r9
e1: 48 85 c0                test   rax,rax
e4: 74 04                   je     0xea
e6: 48 83 c0 60             add    rax,0x60
ea: 48 85 c0                test   rax,rax
ed: 75 ab                   jne    0x9a
ef: 4c 89 ce                mov    rsi,r9
f2: 48 83 c6 d0             add    rsi,0xffffffffffffffd0
f6: 4c 89 ca                mov    rdx,r9
f9: 48 83 c2 a0             add    rdx,0xffffffffffffffa0
fd: 48 bf 00 40 f9 09 81    movabs rdi,0xffff888109f94000
104:    88 ff ff
107:    31 c9                   xor    ecx,ecx
109:    41 51                   push   r9
10b:    e8 ec d8 5e e1          call   0xffffffffe15ed9fc
110:    41 59                   pop    r9
112:    eb 86                   jmp    0x9a

It is clear that the main overhead is the push/pop of r9 around the
three helper calls, since those calls are in a loop. The initial r9
assignment
  16: 49 b9 c0 a8 c1 08 7e    movabs r9,0x607e08c1a8c0
  1d: 60 00 00
  20: 65 4c 03 0c 25 00 1a    add    r9,QWORD PTR gs:0x21a00
  27: 02 00
is a one-time overhead per prog run.

I did some benchmarking on an Intel box (Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz)
which has 20 cores and 80 cpus. Note that the number of hits is in units of
loop iterations. More loop iterations per prog run means more time is spent
in the bpf program.

I did four runs of tests. [no-]private-stack-[num-of-loop-iterations-per-prog]
indicates whether the run is with or without the private stack and the number
of loop iterations per program run. The number of hits equals the total number
of loop iterations executed in the bpf prog.
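For example, with --nr-batch-iters=64 a reported throughput of 5.0M hits/s
corresponds to roughly 5.0M / 64, i.e. about 78K program invocations per second.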

The following are two of the benchmark results:
  $ ./benchs/run_bench_private_stack.sh
  no-private-stack-1:  2.771 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  private-stack-1:     2.734 ± 0.031M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-8:  4.613 ± 0.003M/s (drops 0.000 ± 0.000M/s)
  private-stack-8:     4.611 ± 0.013M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-64:  5.062 ± 0.006M/s (drops 0.000 ± 0.000M/s)
  private-stack-64:    5.024 ± 0.004M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-512:  5.127 ± 0.005M/s (drops 0.000 ± 0.000M/s)
  private-stack-512:   5.120 ± 0.009M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-2048:  5.132 ± 0.011M/s (drops 0.000 ± 0.000M/s)
  private-stack-2048:  5.131 ± 0.008M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-4096:  5.116 ± 0.023M/s (drops 0.000 ± 0.000M/s)
  private-stack-4096:  5.123 ± 0.012M/s (drops 0.000 ± 0.000M/s)
  $ ./benchs/run_bench_private_stack.sh
  no-private-stack-1:  2.769 ± 0.005M/s (drops 0.000 ± 0.000M/s)
  private-stack-1:     2.740 ± 0.004M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-8:  4.617 ± 0.005M/s (drops 0.000 ± 0.000M/s)
  private-stack-8:     4.578 ± 0.018M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-64:  5.059 ± 0.009M/s (drops 0.000 ± 0.000M/s)
  private-stack-64:    5.051 ± 0.007M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-512:  5.125 ± 0.007M/s (drops 0.000 ± 0.000M/s)
  private-stack-512:   5.116 ± 0.016M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-2048:  5.132 ± 0.008M/s (drops 0.000 ± 0.000M/s)
  private-stack-2048:  5.135 ± 0.013M/s (drops 0.000 ± 0.000M/s)
  no-private-stack-4096:  5.142 ± 0.013M/s (drops 0.000 ± 0.000M/s)
  private-stack-4096:  5.109 ± 0.023M/s (drops 0.000 ± 0.000M/s)

The other two runs are similar: for batch sizes 2048/4096,
private-stack might show better results than no-private-stack due to noise.
But in general, no-private-stack is slightly better than private-stack,
esp. when the number of loop iterations in the bpf prog is large.

I also collected some perf results. With one loop iteration
per program run, I got
  $ perf record -- ./bench -w3 -d10 -a --nr-batch-iters=1 no-private-stack
    18.48%  bench    [kernel.vmlinux]                               [k] htab_map_hash
    13.04%  bench    [kernel.vmlinux]                               [k] _raw_spin_lock
     7.00%  bench    libc.so.6                                      [.] syscall
     5.91%  bench    [kernel.vmlinux]                               [k] htab_map_update_elem
     5.68%  bench    [kernel.vmlinux]                               [k] entry_SYSRETQ_unsafe_stack
     4.55%  bench    [kernel.vmlinux]                               [k] perf_syscall_enter
     4.37%  bench    [kernel.vmlinux]                               [k] htab_map_delete_elem
     2.89%  bench    bpf_prog_a8e2493fe867b453_stack0               [k] bpf_prog_a8e2493fe867b453_stack0
     2.83%  bench    [kernel.vmlinux]                               [k] memcpy_orig
     2.60%  bench    [kernel.vmlinux]                               [k] __htab_map_lookup_elem
     2.53%  bench    [kernel.vmlinux]                               [k] alloc_htab_elem
     2.52%  bench    [kernel.vmlinux]                               [k] trace_call_bpf
     2.37%  bench    [kernel.vmlinux]                               [k] entry_SYSCALL_64_after_hwframe
     2.29%  bench    [kernel.vmlinux]                               [k] do_syscall_64

I only showed functions with cpu consumption >= 2%. You can see the 'syscall'
overhead itself is 7% and the bpf program run didn't take the majority of the
time. The perf profile for the private stack case is very similar to the above.

With 4096 loop iterations per program run, I got
  $ perf record -- ./bench -w3 -d10 -a --nr-batch-iters=4096 no-private-stack
    27.89%  bench    [kernel.vmlinux]                  [k] htab_map_hash
    21.55%  bench    [kernel.vmlinux]                  [k] _raw_spin_lock
    11.51%  bench    [kernel.vmlinux]                  [k] htab_map_delete_elem
    10.26%  bench    [kernel.vmlinux]                  [k] htab_map_update_elem
     4.85%  bench    [kernel.vmlinux]                  [k] __pcpu_freelist_push
     4.34%  bench    [kernel.vmlinux]                  [k] alloc_htab_elem
     3.50%  bench    [kernel.vmlinux]                  [k] memcpy_orig
     3.22%  bench    [kernel.vmlinux]                  [k] __pcpu_freelist_pop
     2.68%  bench    [kernel.vmlinux]                  [k] bcmp
     2.52%  bench    [kernel.vmlinux]                  [k] __htab_map_lookup_elem
     ...
     0.01%  bench    libc.so.6                         [.] syscall

The 'syscall' overhead is 0.01% now and the majority of cpu time is spent in
the bpf program. Again, the perf profile for the private stack case is very
similar to the above.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/bpf.h                           |   3 +-
 include/uapi/linux/bpf.h                      |   3 +
 kernel/bpf/core.c                             |   2 +-
 kernel/bpf/syscall.c                          |   4 +-
 tools/include/uapi/linux/bpf.h                |   3 +
 tools/testing/selftests/bpf/Makefile          |   2 +
 tools/testing/selftests/bpf/bench.c           |   6 +
 .../bpf/benchs/bench_private_stack.c          | 144 ++++++++++++++++++
 .../bpf/benchs/run_bench_private_stack.sh     |  11 ++
 .../selftests/bpf/progs/private_stack.c       |  44 ++++++
 10 files changed, 219 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/benchs/bench_private_stack.c
 create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh
 create mode 100644 tools/testing/selftests/bpf/progs/private_stack.c

Comments

Alexei Starovoitov July 16, 2024, 1:35 a.m. UTC | #1
On Mon, Jul 15, 2024 at 6:17 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> With 4096 loop iterations per program run, I got
>   $ perf record -- ./bench -w3 -d10 -a --nr-batch-iters=4096 no-private-stack
>     27.89%  bench    [kernel.vmlinux]                  [k] htab_map_hash
>     21.55%  bench    [kernel.vmlinux]                  [k] _raw_spin_lock
>     11.51%  bench    [kernel.vmlinux]                  [k] htab_map_delete_elem
>     10.26%  bench    [kernel.vmlinux]                  [k] htab_map_update_elem
>      4.85%  bench    [kernel.vmlinux]                  [k] __pcpu_freelist_push
>      4.34%  bench    [kernel.vmlinux]                  [k] alloc_htab_elem
>      3.50%  bench    [kernel.vmlinux]                  [k] memcpy_orig
>      3.22%  bench    [kernel.vmlinux]                  [k] __pcpu_freelist_pop
>      2.68%  bench    [kernel.vmlinux]                  [k] bcmp
>      2.52%  bench    [kernel.vmlinux]                  [k] __htab_map_lookup_elem


so the prog itself is not even in the top 10, which means
that the test doesn't measure anything meaningful about the private
stack itself.
It just benchmarks the hash map, and the overhead of the extra push/pop is invisible.

> +SEC("tp/syscalls/sys_enter_getpgid")
> +int stack0(void *ctx)
> +{
> +       struct data_t key = {}, value = {};
> +       struct data_t *pvalue;
> +       int i;
> +
> +       hits++;
> +       key.d[10] = 5;
> +       value.d[8] = 10;
> +
> +       for (i = 0; i < batch_iters; i++) {
> +               pvalue = bpf_map_lookup_elem(&htab, &key);
> +               if (!pvalue)
> +                       bpf_map_update_elem(&htab, &key, &value, 0);
> +               bpf_map_delete_elem(&htab, &key);
> +       }

Instead of calling helpers that do a lot of work, the test should
call global subprograms or noinline static functions that are nops.
Only then might we see the overhead of the push/pop of r9.

Once you do that you'll see that the
+SEC("tp/syscalls/sys_enter_getpgid")
approach has too much overhead
(you don't see it right now since the hashmap dominates).
Pls use the approach I mentioned earlier: fentry into
a helper, with another prog calling that helper in a for() loop.

pw-bot: cr
Yonghong Song July 16, 2024, 5:45 p.m. UTC | #2
On 7/15/24 6:35 PM, Alexei Starovoitov wrote:
> On Mon, Jul 15, 2024 at 6:17 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>> With 4096 loop iterations per program run, I got
>>    $ perf record -- ./bench -w3 -d10 -a --nr-batch-iters=4096 no-private-stack
>>      27.89%  bench    [kernel.vmlinux]                  [k] htab_map_hash
>>      21.55%  bench    [kernel.vmlinux]                  [k] _raw_spin_lock
>>      11.51%  bench    [kernel.vmlinux]                  [k] htab_map_delete_elem
>>      10.26%  bench    [kernel.vmlinux]                  [k] htab_map_update_elem
>>       4.85%  bench    [kernel.vmlinux]                  [k] __pcpu_freelist_push
>>       4.34%  bench    [kernel.vmlinux]                  [k] alloc_htab_elem
>>       3.50%  bench    [kernel.vmlinux]                  [k] memcpy_orig
>>       3.22%  bench    [kernel.vmlinux]                  [k] __pcpu_freelist_pop
>>       2.68%  bench    [kernel.vmlinux]                  [k] bcmp
>>       2.52%  bench    [kernel.vmlinux]                  [k] __htab_map_lookup_elem
>
> so the prog itself is not even in the top 10, which means
> that the test doesn't measure anything meaningful about the private
> stack itself.
> It just benchmarks the hash map, and the overhead of the extra push/pop is invisible.
>
>> +SEC("tp/syscalls/sys_enter_getpgid")
>> +int stack0(void *ctx)
>> +{
>> +       struct data_t key = {}, value = {};
>> +       struct data_t *pvalue;
>> +       int i;
>> +
>> +       hits++;
>> +       key.d[10] = 5;
>> +       value.d[8] = 10;
>> +
>> +       for (i = 0; i < batch_iters; i++) {
>> +               pvalue = bpf_map_lookup_elem(&htab, &key);
>> +               if (!pvalue)
>> +                       bpf_map_update_elem(&htab, &key, &value, 0);
>> +               bpf_map_delete_elem(&htab, &key);
>> +       }
> Instead of calling helpers that do a lot of work, the test should
> call global subprograms or noinline static functions that are nops.
> Only then might we see the overhead of the push/pop of r9.
>
> Once you do that you'll see that the
> +SEC("tp/syscalls/sys_enter_getpgid")
> approach has too much overhead
> (you don't see it right now since the hashmap dominates).
> Pls use the approach I mentioned earlier: fentry into
> a helper, with another prog calling that helper in a for() loop.

Thanks for the suggestion. Will use an fentry program with empty functions
to measure the maximum worst-case overhead.
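
Something along the lines of the following untested sketch (identifiers
are illustrative, not final; it reuses the hits/batch_iters globals from
progs/private_stack.c and assumes the helper's kernel symbol can be
fentry-attached):

/* nop subprog: the call is the only work, so the push/pop of r9 around
 * it is the measurable overhead
 */
static __noinline int stack_subprog(char *buf)
{
	return buf[0];
}

/* measured prog: fentry on the helper the trigger prog calls below.
 * The buffer only exists to push stack_depth above 64 bytes so the
 * private stack is actually used.
 */
SEC("fentry/bpf_get_numa_node_id")
int measured(void *ctx)
{
	char buf[128] = {};

	hits++;
	return stack_subprog(buf);
}

/* trigger prog: calls the cheap helper in a loop, so the per-trigger
 * syscall overhead is amortized over batch_iters fentry invocations
 */
SEC("tp/syscalls/sys_enter_getpgid")
int trigger(void *ctx)
{
	int i;

	for (i = 0; i < batch_iters; i++)
		bpf_get_numa_node_id();
	return 0;
}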

>
> pw-bot: cr
diff mbox series

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 19a3f5355363..2f8708465c19 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1551,7 +1551,8 @@  struct bpf_prog {
 				call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
 				call_get_func_ip:1, /* Do we call get_func_ip() */
 				tstamp_type_access:1, /* Accessed __sk_buff->tstamp_type */
-				sleepable:1;	/* BPF program is sleepable */
+				sleepable:1,	/* BPF program is sleepable */
+				disable_private_stack:1; /* Disable private stack */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 35bcf52dbc65..98af8ea8a4d6 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1409,6 +1409,9 @@  enum {
 
 /* Do not translate kernel bpf_arena pointers to user pointers */
 	BPF_F_NO_USER_CONV	= (1U << 18),
+
+/* Disable private stack */
+	BPF_F_DISABLE_PRIVATE_STACK	= (1U << 19),
 };
 
 /* Flags for BPF_PROG_QUERY. */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index f69eb0c5fe03..297e76a8f463 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2815,7 +2815,7 @@  EXPORT_SYMBOL_GPL(bpf_prog_free);
 
 bool bpf_enable_private_stack(struct bpf_prog *prog)
 {
-	if (prog->aux->stack_depth <= 64)
+	if (prog->disable_private_stack || prog->aux->stack_depth <= 64)
 		return false;
 
 	switch (prog->aux->prog->type) {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 89162ddb4747..bb2b632c9c2c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2715,7 +2715,8 @@  static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 				 BPF_F_XDP_HAS_FRAGS |
 				 BPF_F_XDP_DEV_BOUND_ONLY |
 				 BPF_F_TEST_REG_INVARIANTS |
-				 BPF_F_TOKEN_FD))
+				 BPF_F_TOKEN_FD |
+				 BPF_F_DISABLE_PRIVATE_STACK))
 		return -EINVAL;
 
 	bpf_prog_load_fixup_attach_type(attr);
@@ -2828,6 +2829,7 @@  static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
+	prog->disable_private_stack = !!(attr->prog_flags & BPF_F_DISABLE_PRIVATE_STACK);
 	prog->aux->attach_btf = attach_btf;
 	prog->aux->attach_btf_id = attr->attach_btf_id;
 	prog->aux->dst_prog = dst_prog;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 35bcf52dbc65..98af8ea8a4d6 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1409,6 +1409,9 @@  enum {
 
 /* Do not translate kernel bpf_arena pointers to user pointers */
 	BPF_F_NO_USER_CONV	= (1U << 18),
+
+/* Disable private stack */
+	BPF_F_DISABLE_PRIVATE_STACK	= (1U << 19),
 };
 
 /* Flags for BPF_PROG_QUERY. */
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index dd49c1d23a60..44a6a43da71c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -733,6 +733,7 @@  $(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.ske
 $(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h
 $(OUTPUT)/bench_htab_mem.o: $(OUTPUT)/htab_mem_bench.skel.h
 $(OUTPUT)/bench_bpf_crypto.o: $(OUTPUT)/crypto_bench.skel.h
+$(OUTPUT)/bench_private_stack.o: $(OUTPUT)/private_stack.skel.h
 $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
 $(OUTPUT)/bench: LDLIBS += -lm
 $(OUTPUT)/bench: $(OUTPUT)/bench.o \
@@ -753,6 +754,7 @@  $(OUTPUT)/bench: $(OUTPUT)/bench.o \
 		 $(OUTPUT)/bench_local_storage_create.o \
 		 $(OUTPUT)/bench_htab_mem.o \
 		 $(OUTPUT)/bench_bpf_crypto.o \
+		 $(OUTPUT)/bench_private_stack.o \
 		 #
 	$(call msg,BINARY,,$@)
 	$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index 627b74ae041b..4f4867cd80f9 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -282,6 +282,7 @@  extern struct argp bench_local_storage_create_argp;
 extern struct argp bench_htab_mem_argp;
 extern struct argp bench_trigger_batch_argp;
 extern struct argp bench_crypto_argp;
+extern struct argp bench_private_stack_argp;
 
 static const struct argp_child bench_parsers[] = {
 	{ &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 },
@@ -296,6 +297,7 @@  static const struct argp_child bench_parsers[] = {
 	{ &bench_htab_mem_argp, 0, "hash map memory benchmark", 0 },
 	{ &bench_trigger_batch_argp, 0, "BPF triggering benchmark", 0 },
 	{ &bench_crypto_argp, 0, "bpf crypto benchmark", 0 },
+	{ &bench_private_stack_argp, 0, "bpf private stack benchmark", 0 },
 	{},
 };
 
@@ -542,6 +544,8 @@  extern const struct bench bench_local_storage_create;
 extern const struct bench bench_htab_mem;
 extern const struct bench bench_crypto_encrypt;
 extern const struct bench bench_crypto_decrypt;
+extern const struct bench bench_no_private_stack;
+extern const struct bench bench_private_stack;
 
 static const struct bench *benchs[] = {
 	&bench_count_global,
@@ -596,6 +600,8 @@  static const struct bench *benchs[] = {
 	&bench_htab_mem,
 	&bench_crypto_encrypt,
 	&bench_crypto_decrypt,
+	&bench_no_private_stack,
+	&bench_private_stack,
 };
 
 static void find_benchmark(void)
diff --git a/tools/testing/selftests/bpf/benchs/bench_private_stack.c b/tools/testing/selftests/bpf/benchs/bench_private_stack.c
new file mode 100644
index 000000000000..9a1fec9d1096
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/bench_private_stack.c
@@ -0,0 +1,144 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#include <argp.h>
+#include "bench.h"
+#include "private_stack.skel.h"
+
+static struct ctx {
+	struct private_stack *skel;
+} ctx;
+
+static struct {
+	long nr_batch_iters;
+} args = {
+	.nr_batch_iters = 0,
+};
+
+enum {
+	ARG_NR_BATCH_ITERS = 3000,
+};
+
+static const struct argp_option opts[] = {
+        { "nr-batch-iters", ARG_NR_BATCH_ITERS, "NR_BATCH_ITERS",
+		0, "nr batch iters" },
+        {},
+};
+
+static error_t private_stack_parse_arg(int key, char *arg, struct argp_state *state)
+{
+	long ret;
+
+        switch (key) {
+        case ARG_NR_BATCH_ITERS:
+                ret = strtoul(arg, NULL, 10);
+		if (ret < 1)
+			argp_usage(state);
+		args.nr_batch_iters = ret;
+                break;
+        default:
+                return ARGP_ERR_UNKNOWN;
+        }
+
+        return 0;
+}
+
+const struct argp bench_private_stack_argp = {
+        .options = opts,
+        .parser = private_stack_parse_arg,
+};
+
+static void private_stack_validate(void)
+{
+	if (env.consumer_cnt != 0) {
+		fprintf(stderr,
+			"The private stack benchmarks do not support consumer\n");
+		exit(1);
+	}
+}
+
+static void common_setup(bool disable_private_stack)
+{
+	struct private_stack *skel;
+	struct bpf_link *link;
+	__u32 old_flags;
+	int err;
+
+	skel = private_stack__open();
+	if(!skel) {
+		fprintf(stderr, "failed to open skeleton\n");
+		exit(1);
+	}
+	ctx.skel = skel;
+
+	if (disable_private_stack) {
+		old_flags = bpf_program__flags(skel->progs.stack0);
+		bpf_program__set_flags(skel->progs.stack0, old_flags | BPF_F_DISABLE_PRIVATE_STACK);
+	}
+
+	skel->rodata->batch_iters = args.nr_batch_iters;
+
+	err = private_stack__load(skel);
+	if (err) {
+		fprintf(stderr, "failed to load program\n");
+		exit(1);
+	}
+
+	link = bpf_program__attach(skel->progs.stack0);
+	if (!link) {
+		fprintf(stderr, "failed to attach program\n");
+		exit(1);
+	}
+}
+
+static void no_private_stack_setup(void)
+{
+	common_setup(true);
+}
+
+static void private_stack_setup(void)
+{
+	common_setup(false);
+}
+
+static void private_stack_measure(struct bench_res *res)
+{
+	struct private_stack *skel = ctx.skel;
+	unsigned long total_hits = 0;
+	static unsigned long last_hits;
+
+	total_hits = skel->bss->hits * skel->rodata->batch_iters;
+	res->hits = total_hits - last_hits;
+	res->drops = 0;
+	res->false_hits = 0;
+	last_hits = total_hits;
+}
+
+static void *private_stack_producer(void *unused)
+{
+	while (true)
+		syscall(__NR_getpgid);
+	return NULL;
+}
+
+const struct bench bench_no_private_stack = {
+	.name = "no-private-stack",
+	.argp = &bench_private_stack_argp,
+	.validate = private_stack_validate,
+	.setup = no_private_stack_setup,
+	.producer_thread = private_stack_producer,
+	.measure = private_stack_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
+
+const struct bench bench_private_stack = {
+	.name = "private-stack",
+	.argp = &bench_private_stack_argp,
+	.validate = private_stack_validate,
+	.setup = private_stack_setup,
+	.producer_thread = private_stack_producer,
+	.measure = private_stack_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
diff --git a/tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh b/tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh
new file mode 100755
index 000000000000..692a5f9676a7
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/run_bench_private_stack.sh
@@ -0,0 +1,11 @@ 
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source ./benchs/run_common.sh
+
+set -eufo pipefail
+
+for b in 1 8 64 512 2048 4096; do
+    summarize "no-private-stack-${b}: " "$($RUN_BENCH --nr-batch-iters=${b} no-private-stack)"
+    summarize "private-stack-${b}: " "$($RUN_BENCH --nr-batch-iters=${b} private-stack)"
+done
diff --git a/tools/testing/selftests/bpf/progs/private_stack.c b/tools/testing/selftests/bpf/progs/private_stack.c
new file mode 100644
index 000000000000..ba2fa67306c7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/private_stack.c
@@ -0,0 +1,44 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#include <linux/types.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct data_t {
+        unsigned int d[12];
+};
+
+struct {
+        __uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 10);
+	__type(key, struct data_t);
+	__type(value, struct data_t);
+} htab SEC(".maps");
+
+unsigned long hits = 0;
+const volatile int batch_iters = 0;
+
+SEC("tp/syscalls/sys_enter_getpgid")
+int stack0(void *ctx)
+{
+	struct data_t key = {}, value = {};
+	struct data_t *pvalue;
+	int i;
+
+	hits++;
+	key.d[10] = 5;
+	value.d[8] = 10;
+
+	for (i = 0; i < batch_iters; i++) {
+		pvalue = bpf_map_lookup_elem(&htab, &key);
+		if (!pvalue)
+			bpf_map_update_elem(&htab, &key, &value, 0);
+		bpf_map_delete_elem(&htab, &key);
+	}
+
+	return 0;
+}
+