
[bpf-next,v3,0/4] bpf: track find_equal_scalars history on per-instruction level

Message ID: 20240718202357.1746514-1-eddyz87@gmail.com

Eduard Zingerman July 18, 2024, 8:23 p.m. UTC
This is a fix for a precision tracking bug reported in [0].
It supersedes my previous attempt to fix a similar issue in commit [1].
Here is a minimized test case from [0]:

    0:  call bpf_get_prandom_u32;
    1:  r7 = r0;
    2:  r8 = r0;
    3:  call bpf_get_prandom_u32;
    4:  if r0 > 1 goto +0;
    /* --- checkpoint #1: r7.id=1, r8.id=1 --- */
    5:  if r8 >= r0 goto 9f;
    6:  r8 += r8;
    /* --- checkpoint #2: r7.id=1, r8.id=0 --- */
    7:  if r7 == 0 goto 9f;
    8:  r0 /= 0;
    /* --- checkpoint #3 --- */
    9:  r0 = 42;
    10: exit;

Without this fix, the verifier incorrectly assumes that the
instruction at label (8) is unreachable. The issue is caused by a
failure to infer a precision mark for r0 at checkpoint #1:
- the first verification path is:
  - (0-4): r0 range [0,1];
  - (5): r8 range [0,0], propagated to r7;
  - (6): r8.id is reset;
  - (7): jump is predicted to happen;
  - (9-10): safe exit.
- when the jump at (7) is predicted, mark_chain_precision() for r7 is
  called and backtrack_insn() proceeds as follows:
  - at (7) r7 is marked as precise;
  - at (5) r8 is not currently tracked and thus r0 is not marked;
  - at (4-5) boundary logic from [1] is triggered and r7,r8 are marked
    as precise;
  - => r0 precision mark is missed.
- when the second branch of (4) is considered, the verifier prunes the
  state because r0 is not marked as precise in the visited state.

Basically, the backtracking logic fails to notice that at (5)
range information is gained for both r7 and r8, and thus both
r8 and r0 have to be marked as precise.
This happens because [1] can only account for such range
transfers at parent/child state boundaries.
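The range transfer at (5) can be modeled as follows. This is a
simplified sketch, not the verifier's actual code: a scalar is reduced
to an id plus a umin/umax pair, and registers are linked purely by id
(structure and field names are illustrative):

```c
#include <stdint.h>

/* Simplified model of sync_linked_regs() (formerly find_equal_scalars()):
 * once a conditional jump narrows the range of one scalar, copy the
 * narrowed range to every register sharing its id. */
struct scalar_sketch {
	uint32_t id;           /* 0 means "not linked to anything" */
	uint64_t umin, umax;   /* known unsigned range */
};

static void sync_linked_regs_sketch(struct scalar_sketch *regs, int nregs,
				    const struct scalar_sketch *known)
{
	for (int i = 0; i < nregs; i++) {
		if (known->id && regs[i].id == known->id) {
			regs[i].umin = known->umin;
			regs[i].umax = known->umax;
		}
	}
}
```

In the test case above, the fall-through branch of (5) gives r8 the
range [0,0]; since r7 shares r8's id, it receives the same range. This
is why precision of r7 at (7) should imply precision of r8 and, through
the comparison at (5), of r0.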

The solution suggested by Andrii Nakryiko in [0] is to use jump
history to remember which registers gained range as a result of
find_equal_scalars() [renamed to sync_linked_regs()] and to use
this information in backtrack_insn(), which is what this patch-set
does.
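Conceptually, the backtracking side can be sketched like this. It is a
simplified model: the linked-register set is shown as a plain bitmask
of register numbers rather than the packed encoding actually stored in
jump history:

```c
#include <stdint.h>

/* Sketch of the idea behind bt_sync_linked_regs(): each jump history
 * entry remembers which registers gained range via sync_linked_regs()
 * at that instruction.  While backtracking, if any register from that
 * set is currently tracked for precision, the whole set becomes
 * tracked, so none of the linked sources lose their precision mark. */
static uint64_t bt_sync_linked_regs_sketch(uint64_t precise_mask,
					   uint64_t linked_regs)
{
	if (precise_mask & linked_regs)
		precise_mask |= linked_regs;
	return precise_mask;
}
```

For the test case above: when backtracking reaches (5) with r7 tracked,
r8 (linked to r7) becomes tracked as well, and the usual conditional
jump backtracking then marks r0 as the other operand of the comparison.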

The patch-set uses a u64 value as a vector of 10-bit entries that
identify registers gaining range in find_equal_scalars().
This allows for a maximum of 6 entries.
To check whether such capacity is sufficient, I've instrumented the
kernel to track a histogram of the maximal number of registers that
gain range in find_equal_scalars() per program verification [2].
Measurements done for verifier selftests and Cilium bpf object files
from [3] show that the number of such registers is *always* <= 4 and
in 98% of cases it is <= 2.
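To illustrate the capacity math (6 entries * 10 bits = 60 bits, leaving
4 bits for a counter), here is one plausible way to pack such a vector.
The layout, field widths, and names are hypothetical; the kernel's
actual encoding may differ:

```c
#include <stdint.h>

#define ENTRY_BITS  10
#define ENTRY_MASK  ((1u << ENTRY_BITS) - 1)
#define MAX_ENTRIES 6
#define CNT_BITS    4
#define CNT_MASK    ((1u << CNT_BITS) - 1)

/* Hypothetical per-register entry: 3 + 1 + 6 = 10 bits total. */
struct linked_reg_sketch {
	uint8_t frameno;  /* 3 bits: call frame index */
	uint8_t is_reg;   /* 1 bit: register vs. stack spill */
	uint8_t off;      /* 6 bits: register number or spill slot */
};

/* Serialize up to MAX_ENTRIES entries into a u64: count in the low
 * 4 bits, then one 10-bit field per entry. */
static uint64_t linked_regs_pack_sketch(const struct linked_reg_sketch *e,
					int cnt)
{
	uint64_t v = (uint64_t)(cnt & CNT_MASK);

	for (int i = 0; i < cnt; i++) {
		uint64_t entry = ((uint64_t)e[i].frameno << 7) |
				 ((uint64_t)e[i].is_reg  << 6) |
				 e[i].off;
		v |= entry << (CNT_BITS + i * ENTRY_BITS);
	}
	return v;
}

/* Deserialize; returns the number of entries decoded. */
static int linked_regs_unpack_sketch(uint64_t v, struct linked_reg_sketch *e)
{
	int cnt = v & CNT_MASK;

	for (int i = 0; i < cnt; i++) {
		uint64_t entry = (v >> (CNT_BITS + i * ENTRY_BITS)) & ENTRY_MASK;

		e[i].frameno = (uint8_t)(entry >> 7);
		e[i].is_reg  = (uint8_t)((entry >> 6) & 1);
		e[i].off     = (uint8_t)(entry & 0x3f);
	}
	return cnt;
}
```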

When tested on a subset of selftests identified by
selftests/bpf/veristat.cfg and on Cilium bpf object files from [3],
this patch-set has minimal verification performance impact:

File                      Program                   Insns   (DIFF)  States (DIFF)
------------------------  ------------------------  --------------  -------------
bpf_host.o                tail_handle_nat_fwd_ipv4    -75 (-0.61%)    -3 (-0.39%)
pyperf600_nounroll.bpf.o  on_event                  +1673 (+0.33%)    +3 (+0.01%)


[0] https://lore.kernel.org/bpf/CAEf4BzZ0xidVCqB47XnkXcNhkPWF6_nTV7yt+_Lf0kcFEut2Mg@mail.gmail.com/
[1] commit 904e6ddf4133 ("bpf: Use scalar ids in mark_chain_precision()")
[2] https://github.com/eddyz87/bpf/tree/find-equal-scalars-in-jump-history-with-stats
[3] https://github.com/anakryiko/cilium

Changes:
- v2 -> v3:
  A number of stylistic changes suggested by Andrii:
  - renamings:
    - struct reg_or_spill   -> struct linked_reg;
    - find_equal_scalars()  -> collect_linked_regs();
    - copy_known_reg()      -> sync_linked_regs();
  - collect_linked_regs() now returns a linked regs set of
    size 2 or larger;
  - dropped usage of bit fields in struct linked_reg;
  - added a patch changing references to find_equal_scalars() in
    selftests comments.
- v1 -> v2:
  - patch "bpf: replace env->cur_hist_ent with a getter function" is
    dropped (Andrii);
  - added structure linked_regs and helper functions to [de]serialize
    a u64 value as such a structure (Andrii);
  - bt_set_equal_scalars() renamed to bt_sync_linked_regs(), moved to
    the start and end of backtrack_insn() in order to untie linked
    register logic from conditional jump backtracking.
    Andrii requested a more radical change of moving linked register
    processing to bt_set_xxx() functions; I did an experiment in this
    direction:
    https://github.com/eddyz87/bpf/tree/find-equal-scalars-in-jump-history--linked-regs-in-bt-set-reg
    but the end result of the experiment seems much uglier than the
    version presented in v2.

Revisions:
- v1: https://lore.kernel.org/bpf/20240222005005.31784-1-eddyz87@gmail.com/
- v2: https://lore.kernel.org/bpf/20240705205851.2635794-1-eddyz87@gmail.com/

Eduard Zingerman (4):
  bpf: track equal scalars history on per-instruction level
  bpf: remove mark_precise_scalar_ids()
  selftests/bpf: tests for per-insn sync_linked_regs() precision
    tracking
  selftests/bpf: update comments find_equal_scalars->sync_linked_regs

 include/linux/bpf_verifier.h                  |   4 +
 kernel/bpf/verifier.c                         | 363 +++++++++++-------
 .../selftests/bpf/progs/verifier_scalar_ids.c | 256 ++++++++----
 .../selftests/bpf/progs/verifier_spill_fill.c |  16 +-
 .../bpf/progs/verifier_subprog_precision.c    |   2 +-
 .../testing/selftests/bpf/verifier/precise.c  |  28 +-
 6 files changed, 434 insertions(+), 235 deletions(-)

Comments

patchwork-bot+netdevbpf@kernel.org July 19, 2024, 11:30 p.m. UTC | #1
Hello:

This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:

On Thu, 18 Jul 2024 13:23:52 -0700 you wrote:
> This is a fix for precision tracking bug reported in [0].
> It supersedes my previous attempt to fix similar issue in commit [1].
> Here is a minimized test case from [0]:
> 
>     0:  call bpf_get_prandom_u32;
>     1:  r7 = r0;
>     2:  r8 = r0;
>     3:  call bpf_get_prandom_u32;
>     4:  if r0 > 1 goto +0;
>     /* --- checkpoint #1: r7.id=1, r8.id=1 --- */
>     5:  if r8 >= r0 goto 9f;
>     6:  r8 += r8;
>     /* --- checkpoint #2: r7.id=1, r8.id=0 --- */
>     7:  if r7 == 0 goto 9f;
>     8:  r0 /= 0;
>     /* --- checkpoint #3 --- */
>     9:  r0 = 42;
>     10: exit;
> 
> [...]

Here is the summary with links:
  - [bpf-next,v3,1/4] bpf: track equal scalars history on per-instruction level
    https://git.kernel.org/bpf/bpf-next/c/2b7350d7ca65
  - [bpf-next,v3,2/4] bpf: remove mark_precise_scalar_ids()
    https://git.kernel.org/bpf/bpf-next/c/cb0f94d85874
  - [bpf-next,v3,3/4] selftests/bpf: tests for per-insn sync_linked_regs() precision tracking
    https://git.kernel.org/bpf/bpf-next/c/c0087d59e504
  - [bpf-next,v3,4/4] selftests/bpf: update comments find_equal_scalars->sync_linked_regs
    https://git.kernel.org/bpf/bpf-next/c/f081cd17c0bc

You are awesome, thank you!