
[bpf-next] bpf: ensure precise is reset to false in __mark_reg_const_zero()

Message ID 20231215235822.908223-1-andrii@kernel.org (mailing list archive)
State Superseded
Delegated to: BPF
Series [bpf-next] bpf: ensure precise is reset to false in __mark_reg_const_zero()

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next
netdev/ynl success SINGLE THREAD; Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1127 this patch: 1127
netdev/cc_maintainers warning 12 maintainers not CCed: kpsingh@kernel.org sdf@google.com yonghong.song@linux.dev martin.lau@linux.dev mykolal@fb.com song@kernel.org haoluo@google.com eddyz87@gmail.com jolsa@kernel.org linux-kselftest@vger.kernel.org shuah@kernel.org john.fastabend@gmail.com
netdev/build_clang success Errors and warnings before: 1143 this patch: 1143
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1154 this patch: 1154
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-13 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-18 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-42 success Logs for x86_64-llvm-18 / veristat

Commit Message

Andrii Nakryiko Dec. 15, 2023, 11:58 p.m. UTC
It is safe to always start with an imprecise SCALAR_VALUE register.
Previously __mark_reg_const_zero() relied on the caller to reset the
precise mark, but that is error-prone and we have already missed it in
a few places. So instead make __mark_reg_const_zero() always reset
precision, as that is a safe default for SCALAR_VALUE. The explanation
is basically the same as for why we reset (or rather don't set)
precision in the current state. If necessary, precision propagation
will set it to precise correctly.

As such, also remove the big comment about forward precision
propagation in mark_reg_stack_read() and avoid unnecessarily setting
precision to true after reading from a STACK_ZERO stack slot. Again,
precision propagation will handle this correctly if that SCALAR_VALUE
register ever needs to be precise.

Reported-by: Maxim Mikityanskiy <maxtram95@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/verifier.c                            | 16 +++-------------
 .../selftests/bpf/progs/verifier_spill_fill.c    | 10 ++++++++--
 2 files changed, 11 insertions(+), 15 deletions(-)
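
To make the error-prone contract concrete, here is a rough sketch of the
caller-side pattern before and after this change (illustrative only; actual
call sites differ in detail):

	/* before: __mark_reg_const_zero() leaves reg->precise alone, so every
	 * caller has to remember to reset it for the freshly created scalar
	 */
	__mark_reg_const_zero(&state->regs[dst_regno]);
	state->regs[dst_regno].precise = false;	/* easy to forget */

	/* after this patch: the helper resets precision itself; precision
	 * backtracking will re-mark the register precise later if it is ever
	 * used in a context that requires it (e.g. pointer arithmetic)
	 */
	__mark_reg_const_zero(&state->regs[dst_regno]);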

Comments

Yonghong Song Dec. 16, 2023, 2:44 a.m. UTC | #1
On 12/15/23 3:58 PM, Andrii Nakryiko wrote:
> It is safe to always start with imprecise SCALAR_VALUE register.
> Previously __mark_reg_const_zero() relied on caller to reset precise
> mark, but it's very error prone and we already missed it in a few
> places. So instead make __mark_reg_const_zero() reset precision always,
> as it's a safe default for SCALAR_VALUE. Explanation is basically the
> same as for why we are resetting (or rather not setting) precision in
> current state. If necessary, precision propagation will set it to
> precise correctly.
>
> As such, also remove a big comment about forward precision propagation
> in mark_reg_stack_read() and avoid unnecessarily setting precision to
> true after reading from STACK_ZERO stack. Again, precision propagation
> will correctly handle this, if that SCALAR_VALUE register will ever be
> needed to be precise.
>
> Reported-by: Maxim Mikityanskiy <maxtram95@gmail.com>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Yonghong Song <yonghong.song@linux.dev>
Maxim Mikityanskiy Dec. 16, 2023, 3:13 p.m. UTC | #2
On Fri, 15 Dec 2023 at 15:58:22 -0800, Andrii Nakryiko wrote:
> It is safe to always start with imprecise SCALAR_VALUE register.
> Previously __mark_reg_const_zero() relied on caller to reset precise
> mark, but it's very error prone and we already missed it in a few
> places. So instead make __mark_reg_const_zero() reset precision always,
> as it's a safe default for SCALAR_VALUE. Explanation is basically the
> same as for why we are resetting (or rather not setting) precision in
> current state. If necessary, precision propagation will set it to
> precise correctly.
> 
> As such, also remove a big comment about forward precision propagation
> in mark_reg_stack_read() and avoid unnecessarily setting precision to
> true after reading from STACK_ZERO stack. Again, precision propagation
> will correctly handle this, if that SCALAR_VALUE register will ever be
> needed to be precise.
> 
> Reported-by: Maxim Mikityanskiy <maxtram95@gmail.com>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
>  kernel/bpf/verifier.c                            | 16 +++-------------
>  .../selftests/bpf/progs/verifier_spill_fill.c    | 10 ++++++++--
>  2 files changed, 11 insertions(+), 15 deletions(-)

Thanks for the prompt fix!

Acked-by: Maxim Mikityanskiy <maxtram95@gmail.com>
Daniel Borkmann Dec. 18, 2023, 10:46 a.m. UTC | #3
On 12/16/23 12:58 AM, Andrii Nakryiko wrote:
> It is safe to always start with imprecise SCALAR_VALUE register.
> Previously __mark_reg_const_zero() relied on caller to reset precise
> mark, but it's very error prone and we already missed it in a few
> places. So instead make __mark_reg_const_zero() reset precision always,
> as it's a safe default for SCALAR_VALUE. Explanation is basically the
> same as for why we are resetting (or rather not setting) precision in
> current state. If necessary, precision propagation will set it to
> precise correctly.
> 
> As such, also remove a big comment about forward precision propagation
> in mark_reg_stack_read() and avoid unnecessarily setting precision to
> true after reading from STACK_ZERO stack. Again, precision propagation
> will correctly handle this, if that SCALAR_VALUE register will ever be
> needed to be precise.
> 
> Reported-by: Maxim Mikityanskiy <maxtram95@gmail.com>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
>   kernel/bpf/verifier.c                            | 16 +++-------------
>   .../selftests/bpf/progs/verifier_spill_fill.c    | 10 ++++++++--
>   2 files changed, 11 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 1863826a4ac3..3009d1faec86 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1781,6 +1781,7 @@ static void __mark_reg_const_zero(struct bpf_reg_state *reg)
>   {
>   	__mark_reg_known(reg, 0);
>   	reg->type = SCALAR_VALUE;
> +	reg->precise = false; /* all scalars are assumed imprecise initially */

Could you elaborate on why it is safe to set it to false instead of using:

   reg->precise = !env->bpf_capable;

For !cap_bpf we typically always set precise requirement to true, see also
__mark_reg_unknown().

>   }
>   
>   static void mark_reg_known_zero(struct bpf_verifier_env *env,
> @@ -4706,21 +4707,10 @@ static void mark_reg_stack_read(struct bpf_verifier_env *env,
>   		zeros++;
>   	}
>   	if (zeros == max_off - min_off) {
> -		/* any access_size read into register is zero extended,
> -		 * so the whole register == const_zero
> +		/* Any access_size read into register is zero extended,
> +		 * so the whole register == const_zero.
>   		 */
>   		__mark_reg_const_zero(&state->regs[dst_regno]);
> -		/* backtracking doesn't support STACK_ZERO yet,
> -		 * so mark it precise here, so that later
> -		 * backtracking can stop here.
> -		 * Backtracking may not need this if this register
> -		 * doesn't participate in pointer adjustment.
> -		 * Forward propagation of precise flag is not
> -		 * necessary either. This mark is only to stop
> -		 * backtracking. Any register that contributed
> -		 * to const 0 was marked precise before spill.
> -		 */
> -		state->regs[dst_regno].precise = true;
>   	} else {
>   		/* have read misc data from the stack */
>   		mark_reg_unknown(env, state->regs, dst_regno);
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index 508f5d6c7347..39fe3372e0e0 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -499,8 +499,14 @@ __success
>   __msg("2: (7a) *(u64 *)(r10 -8) = 0          ; R10=fp0 fp-8_w=00000000")
>   /* but fp-16 is spilled IMPRECISE zero const reg */
>   __msg("4: (7b) *(u64 *)(r10 -16) = r0        ; R0_w=0 R10=fp0 fp-16_w=0")
> -/* and now check that precision propagation works even for such tricky case */
> -__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=P0 R10=fp0 fp-16_w=0")
> +/* validate that assigning R2 from STACK_ZERO doesn't mark register
> + * precise immediately; if necessary, it will be marked precise later
> + */
> +__msg("6: (71) r2 = *(u8 *)(r10 -1)          ; R2_w=0 R10=fp0 fp-8_w=00000000")
> +/* similarly, when R2 is assigned from spilled register, it is initially
> + * imprecise, but will be marked precise later once it is used in precise context
> + */
> +__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=0 R10=fp0 fp-16_w=0")
>   __msg("11: (0f) r1 += r2")
>   __msg("mark_precise: frame0: last_idx 11 first_idx 0 subseq_idx -1")
>   __msg("mark_precise: frame0: regs=r2 stack= before 10: (71) r2 = *(u8 *)(r10 -9)")
>
Andrii Nakryiko Dec. 18, 2023, 5:18 p.m. UTC | #4
On Mon, Dec 18, 2023 at 2:46 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 12/16/23 12:58 AM, Andrii Nakryiko wrote:
> > It is safe to always start with imprecise SCALAR_VALUE register.
> > Previously __mark_reg_const_zero() relied on caller to reset precise
> > mark, but it's very error prone and we already missed it in a few
> > places. So instead make __mark_reg_const_zero() reset precision always,
> > as it's a safe default for SCALAR_VALUE. Explanation is basically the
> > same as for why we are resetting (or rather not setting) precision in
> > current state. If necessary, precision propagation will set it to
> > precise correctly.
> >
> > As such, also remove a big comment about forward precision propagation
> > in mark_reg_stack_read() and avoid unnecessarily setting precision to
> > true after reading from STACK_ZERO stack. Again, precision propagation
> > will correctly handle this, if that SCALAR_VALUE register will ever be
> > needed to be precise.
> >
> > Reported-by: Maxim Mikityanskiy <maxtram95@gmail.com>
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> >   kernel/bpf/verifier.c                            | 16 +++-------------
> >   .../selftests/bpf/progs/verifier_spill_fill.c    | 10 ++++++++--
> >   2 files changed, 11 insertions(+), 15 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 1863826a4ac3..3009d1faec86 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -1781,6 +1781,7 @@ static void __mark_reg_const_zero(struct bpf_reg_state *reg)
> >   {
> >       __mark_reg_known(reg, 0);
> >       reg->type = SCALAR_VALUE;
> > +     reg->precise = false; /* all scalars are assumed imprecise initially */
>
> Could you elaborate on why it is safe to set it to false instead of using:
>
>    reg->precise = !env->bpf_capable;
>
> For !cap_bpf we typically always set precise requirement to true, see also
> __mark_reg_unknown().

Oh, you are right, I forgot about unpriv. I'll send v2 taking unpriv
into account, thanks!

Let's also try this new fancy thing:

pw-bot: cr

>
> >   }
> >
> >   static void mark_reg_known_zero(struct bpf_verifier_env *env,
> > @@ -4706,21 +4707,10 @@ static void mark_reg_stack_read(struct bpf_verifier_env *env,
> >               zeros++;
> >       }
> >       if (zeros == max_off - min_off) {
> > -             /* any access_size read into register is zero extended,
> > -              * so the whole register == const_zero
> > +             /* Any access_size read into register is zero extended,
> > +              * so the whole register == const_zero.
> >                */
> >               __mark_reg_const_zero(&state->regs[dst_regno]);
> > -             /* backtracking doesn't support STACK_ZERO yet,
> > -              * so mark it precise here, so that later
> > -              * backtracking can stop here.
> > -              * Backtracking may not need this if this register
> > -              * doesn't participate in pointer adjustment.
> > -              * Forward propagation of precise flag is not
> > -              * necessary either. This mark is only to stop
> > -              * backtracking. Any register that contributed
> > -              * to const 0 was marked precise before spill.
> > -              */
> > -             state->regs[dst_regno].precise = true;
> >       } else {
> >               /* have read misc data from the stack */
> >               mark_reg_unknown(env, state->regs, dst_regno);
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > index 508f5d6c7347..39fe3372e0e0 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > @@ -499,8 +499,14 @@ __success
> >   __msg("2: (7a) *(u64 *)(r10 -8) = 0          ; R10=fp0 fp-8_w=00000000")
> >   /* but fp-16 is spilled IMPRECISE zero const reg */
> >   __msg("4: (7b) *(u64 *)(r10 -16) = r0        ; R0_w=0 R10=fp0 fp-16_w=0")
> > -/* and now check that precision propagation works even for such tricky case */
> > -__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=P0 R10=fp0 fp-16_w=0")
> > +/* validate that assigning R2 from STACK_ZERO doesn't mark register
> > + * precise immediately; if necessary, it will be marked precise later
> > + */
> > +__msg("6: (71) r2 = *(u8 *)(r10 -1)          ; R2_w=0 R10=fp0 fp-8_w=00000000")
> > +/* similarly, when R2 is assigned from spilled register, it is initially
> > + * imprecise, but will be marked precise later once it is used in precise context
> > + */
> > +__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=0 R10=fp0 fp-16_w=0")
> >   __msg("11: (0f) r1 += r2")
> >   __msg("mark_precise: frame0: last_idx 11 first_idx 0 subseq_idx -1")
> >   __msg("mark_precise: frame0: regs=r2 stack= before 10: (71) r2 = *(u8 *)(r10 -9)")
> >
>

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1863826a4ac3..3009d1faec86 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1781,6 +1781,7 @@  static void __mark_reg_const_zero(struct bpf_reg_state *reg)
 {
 	__mark_reg_known(reg, 0);
 	reg->type = SCALAR_VALUE;
+	reg->precise = false; /* all scalars are assumed imprecise initially */
 }
 
 static void mark_reg_known_zero(struct bpf_verifier_env *env,
@@ -4706,21 +4707,10 @@  static void mark_reg_stack_read(struct bpf_verifier_env *env,
 		zeros++;
 	}
 	if (zeros == max_off - min_off) {
-		/* any access_size read into register is zero extended,
-		 * so the whole register == const_zero
+		/* Any access_size read into register is zero extended,
+		 * so the whole register == const_zero.
 		 */
 		__mark_reg_const_zero(&state->regs[dst_regno]);
-		/* backtracking doesn't support STACK_ZERO yet,
-		 * so mark it precise here, so that later
-		 * backtracking can stop here.
-		 * Backtracking may not need this if this register
-		 * doesn't participate in pointer adjustment.
-		 * Forward propagation of precise flag is not
-		 * necessary either. This mark is only to stop
-		 * backtracking. Any register that contributed
-		 * to const 0 was marked precise before spill.
-		 */
-		state->regs[dst_regno].precise = true;
 	} else {
 		/* have read misc data from the stack */
 		mark_reg_unknown(env, state->regs, dst_regno);
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 508f5d6c7347..39fe3372e0e0 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -499,8 +499,14 @@  __success
 __msg("2: (7a) *(u64 *)(r10 -8) = 0          ; R10=fp0 fp-8_w=00000000")
 /* but fp-16 is spilled IMPRECISE zero const reg */
 __msg("4: (7b) *(u64 *)(r10 -16) = r0        ; R0_w=0 R10=fp0 fp-16_w=0")
-/* and now check that precision propagation works even for such tricky case */
-__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=P0 R10=fp0 fp-16_w=0")
+/* validate that assigning R2 from STACK_ZERO doesn't mark register
+ * precise immediately; if necessary, it will be marked precise later
+ */
+__msg("6: (71) r2 = *(u8 *)(r10 -1)          ; R2_w=0 R10=fp0 fp-8_w=00000000")
+/* similarly, when R2 is assigned from spilled register, it is initially
+ * imprecise, but will be marked precise later once it is used in precise context
+ */
+__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=0 R10=fp0 fp-16_w=0")
 __msg("11: (0f) r1 += r2")
 __msg("mark_precise: frame0: last_idx 11 first_idx 0 subseq_idx -1")
 __msg("mark_precise: frame0: regs=r2 stack= before 10: (71) r2 = *(u8 *)(r10 -9)")