[bpf-next,3/7] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier

Message ID 20230330055615.89935-1-yhs@fb.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf: Improve verifier for cond_op and spilled loop index variables

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 28 this patch: 28
netdev/cc_maintainers warning 11 maintainers not CCed: eddyz87@gmail.com mykolal@fb.com song@kernel.org shuah@kernel.org sdf@google.com haoluo@google.com john.fastabend@gmail.com kpsingh@kernel.org jolsa@kernel.org martin.lau@linux.dev linux-kselftest@vger.kernel.org
netdev/build_clang success Errors and warnings before: 18 this patch: 18
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 28 this patch: 28
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 26 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-10 fail Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_verifier on s390x with gcc

Commit Message

Yonghong Song March 30, 2023, 5:56 a.m. UTC
Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
For example,
  ...
  10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
  11: (b7) r2 = 0                       ; R2_w=0
  12: (2d) if r2 > r1 goto pc+2
  13: (b7) r0 = 0
  14: (95) exit
  15: (65) if r1 s> 0x1 goto pc+3
  16: (0f) r0 += r1
  ...
At insn 12, the verifier decides that both the true and false branches are
possible, but actually only the false branch is possible: r2 is the constant 0,
so the unsigned comparison 'r2 > r1' can never be true.

Currently, the verifier already supports the pattern '<non_const> <cond_op> <const>'.
Add support for the pattern '<const> <cond_op> <non_const>' in a similar way.

Also fix the selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10',
which is affected by this change.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/verifier.c                                | 12 ++++++++++++
 .../bpf/progs/verifier_bounds_mix_sign_unsign.c      |  2 +-
 2 files changed, 13 insertions(+), 1 deletion(-)

Comments

Dave Marchevsky March 30, 2023, 10:54 p.m. UTC | #1
On 3/30/23 1:56 AM, Yonghong Song wrote:
> Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
> For example,
>   ...
>   10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
>   11: (b7) r2 = 0                       ; R2_w=0
>   12: (2d) if r2 > r1 goto pc+2
>   13: (b7) r0 = 0
>   14: (95) exit
>   15: (65) if r1 s> 0x1 goto pc+3
>   16: (0f) r0 += r1
>   ...
> At insn 12, verifier decides both true and false branch are possible, but
> actually only false branch is possible.
> 
> Currently, the verifier already supports patterns '<non_const> <cond_op> <const>.
> Add support for patterns '<const> <cond_op> <non_const>' in a similar way.
> 
> Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10'
> due to this change.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  kernel/bpf/verifier.c                                | 12 ++++++++++++
>  .../bpf/progs/verifier_bounds_mix_sign_unsign.c      |  2 +-
>  2 files changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 90bb6d25bc9c..d070943a8ba1 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -13302,6 +13302,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
>  				       src_reg->var_off.value,
>  				       opcode,
>  				       is_jmp32);
> +	} else if (dst_reg->type == SCALAR_VALUE &&
> +		   is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
> +		pred = is_branch_taken(src_reg,
> +				       tnum_subreg(dst_reg->var_off).value,
> +				       flip_opcode(opcode),
> +				       is_jmp32);
> +	} else if (dst_reg->type == SCALAR_VALUE &&
> +		   !is_jmp32 && tnum_is_const(dst_reg->var_off)) {
> +		pred = is_branch_taken(src_reg,
> +				       dst_reg->var_off.value,
> +				       flip_opcode(opcode),
> +				       is_jmp32);
>  	} else if (reg_is_pkt_pointer_any(dst_reg) &&
>  		   reg_is_pkt_pointer_any(src_reg) &&
>  		   !is_jmp32) {

Looking at the two SCALAR_VALUE 'else if's above these added lines, the
additions make sense. Having four separate 'else if' checks for essentially
the same logic makes this hard to read, though; maybe it's an opportunity to
refactor a bit.

While trying to make sense of the logic here I attempted to simplify with
a helper:

@@ -13234,6 +13234,21 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
        }));
 }

+static int maybe_const_operand_branch(struct tnum maybe_const_op,
+                                     struct bpf_reg_state *other_op_reg,
+                                     u8 opcode, bool is_jmp32)
+{
+       struct tnum jmp_tnum = is_jmp32 ? tnum_subreg(maybe_const_op) :
+                                         maybe_const_op;
+       if (!tnum_is_const(jmp_tnum))
+               return -1;
+
+       return is_branch_taken(other_op_reg,
+                              jmp_tnum.value,
+                              opcode,
+                              is_jmp32);
+}
+
 static int check_cond_jmp_op(struct bpf_verifier_env *env,
                             struct bpf_insn *insn, int *insn_idx)
 {
@@ -13287,18 +13302,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,

        if (BPF_SRC(insn->code) == BPF_K) {
                pred = is_branch_taken(dst_reg, insn->imm, opcode, is_jmp32);
-       } else if (src_reg->type == SCALAR_VALUE &&
-                  is_jmp32 && tnum_is_const(tnum_subreg(src_reg->var_off))) {
-               pred = is_branch_taken(dst_reg,
-                                      tnum_subreg(src_reg->var_off).value,
-                                      opcode,
-                                      is_jmp32);
-       } else if (src_reg->type == SCALAR_VALUE &&
-                  !is_jmp32 && tnum_is_const(src_reg->var_off)) {
-               pred = is_branch_taken(dst_reg,
-                                      src_reg->var_off.value,
-                                      opcode,
-                                      is_jmp32);
+       } else if (src_reg->type == SCALAR_VALUE) {
+               pred = maybe_const_operand_branch(src_reg->var_off, dst_reg,
+                                                 opcode, is_jmp32);
+       } else if (dst_reg->type == SCALAR_VALUE) {
+               pred = maybe_const_operand_branch(dst_reg->var_off, src_reg,
+                                                 flip_opcode(opcode), is_jmp32);
        } else if (reg_is_pkt_pointer_any(dst_reg) &&
                   reg_is_pkt_pointer_any(src_reg) &&
                   !is_jmp32) {


I think the resultant logic is the same as your patch, but it's easier to
understand, for me at least. Note that I didn't test the above.
Yonghong Song March 31, 2023, 3:26 p.m. UTC | #2
On 3/30/23 3:54 PM, Dave Marchevsky wrote:
> On 3/30/23 1:56 AM, Yonghong Song wrote:
>> Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
>> For example,
>>    ...
>>    10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
>>    11: (b7) r2 = 0                       ; R2_w=0
>>    12: (2d) if r2 > r1 goto pc+2
>>    13: (b7) r0 = 0
>>    14: (95) exit
>>    15: (65) if r1 s> 0x1 goto pc+3
>>    16: (0f) r0 += r1
>>    ...
>> At insn 12, verifier decides both true and false branch are possible, but
>> actually only false branch is possible.
>>
>> Currently, the verifier already supports patterns '<non_const> <cond_op> <const>.
>> Add support for patterns '<const> <cond_op> <non_const>' in a similar way.
>>
>> Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10'
>> due to this change.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   kernel/bpf/verifier.c                                | 12 ++++++++++++
>>   .../bpf/progs/verifier_bounds_mix_sign_unsign.c      |  2 +-
>>   2 files changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 90bb6d25bc9c..d070943a8ba1 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -13302,6 +13302,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
>>   				       src_reg->var_off.value,
>>   				       opcode,
>>   				       is_jmp32);
>> +	} else if (dst_reg->type == SCALAR_VALUE &&
>> +		   is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
>> +		pred = is_branch_taken(src_reg,
>> +				       tnum_subreg(dst_reg->var_off).value,
>> +				       flip_opcode(opcode),
>> +				       is_jmp32);
>> +	} else if (dst_reg->type == SCALAR_VALUE &&
>> +		   !is_jmp32 && tnum_is_const(dst_reg->var_off)) {
>> +		pred = is_branch_taken(src_reg,
>> +				       dst_reg->var_off.value,
>> +				       flip_opcode(opcode),
>> +				       is_jmp32);
>>   	} else if (reg_is_pkt_pointer_any(dst_reg) &&
>>   		   reg_is_pkt_pointer_any(src_reg) &&
>>   		   !is_jmp32) {
> 
> Looking at the two SCALAR_VALUE 'else if's above these added lines, these
> additions make sense. Having four separate 'else if' checks for essentially
> similar logic makes this hard to read, though, maybe it's an opportunity to
> refactor a bit.
> 
> While trying to make sense of the logic here I attempted to simplify with
> a helper:
> 
> @@ -13234,6 +13234,21 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
>          }));
>   }
> 
> +static int maybe_const_operand_branch(struct tnum maybe_const_op,
> +                                     struct bpf_reg_state *other_op_reg,
> +                                     u8 opcode, bool is_jmp32)
> +{
> +       struct tnum jmp_tnum = is_jmp32 ? tnum_subreg(maybe_const_op) :
> +                                         maybe_const_op;
> +       if (!tnum_is_const(jmp_tnum))
> +               return -1;
> +
> +       return is_branch_taken(other_op_reg,
> +                              jmp_tnum.value,
> +                              opcode,
> +                              is_jmp32);
> +}
> +
>   static int check_cond_jmp_op(struct bpf_verifier_env *env,
>                               struct bpf_insn *insn, int *insn_idx)
>   {
> @@ -13287,18 +13302,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
> 
>          if (BPF_SRC(insn->code) == BPF_K) {
>                  pred = is_branch_taken(dst_reg, insn->imm, opcode, is_jmp32);
> -       } else if (src_reg->type == SCALAR_VALUE &&
> -                  is_jmp32 && tnum_is_const(tnum_subreg(src_reg->var_off))) {
> -               pred = is_branch_taken(dst_reg,
> -                                      tnum_subreg(src_reg->var_off).value,
> -                                      opcode,
> -                                      is_jmp32);
> -       } else if (src_reg->type == SCALAR_VALUE &&
> -                  !is_jmp32 && tnum_is_const(src_reg->var_off)) {
> -               pred = is_branch_taken(dst_reg,
> -                                      src_reg->var_off.value,
> -                                      opcode,
> -                                      is_jmp32);
> +       } else if (src_reg->type == SCALAR_VALUE) {
> +               pred = maybe_const_operand_branch(src_reg->var_off, dst_reg,
> +                                                 opcode, is_jmp32);
> +       } else if (dst_reg->type == SCALAR_VALUE) {
> +               pred = maybe_const_operand_branch(dst_reg->var_off, src_reg,
> +                                                 flip_opcode(opcode), is_jmp32);
>          } else if (reg_is_pkt_pointer_any(dst_reg) &&
>                     reg_is_pkt_pointer_any(src_reg) &&
>                     !is_jmp32) {
> 
> 
> I think the resultant logic is the same as your patch, but it's easier to
> understand, for me at least. Note that I didn't test the above.

Thanks. Will do refactoring to simplify logic in the next revision.
Andrii Nakryiko April 4, 2023, 10:04 p.m. UTC | #3
On Thu, Mar 30, 2023 at 3:55 PM Dave Marchevsky <davemarchevsky@meta.com> wrote:
>
> On 3/30/23 1:56 AM, Yonghong Song wrote:
> > Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
> > For example,
> >   ...
> >   10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
> >   11: (b7) r2 = 0                       ; R2_w=0
> >   12: (2d) if r2 > r1 goto pc+2
> >   13: (b7) r0 = 0
> >   14: (95) exit
> >   15: (65) if r1 s> 0x1 goto pc+3
> >   16: (0f) r0 += r1
> >   ...
> > At insn 12, verifier decides both true and false branch are possible, but
> > actually only false branch is possible.
> >
> > Currently, the verifier already supports patterns '<non_const> <cond_op> <const>.
> > Add support for patterns '<const> <cond_op> <non_const>' in a similar way.
> >
> > Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10'
> > due to this change.
> >
> > Signed-off-by: Yonghong Song <yhs@fb.com>
> > ---
> >  kernel/bpf/verifier.c                                | 12 ++++++++++++
> >  .../bpf/progs/verifier_bounds_mix_sign_unsign.c      |  2 +-
> >  2 files changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 90bb6d25bc9c..d070943a8ba1 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -13302,6 +13302,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
> >                                      src_reg->var_off.value,
> >                                      opcode,
> >                                      is_jmp32);
> > +     } else if (dst_reg->type == SCALAR_VALUE &&
> > +                is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
> > +             pred = is_branch_taken(src_reg,
> > +                                    tnum_subreg(dst_reg->var_off).value,
> > +                                    flip_opcode(opcode),
> > +                                    is_jmp32);
> > +     } else if (dst_reg->type == SCALAR_VALUE &&
> > +                !is_jmp32 && tnum_is_const(dst_reg->var_off)) {
> > +             pred = is_branch_taken(src_reg,
> > +                                    dst_reg->var_off.value,
> > +                                    flip_opcode(opcode),
> > +                                    is_jmp32);
> >       } else if (reg_is_pkt_pointer_any(dst_reg) &&
> >                  reg_is_pkt_pointer_any(src_reg) &&
> >                  !is_jmp32) {
>
> Looking at the two SCALAR_VALUE 'else if's above these added lines, these
> additions make sense. Having four separate 'else if' checks for essentially
> similar logic makes this hard to read, though, maybe it's an opportunity to
> refactor a bit.
>
> While trying to make sense of the logic here I attempted to simplify with
> a helper:
>
> @@ -13234,6 +13234,21 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
>         }));
>  }
>
> +static int maybe_const_operand_branch(struct tnum maybe_const_op,
> +                                     struct bpf_reg_state *other_op_reg,
> +                                     u8 opcode, bool is_jmp32)
> +{
> +       struct tnum jmp_tnum = is_jmp32 ? tnum_subreg(maybe_const_op) :
> +                                         maybe_const_op;
> +       if (!tnum_is_const(jmp_tnum))
> +               return -1;
> +
> +       return is_branch_taken(other_op_reg,
> +                              jmp_tnum.value,
> +                              opcode,
> +                              is_jmp32);
> +}
> +
>  static int check_cond_jmp_op(struct bpf_verifier_env *env,
>                              struct bpf_insn *insn, int *insn_idx)
>  {
> @@ -13287,18 +13302,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
>
>         if (BPF_SRC(insn->code) == BPF_K) {
>                 pred = is_branch_taken(dst_reg, insn->imm, opcode, is_jmp32);
> -       } else if (src_reg->type == SCALAR_VALUE &&
> -                  is_jmp32 && tnum_is_const(tnum_subreg(src_reg->var_off))) {
> -               pred = is_branch_taken(dst_reg,
> -                                      tnum_subreg(src_reg->var_off).value,
> -                                      opcode,
> -                                      is_jmp32);
> -       } else if (src_reg->type == SCALAR_VALUE &&
> -                  !is_jmp32 && tnum_is_const(src_reg->var_off)) {
> -               pred = is_branch_taken(dst_reg,
> -                                      src_reg->var_off.value,
> -                                      opcode,
> -                                      is_jmp32);
> +       } else if (src_reg->type == SCALAR_VALUE) {
> +               pred = maybe_const_operand_branch(src_reg->var_off, dst_reg,
> +                                                 opcode, is_jmp32);
> +       } else if (dst_reg->type == SCALAR_VALUE) {
> +               pred = maybe_const_operand_branch(dst_reg->var_off, src_reg,
> +                                                 flip_opcode(opcode), is_jmp32);
>         } else if (reg_is_pkt_pointer_any(dst_reg) &&
>                    reg_is_pkt_pointer_any(src_reg) &&
>                    !is_jmp32) {
>
>
> I think the resultant logic is the same as your patch, but it's easier to
> understand, for me at least. Note that I didn't test the above.

Should we push it half a step further and have

if (src_reg->type == SCALAR_VALUE || dst_reg->type == SCALAR_VALUE)
  pred = is_branch_taken_regs(src_reg, dst_reg, opcode, is_jmp32)

That seems even clearer. All the tnum subreg, const vs non-const,
and dst/src flip handling can then live in one nicely isolated
place.
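For illustration, here is a standalone sketch of what such a unified helper
could look like, using simplified stand-in types and a two-opcode unsigned
subset rather than the kernel's real structs (hypothetical names, untested
against the verifier):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's register state -- illustration only. */
struct reg { uint64_t umin, umax; };	/* constant iff umin == umax */

enum { OP_JGT, OP_JLT };		/* tiny unsigned opcode subset */

static int flip_op(int op) { return op == OP_JGT ? OP_JLT : OP_JGT; }

static bool reg_is_const(const struct reg *r) { return r->umin == r->umax; }

/* 1: branch always taken, 0: never taken, -1: unknown */
static int branch_taken(const struct reg *r, uint64_t c, int op)
{
	switch (op) {
	case OP_JGT:			/* r > c */
		if (r->umin > c) return 1;
		if (r->umax <= c) return 0;
		break;
	case OP_JLT:			/* r < c */
		if (r->umax < c) return 1;
		if (r->umin >= c) return 0;
		break;
	}
	return -1;
}

/* One isolated place handles const-vs-non-const and the dst/src flip. */
static int branch_taken_regs(const struct reg *dst, const struct reg *src, int op)
{
	if (reg_is_const(src))
		return branch_taken(dst, src->umin, op);
	if (reg_is_const(dst))
		return branch_taken(src, dst->umin, flip_op(op));
	return -1;
}
```

With dst as the constant 0 and src a fully unknown scalar, `branch_taken_regs()`
reports the JGT branch as never taken, matching the insn 12 example from the
commit message.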
Yonghong Song April 6, 2023, 4:51 p.m. UTC | #4
On 4/4/23 3:04 PM, Andrii Nakryiko wrote:
> On Thu, Mar 30, 2023 at 3:55 PM Dave Marchevsky <davemarchevsky@meta.com> wrote:
>>
>> On 3/30/23 1:56 AM, Yonghong Song wrote:
>>> Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
>>> For example,
>>>    ...
>>>    10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
>>>    11: (b7) r2 = 0                       ; R2_w=0
>>>    12: (2d) if r2 > r1 goto pc+2
>>>    13: (b7) r0 = 0
>>>    14: (95) exit
>>>    15: (65) if r1 s> 0x1 goto pc+3
>>>    16: (0f) r0 += r1
>>>    ...
>>> At insn 12, verifier decides both true and false branch are possible, but
>>> actually only false branch is possible.
>>>
>>> Currently, the verifier already supports patterns '<non_const> <cond_op> <const>.
>>> Add support for patterns '<const> <cond_op> <non_const>' in a similar way.
>>>
>>> Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10'
>>> due to this change.
>>>
>>> Signed-off-by: Yonghong Song <yhs@fb.com>
>>> ---
>>>   kernel/bpf/verifier.c                                | 12 ++++++++++++
>>>   .../bpf/progs/verifier_bounds_mix_sign_unsign.c      |  2 +-
>>>   2 files changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 90bb6d25bc9c..d070943a8ba1 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -13302,6 +13302,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
>>>                                       src_reg->var_off.value,
>>>                                       opcode,
>>>                                       is_jmp32);
>>> +     } else if (dst_reg->type == SCALAR_VALUE &&
>>> +                is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
>>> +             pred = is_branch_taken(src_reg,
>>> +                                    tnum_subreg(dst_reg->var_off).value,
>>> +                                    flip_opcode(opcode),
>>> +                                    is_jmp32);
>>> +     } else if (dst_reg->type == SCALAR_VALUE &&
>>> +                !is_jmp32 && tnum_is_const(dst_reg->var_off)) {
>>> +             pred = is_branch_taken(src_reg,
>>> +                                    dst_reg->var_off.value,
>>> +                                    flip_opcode(opcode),
>>> +                                    is_jmp32);
>>>        } else if (reg_is_pkt_pointer_any(dst_reg) &&
>>>                   reg_is_pkt_pointer_any(src_reg) &&
>>>                   !is_jmp32) {
>>
>> Looking at the two SCALAR_VALUE 'else if's above these added lines, these
>> additions make sense. Having four separate 'else if' checks for essentially
>> similar logic makes this hard to read, though, maybe it's an opportunity to
>> refactor a bit.
>>
>> While trying to make sense of the logic here I attempted to simplify with
>> a helper:
>>
>> @@ -13234,6 +13234,21 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
>>          }));
>>   }
>>
>> +static int maybe_const_operand_branch(struct tnum maybe_const_op,
>> +                                     struct bpf_reg_state *other_op_reg,
>> +                                     u8 opcode, bool is_jmp32)
>> +{
>> +       struct tnum jmp_tnum = is_jmp32 ? tnum_subreg(maybe_const_op) :
>> +                                         maybe_const_op;
>> +       if (!tnum_is_const(jmp_tnum))
>> +               return -1;
>> +
>> +       return is_branch_taken(other_op_reg,
>> +                              jmp_tnum.value,
>> +                              opcode,
>> +                              is_jmp32);
>> +}
>> +
>>   static int check_cond_jmp_op(struct bpf_verifier_env *env,
>>                               struct bpf_insn *insn, int *insn_idx)
>>   {
>> @@ -13287,18 +13302,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
>>
>>          if (BPF_SRC(insn->code) == BPF_K) {
>>                  pred = is_branch_taken(dst_reg, insn->imm, opcode, is_jmp32);
>> -       } else if (src_reg->type == SCALAR_VALUE &&
>> -                  is_jmp32 && tnum_is_const(tnum_subreg(src_reg->var_off))) {
>> -               pred = is_branch_taken(dst_reg,
>> -                                      tnum_subreg(src_reg->var_off).value,
>> -                                      opcode,
>> -                                      is_jmp32);
>> -       } else if (src_reg->type == SCALAR_VALUE &&
>> -                  !is_jmp32 && tnum_is_const(src_reg->var_off)) {
>> -               pred = is_branch_taken(dst_reg,
>> -                                      src_reg->var_off.value,
>> -                                      opcode,
>> -                                      is_jmp32);
>> +       } else if (src_reg->type == SCALAR_VALUE) {
>> +               pred = maybe_const_operand_branch(src_reg->var_off, dst_reg,
>> +                                                 opcode, is_jmp32);
>> +       } else if (dst_reg->type == SCALAR_VALUE) {
>> +               pred = maybe_const_operand_branch(dst_reg->var_off, src_reg,
>> +                                                 flip_opcode(opcode), is_jmp32);
>>          } else if (reg_is_pkt_pointer_any(dst_reg) &&
>>                     reg_is_pkt_pointer_any(src_reg) &&
>>                     !is_jmp32) {
>>
>>
>> I think the resultant logic is the same as your patch, but it's easier to
>> understand, for me at least. Note that I didn't test the above.
> 
> should we push it half a step further and have
> 
> if (src_reg->type == SCALAR_VALUE || dst_reg->type == SCALAR_VALUE)
>    pred = is_branch_taken_regs(src_reg, dst_reg, opcode, is_jmp32)
> 
> seems even clearer like that. All the tnum subreg, const vs non-const,
> and dst/src flip can be handled internally in one nicely isolated
> place.

I kept my original logic; I think it is cleaner. In many cases,
both src_reg and dst_reg are scalar values. What we really want is to
test whether src_reg or dst_reg is a constant and act
accordingly.

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 90bb6d25bc9c..d070943a8ba1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13302,6 +13302,18 @@  static int check_cond_jmp_op(struct bpf_verifier_env *env,
 				       src_reg->var_off.value,
 				       opcode,
 				       is_jmp32);
+	} else if (dst_reg->type == SCALAR_VALUE &&
+		   is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
+		pred = is_branch_taken(src_reg,
+				       tnum_subreg(dst_reg->var_off).value,
+				       flip_opcode(opcode),
+				       is_jmp32);
+	} else if (dst_reg->type == SCALAR_VALUE &&
+		   !is_jmp32 && tnum_is_const(dst_reg->var_off)) {
+		pred = is_branch_taken(src_reg,
+				       dst_reg->var_off.value,
+				       flip_opcode(opcode),
+				       is_jmp32);
 	} else if (reg_is_pkt_pointer_any(dst_reg) &&
 		   reg_is_pkt_pointer_any(src_reg) &&
 		   !is_jmp32) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c
index 91a66357896a..4f40144748a5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c
@@ -354,7 +354,7 @@  __naked void signed_and_unsigned_variant_10(void)
 	call %[bpf_map_lookup_elem];			\
 	if r0 == 0 goto l0_%=;				\
 	r1 = *(u64*)(r10 - 16);				\
-	r2 = 0;						\
+	r2 = -1;						\
 	if r2 > r1 goto l1_%=;				\
 	r0 = 0;						\
 	exit;						\