
[bpf-next,08/15] bpf: Assign ID to scalars on spill

Message ID 20231220214013.3327288-9-maxtram95@gmail.com (mailing list archive)
State New
Series Improvements for tracking scalars in the BPF verifier

Commit Message

Maxim Mikityanskiy Dec. 20, 2023, 9:40 p.m. UTC
From: Maxim Mikityanskiy <maxim@isovalent.com>

Currently, when a bounded scalar register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this
register was MOVed before.

Assign an ID on spill if none is set, so that equal scalars can be
tracked if a register is spilled to the stack and filled into another
register.
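
For illustration, the pattern that becomes trackable looks like this
(a made-up instruction sequence, not taken from a real program):

    *(u64 *)(r10 -8) = r0    // spill: r0 is now assigned an ID, shared with fp-8
    if r0 > 0x100 goto l0    // r0's range is refined here...
    r2 = *(u64 *)(r10 -8)    // ...and the fill into r2 picks up the refined bounds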

One test is adjusted to reflect the change in register IDs.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
 kernel/bpf/verifier.c                                     | 8 +++++++-
 .../selftests/bpf/progs/verifier_direct_packet_access.c   | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

Comments

Alexei Starovoitov Dec. 25, 2023, 3:15 a.m. UTC | #1
On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> Currently, when a bounded scalar register is spilled to the stack, its
> ID is preserved, but only if it was already assigned, i.e. if this
> register was MOVed before.
>
> Assign an ID on spill if none is set, so that equal scalars can be
> tracked if a register is spilled to the stack and filled into another
> register.
>
> One test is adjusted to reflect the change in register IDs.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> ---
>  kernel/bpf/verifier.c                                     | 8 +++++++-
>  .../selftests/bpf/progs/verifier_direct_packet_access.c   | 2 +-
>  2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b757fdbbbdd2..caa768f1e369 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>
>         mark_stack_slot_scratched(env, spi);
>         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> +               bool reg_value_fits;
> +
> +               reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> +               /* Make sure that reg had an ID to build a relation on spill. */
> +               if (reg_value_fits)
> +                       assign_scalar_id_before_mov(env, reg);

Thanks.
I just debugged this issue as part of my bpf_cmp series.

llvm generated:

1093: (7b) *(u64 *)(r10 -96) = r0     ;
R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
; if (bpf_cmp(filepart_length, >, MAX_PATH))
1094: (25) if r0 > 0x100 goto pc+903          ;
R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0;
0x1ff))

the verifier refined the range of 'r0' here,
but the code below just reads the spilled value from the stack:

1116: (79) r1 = *(u64 *)(r10 -64)     ; R1_w=map_value
; payload += filepart_length;
1117: (79) r2 = *(u64 *)(r10 -96)     ;
R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
1118: (0f) r1 += r2 ;
R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)

And it later errors out with:
"R1 min value is negative, either use unsigned index or do a if (index
>=0) check."
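
In C terms, the failing pattern is roughly this (reconstructed from the
source-line annotations in the log above, so take the names as
approximate):

    if (bpf_cmp(filepart_length, >, MAX_PATH))  /* refines only the register copy */
        goto error;
    ...
    payload += filepart_length;  /* reloads the spill; the refined bounds are gone */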

This verifier improvement is certainly necessary.

Since you've analyzed this issue, did you figure out a workaround
for C code on existing and older kernels?
Maxim Mikityanskiy Dec. 25, 2023, 9:11 p.m. UTC | #2
On Sun, 24 Dec 2023 at 19:15:42 -0800, Alexei Starovoitov wrote:
> On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
> >
> > From: Maxim Mikityanskiy <maxim@isovalent.com>
> >
> > Currently, when a bounded scalar register is spilled to the stack, its
> > ID is preserved, but only if it was already assigned, i.e. if this
> > register was MOVed before.
> >
> > Assign an ID on spill if none is set, so that equal scalars can be
> > tracked if a register is spilled to the stack and filled into another
> > register.
> >
> > One test is adjusted to reflect the change in register IDs.
> >
> > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> > ---
> >  kernel/bpf/verifier.c                                     | 8 +++++++-
> >  .../selftests/bpf/progs/verifier_direct_packet_access.c   | 2 +-
> >  2 files changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index b757fdbbbdd2..caa768f1e369 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >
> >         mark_stack_slot_scratched(env, spi);
> >         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> > +               bool reg_value_fits;
> > +
> > +               reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> > +               /* Make sure that reg had an ID to build a relation on spill. */
> > +               if (reg_value_fits)
> > +                       assign_scalar_id_before_mov(env, reg);
> 
> Thanks.
> I just debugged this issue as part of my bpf_cmp series.
> 
> llvm generated:
> 
> 1093: (7b) *(u64 *)(r10 -96) = r0     ;
> R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
> ; if (bpf_cmp(filepart_length, >, MAX_PATH))
> 1094: (25) if r0 > 0x100 goto pc+903          ;
> R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0;
> 0x1ff))
> 
> the verifier refined the range of 'r0' here,
> but the code below just reads the spilled value from the stack:
> 
> 1116: (79) r1 = *(u64 *)(r10 -64)     ; R1_w=map_value
> ; payload += filepart_length;
> 1117: (79) r2 = *(u64 *)(r10 -96)     ;
> R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
> 1118: (0f) r1 += r2 ;
> R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)
> 
> And it later errors out with:
> "R1 min value is negative, either use unsigned index or do a if (index
> >=0) check."
> 
> This verifier improvement is certainly necessary.

Glad that you found it useful!

> Since you've analyzed this issue, did you figure out a workaround
> for C code on existing and older kernels?

Uhm... in my case (Cilium, it was a while ago) I made a big change
(reorganizing function calls and revalidate_data() calls) that changed
the codegen significantly, and the problematic pattern disappeared.

I can suggest trying to play with volatile, e.g., declare
filepart_length as volatile; if that doesn't help, create another
volatile variable and copy filepart_length into it before doing the
bpf_cmp (a reg->reg copy will assign an ID, but I'm not sure the
values will still be kept in registers once declared volatile).
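
Something along these lines (an untested sketch; filepart_length and
MAX_PATH are from your log, and whether this survives codegen is
exactly what I'm unsure about):

	/* Untested sketch: the volatile copy should force a mov that
	 * assigns an ID to filepart_length's register.
	 */
	volatile __u64 flen_copy = filepart_length;

	if (bpf_cmp(filepart_length, >, MAX_PATH))
		goto error;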

Unfortunately, I couldn't reproduce your issue locally, so I couldn't
try these suggestions myself. Is this the right code, or should I take
it from elsewhere?

https://patchwork.kernel.org/project/netdevbpf/list/?series=812010

What LLVM version do you see the issue on? I can try to look for a
specific C workaround if I reproduce it locally.

BTW, the asm workaround is obvious (copy reg to another reg to assign an
ID), so maybe an inline asm like this would do the trick?

asm volatile("r8 = %0" :: "r"(filepart_length) : "r8");
Alexei Starovoitov Dec. 25, 2023, 9:26 p.m. UTC | #3
On Mon, Dec 25, 2023 at 1:11 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> On Sun, 24 Dec 2023 at 19:15:42 -0800, Alexei Starovoitov wrote:
> > On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
> > >
> > > From: Maxim Mikityanskiy <maxim@isovalent.com>
> > >
> > > Currently, when a bounded scalar register is spilled to the stack, its
> > > ID is preserved, but only if it was already assigned, i.e. if this
> > > register was MOVed before.
> > >
> > > Assign an ID on spill if none is set, so that equal scalars can be
> > > tracked if a register is spilled to the stack and filled into another
> > > register.
> > >
> > > One test is adjusted to reflect the change in register IDs.
> > >
> > > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> > > ---
> > >  kernel/bpf/verifier.c                                     | 8 +++++++-
> > >  .../selftests/bpf/progs/verifier_direct_packet_access.c   | 2 +-
> > >  2 files changed, 8 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index b757fdbbbdd2..caa768f1e369 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > >
> > >         mark_stack_slot_scratched(env, spi);
> > >         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> > > +               bool reg_value_fits;
> > > +
> > > +               reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> > > +               /* Make sure that reg had an ID to build a relation on spill. */
> > > +               if (reg_value_fits)
> > > +                       assign_scalar_id_before_mov(env, reg);
> >
> > Thanks.
> > I just debugged this issue as part of my bpf_cmp series.
> >
> > llvm generated:
> >
> > 1093: (7b) *(u64 *)(r10 -96) = r0     ;
> > R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> > fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
> > ; if (bpf_cmp(filepart_length, >, MAX_PATH))
> > 1094: (25) if r0 > 0x100 goto pc+903          ;
> > R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0;
> > 0x1ff))
> >
> > the verifier refined the range of 'r0' here,
> > but the code below just reads the spilled value from the stack:
> >
> > 1116: (79) r1 = *(u64 *)(r10 -64)     ; R1_w=map_value
> > ; payload += filepart_length;
> > 1117: (79) r2 = *(u64 *)(r10 -96)     ;
> > R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> > fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
> > 1118: (0f) r1 += r2 ;
> > R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)
> >
> > And it later errors out with:
> > "R1 min value is negative, either use unsigned index or do a if (index
> > >=0) check."
> >
> > This verifier improvement is certainly necessary.
>
> Glad that you found it useful!
>
> > Since you've analyzed this issue, did you figure out a workaround
> > for C code on existing and older kernels?
>
> Uhm... in my case (Cilium, it was a while ago) I made a big change
> (reorganizing function calls and revalidate_data() calls) that changed
> the codegen significantly, and the problematic pattern disappeared.
>
> I can suggest trying to play with volatile, e.g., declare
> filepart_length as volatile; if that doesn't help, create another
> volatile variable and copy filepart_length into it before doing the
> bpf_cmp (a reg->reg copy will assign an ID, but I'm not sure the
> values will still be kept in registers once declared volatile).
>
> Unfortunately, I couldn't reproduce your issue locally, so I couldn't
> try these suggestions myself.

No worries.

> What LLVM version do you see the issue on? I can try to look for a
> specific C workaround if I reproduce it locally.
>
> BTW, the asm workaround is obvious (copy reg to another reg to assign an
> ID), so maybe an inline asm like this would do the trick?
>
> asm volatile("r8 = %0" :: "r"(filepart_length) : "r8");

Right. I tried:
asm volatile("%[reg]=%[reg]"::[reg]"r"((short)filepart_length));

and it forces ID assignment, but depending on the code it might still be
too late.
I've seen the pattern:
call ...
*(u64 *)(r10 -96) = r0
r0 = r0 // asm trick above
if r0 > 0x100 goto pc+903

So it may or may not help, but it was good to understand this issue.
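
For reference, the ordering that would make the trick effective on
existing kernels (an ID assigned before the spill is preserved by it)
would be:

call ...
r0 = r0                    // ID assigned here, before the spill
*(u64 *)(r10 -96) = r0     // the spill now carries the ID
if r0 > 0x100 goto pc+903  // the refinement propagates to fp-96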

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b757fdbbbdd2..caa768f1e369 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+		bool reg_value_fits;
+
+		reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
+		/* Make sure that reg had an ID to build a relation on spill. */
+		if (reg_value_fits)
+			assign_scalar_id_before_mov(env, reg);
 		save_register_state(env, state, spi, reg, size);
 		/* Break the relation on a narrowing spill. */
-		if (get_reg_width(reg) > BITS_PER_BYTE * size)
+		if (!reg_value_fits)
 			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
index be95570ab382..28b602ac9cbe 100644
--- a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -568,7 +568,7 @@ l0_%=:	r0 = 0;						\
 
 SEC("tc")
 __description("direct packet access: test23 (x += pkt_ptr, 4)")
-__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)")
+__failure __msg("invalid access to packet, off=0 size=8, R5(id=3,off=0,r=0)")
 __flag(BPF_F_ANY_ALIGNMENT)
 __naked void test23_x_pkt_ptr_4(void)
 {