[bpf-next,v2] bpf: Detect jumping to reserved code during check_cfg()

Message ID 20231010-jmp-into-reserved-fields-v2-1-3dd5a94d1e21@gmail.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Series [bpf-next,v2] bpf: Detect jumping to reserved code during check_cfg()

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-0 success Logs for ShellCheck
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-1 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-6 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for veristat
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1352 this patch: 1352
netdev/cc_maintainers warning 3 maintainers not CCed: shuah@kernel.org mykolal@fb.com linux-kselftest@vger.kernel.org
netdev/build_clang success Errors and warnings before: 1364 this patch: 1364
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1375 this patch: 1375
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Hao Sun Oct. 10, 2023, 12:03 p.m. UTC
Currently, we don't check whether the branch target of a jump is the reserved
code of a ld_imm64. Instead, such an issue is only captured later in
check_ld_imm(). The verifier gives the following log in such a case:

func#0 @0
0: R1=ctx(off=0,imm=0) R10=fp0
0: (18) r4 = 0xffff888103436000       ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
2: (18) r1 = 0x1d                     ; R1_w=29
4: (55) if r4 != 0x0 goto pc+4        ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
5: (1c) w1 -= w1                      ; R1_w=0
6: (18) r5 = 0x32                     ; R5_w=50
8: (56) if w5 != 0xfffffff4 goto pc-2
mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx -1
mark_precise: frame0: regs=r5 stack= before 6: (18) r5 = 0x32
7: R5_w=50
7: BUG_ld_00
invalid BPF_LD_IMM insn

Here the verifier rejects the program because it thinks the insn at 7 is an
invalid BPF_LD_IMM, but such an error log is not accurate: the issue is a
jump into reserved code, not an invalid insn in the program. Therefore, make
the verifier check the jump target during check_cfg(). For the same program,
the verifier now reports the following log:

func#0 @0
jump to reserved code from insn 8 to 7

Also adjust the existing tests in ld_imm64.c so that they cover both forward
and backward jumps into reserved code.
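
For context, a BPF_LD_IMM64 occupies two consecutive insns; the second one
only carries the upper 32 bits of the immediate and has opcode 0, which is
the "reserved code" the new check_cfg() check looks for. A minimal sketch of
a program that would trip the check, written with the selftest macros
(illustrative only, not part of this patch; register choice is arbitrary):

	BPF_LD_IMM64(BPF_REG_0, 0),             /* insn 0; insn 1 is the reserved half */
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2), /* insn 2; branch target is insn 1 */
	BPF_EXIT_INSN(),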

Signed-off-by: Hao Sun <sunhao.th@gmail.com>
---
Changes in v2:
- Adjust existing test cases
- Link to v1: https://lore.kernel.org/bpf/20231009-jmp-into-reserved-fields-v1-1-d8006e2ac1f6@gmail.com/
---
 kernel/bpf/verifier.c                           | 7 +++++++
 tools/testing/selftests/bpf/verifier/ld_imm64.c | 8 +++-----
 2 files changed, 10 insertions(+), 5 deletions(-)


---
base-commit: 3157b7ce14bbf468b0ca8613322a05c37b5ae25d
change-id: 20231009-jmp-into-reserved-fields-fc1a98a8e7dc

Best regards,

Comments

Eduard Zingerman Oct. 10, 2023, 2:46 p.m. UTC | #1
On Tue, 2023-10-10 at 14:03 +0200, Hao Sun wrote:
> Currently, we don't check if the branch-taken of a jump is reserved code of
> ld_imm64. Instead, such a issue is captured in check_ld_imm(). The verifier
> gives the following log in such case:
> 
> func#0 @0
> 0: R1=ctx(off=0,imm=0) R10=fp0
> 0: (18) r4 = 0xffff888103436000       ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
> 2: (18) r1 = 0x1d                     ; R1_w=29
> 4: (55) if r4 != 0x0 goto pc+4        ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
> 5: (1c) w1 -= w1                      ; R1_w=0
> 6: (18) r5 = 0x32                     ; R5_w=50
> 8: (56) if w5 != 0xfffffff4 goto pc-2
> mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx -1
> mark_precise: frame0: regs=r5 stack= before 6: (18) r5 = 0x32
> 7: R5_w=50
> 7: BUG_ld_00
> invalid BPF_LD_IMM insn
> 
> Here the verifier rejects the program because it thinks insn at 7 is an
> invalid BPF_LD_IMM, but such a error log is not accurate since the issue
> is jumping to reserved code not because the program contains invalid insn.
> Therefore, make the verifier check the jump target during check_cfg(). For
> the same program, the verifier reports the following log:
> 
> func#0 @0
> jump to reserved code from insn 8 to 7
> 
> Also adjust existing tests in ld_imm64.c, testing forward/back jump to
> reserved code.
> 
> Signed-off-by: Hao Sun <sunhao.th@gmail.com>

Please see a nitpick below.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

> ---
> Changes in v2:
> - Adjust existing test cases
> - Link to v1: https://lore.kernel.org/bpf/20231009-jmp-into-reserved-fields-v1-1-d8006e2ac1f6@gmail.com/
> ---
>  kernel/bpf/verifier.c                           | 7 +++++++
>  tools/testing/selftests/bpf/verifier/ld_imm64.c | 8 +++-----
>  2 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index eed7350e15f4..725ac0b464cf 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14980,6 +14980,7 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
>  {
>  	int *insn_stack = env->cfg.insn_stack;
>  	int *insn_state = env->cfg.insn_state;
> +	struct bpf_insn *insns = env->prog->insnsi;
>  
>  	if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
>  		return DONE_EXPLORING;
> @@ -14993,6 +14994,12 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
>  		return -EINVAL;
>  	}
>  
> +	if (e == BRANCH && insns[w].code == 0) {
> +		verbose_linfo(env, t, "%d", t);
> +		verbose(env, "jump to reserved code from insn %d to %d\n", t, w);
> +		return -EINVAL;
> +	}
> +
>  	if (e == BRANCH) {
>  		/* mark branch target for state pruning */
>  		mark_prune_point(env, w);
> diff --git a/tools/testing/selftests/bpf/verifier/ld_imm64.c b/tools/testing/selftests/bpf/verifier/ld_imm64.c
> index f9297900cea6..c34aa78f1877 100644
> --- a/tools/testing/selftests/bpf/verifier/ld_imm64.c
> +++ b/tools/testing/selftests/bpf/verifier/ld_imm64.c
> @@ -9,22 +9,20 @@
>  	BPF_MOV64_IMM(BPF_REG_0, 2),
>  	BPF_EXIT_INSN(),
>  	},
> -	.errstr = "invalid BPF_LD_IMM insn",
> -	.errstr_unpriv = "R1 pointer comparison",
> +	.errstr = "jump to reserved code",
>  	.result = REJECT,
>  },
>  {
>  	"test2 ld_imm64",
>  	.insns = {
> -	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
>  	BPF_LD_IMM64(BPF_REG_0, 0),
> +	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),

This change is not really necessary; the test reports the same error
either way.

>  	BPF_LD_IMM64(BPF_REG_0, 0),
>  	BPF_LD_IMM64(BPF_REG_0, 1),
>  	BPF_LD_IMM64(BPF_REG_0, 1),
>  	BPF_EXIT_INSN(),
>  	},
> -	.errstr = "invalid BPF_LD_IMM insn",
> -	.errstr_unpriv = "R1 pointer comparison",
> +	.errstr = "jump to reserved code",
>  	.result = REJECT,
>  },
>  {
> 
> ---
> base-commit: 3157b7ce14bbf468b0ca8613322a05c37b5ae25d
> change-id: 20231009-jmp-into-reserved-fields-fc1a98a8e7dc
> 
> Best regards,
Daniel Borkmann Oct. 10, 2023, 3:27 p.m. UTC | #2
On 10/10/23 4:46 PM, Eduard Zingerman wrote:
> On Tue, 2023-10-10 at 14:03 +0200, Hao Sun wrote:
>> Currently, we don't check if the branch-taken of a jump is reserved code of
>> ld_imm64. Instead, such a issue is captured in check_ld_imm(). The verifier
>> gives the following log in such case:
>>
>> func#0 @0
>> 0: R1=ctx(off=0,imm=0) R10=fp0
>> 0: (18) r4 = 0xffff888103436000       ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
>> 2: (18) r1 = 0x1d                     ; R1_w=29
>> 4: (55) if r4 != 0x0 goto pc+4        ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
>> 5: (1c) w1 -= w1                      ; R1_w=0
>> 6: (18) r5 = 0x32                     ; R5_w=50
>> 8: (56) if w5 != 0xfffffff4 goto pc-2
>> mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx -1
>> mark_precise: frame0: regs=r5 stack= before 6: (18) r5 = 0x32
>> 7: R5_w=50
>> 7: BUG_ld_00
>> invalid BPF_LD_IMM insn
>>
>> Here the verifier rejects the program because it thinks insn at 7 is an
>> invalid BPF_LD_IMM, but such a error log is not accurate since the issue
>> is jumping to reserved code not because the program contains invalid insn.
>> Therefore, make the verifier check the jump target during check_cfg(). For
>> the same program, the verifier reports the following log:
>>
>> func#0 @0
>> jump to reserved code from insn 8 to 7
>>
>> Also adjust existing tests in ld_imm64.c, testing forward/back jump to
>> reserved code.
>>
>> Signed-off-by: Hao Sun <sunhao.th@gmail.com>
> 
> Please see a nitpick below.
> 
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> 
>> ---
>> Changes in v2:
>> - Adjust existing test cases
>> - Link to v1: https://lore.kernel.org/bpf/20231009-jmp-into-reserved-fields-v1-1-d8006e2ac1f6@gmail.com/
>> ---
>>   kernel/bpf/verifier.c                           | 7 +++++++
>>   tools/testing/selftests/bpf/verifier/ld_imm64.c | 8 +++-----
>>   2 files changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index eed7350e15f4..725ac0b464cf 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -14980,6 +14980,7 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
>>   {
>>   	int *insn_stack = env->cfg.insn_stack;
>>   	int *insn_state = env->cfg.insn_state;
>> +	struct bpf_insn *insns = env->prog->insnsi;
>>   
>>   	if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
>>   		return DONE_EXPLORING;
>> @@ -14993,6 +14994,12 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
>>   		return -EINVAL;
>>   	}
>>   
>> +	if (e == BRANCH && insns[w].code == 0) {
>> +		verbose_linfo(env, t, "%d", t);
>> +		verbose(env, "jump to reserved code from insn %d to %d\n", t, w);
>> +		return -EINVAL;
>> +	}
>> +
>>   	if (e == BRANCH) {
>>   		/* mark branch target for state pruning */
>>   		mark_prune_point(env, w);
>> diff --git a/tools/testing/selftests/bpf/verifier/ld_imm64.c b/tools/testing/selftests/bpf/verifier/ld_imm64.c
>> index f9297900cea6..c34aa78f1877 100644
>> --- a/tools/testing/selftests/bpf/verifier/ld_imm64.c
>> +++ b/tools/testing/selftests/bpf/verifier/ld_imm64.c
>> @@ -9,22 +9,20 @@
>>   	BPF_MOV64_IMM(BPF_REG_0, 2),
>>   	BPF_EXIT_INSN(),
>>   	},
>> -	.errstr = "invalid BPF_LD_IMM insn",
>> -	.errstr_unpriv = "R1 pointer comparison",
>> +	.errstr = "jump to reserved code",
>>   	.result = REJECT,
>>   },
>>   {
>>   	"test2 ld_imm64",
>>   	.insns = {
>> -	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
>>   	BPF_LD_IMM64(BPF_REG_0, 0),
>> +	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
> 
> This change is not really necessary, the test reports same error
> either way.

If we don't have a backward jump covered, we could probably also make this
a new test case rather than modifying an existing one. Aside from that, it
would probably also make sense to make this a separate commit, so it eases
backporting a bit.
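
For reference, a minimal sketch of what such a dedicated backward-jump case
could look like, mirroring the style of the existing ld_imm64.c entries
(test name hypothetical, same .errstr as in this patch):

{
	"test ld_imm64 backward jump into reserved code",
	.insns = {
	BPF_LD_IMM64(BPF_REG_0, 0),
	/* off -2 from insn 2 targets insn 1, the reserved half of the ld_imm64 */
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "jump to reserved code",
	.result = REJECT,
},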

>>   	BPF_LD_IMM64(BPF_REG_0, 0),
>>   	BPF_LD_IMM64(BPF_REG_0, 1),
>>   	BPF_LD_IMM64(BPF_REG_0, 1),
>>   	BPF_EXIT_INSN(),
>>   	},
>> -	.errstr = "invalid BPF_LD_IMM insn",
>> -	.errstr_unpriv = "R1 pointer comparison",
>> +	.errstr = "jump to reserved code",
>>   	.result = REJECT,
>>   },
>>   {
>>
>> ---
>> base-commit: 3157b7ce14bbf468b0ca8613322a05c37b5ae25d
>> change-id: 20231009-jmp-into-reserved-fields-fc1a98a8e7dc
>>
>> Best regards,
>

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index eed7350e15f4..725ac0b464cf 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14980,6 +14980,7 @@  static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
 {
 	int *insn_stack = env->cfg.insn_stack;
 	int *insn_state = env->cfg.insn_state;
+	struct bpf_insn *insns = env->prog->insnsi;
 
 	if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
 		return DONE_EXPLORING;
@@ -14993,6 +14994,12 @@  static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
 		return -EINVAL;
 	}
 
+	if (e == BRANCH && insns[w].code == 0) {
+		verbose_linfo(env, t, "%d", t);
+		verbose(env, "jump to reserved code from insn %d to %d\n", t, w);
+		return -EINVAL;
+	}
+
 	if (e == BRANCH) {
 		/* mark branch target for state pruning */
 		mark_prune_point(env, w);
diff --git a/tools/testing/selftests/bpf/verifier/ld_imm64.c b/tools/testing/selftests/bpf/verifier/ld_imm64.c
index f9297900cea6..c34aa78f1877 100644
--- a/tools/testing/selftests/bpf/verifier/ld_imm64.c
+++ b/tools/testing/selftests/bpf/verifier/ld_imm64.c
@@ -9,22 +9,20 @@ 
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "invalid BPF_LD_IMM insn",
-	.errstr_unpriv = "R1 pointer comparison",
+	.errstr = "jump to reserved code",
 	.result = REJECT,
 },
 {
 	"test2 ld_imm64",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_LD_IMM64(BPF_REG_0, 0),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
 	BPF_LD_IMM64(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 1),
 	BPF_LD_IMM64(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "invalid BPF_LD_IMM insn",
-	.errstr_unpriv = "R1 pointer comparison",
+	.errstr = "jump to reserved code",
 	.result = REJECT,
 },
 {