diff mbox series

[bpf,v3,1/2] bpf: Account for early exit of bpf_tail_call() and LD_ABS

Message ID 20250106171709.2832649-2-afabre@cloudflare.com (mailing list archive)
State New
Delegated to: BPF
Series bpf: Account for early exit of bpf_tail_call() and LD_ABS

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1 this patch: 1
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 3 maintainers not CCed: kuba@kernel.org hawk@kernel.org netdev@vger.kernel.org
netdev/build_clang success Errors and warnings before: 141 this patch: 141
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 15 this patch: 15
netdev/checkpatch warning WARNING: Prefer 'fallthrough;' over fallthrough comment WARNING: line length of 102 exceeds 80 columns WARNING: line length of 107 exceeds 80 columns WARNING: line length of 111 exceeds 80 columns WARNING: line length of 113 exceeds 80 columns WARNING: line length of 81 exceeds 80 columns WARNING: line length of 91 exceeds 80 columns WARNING: line length of 92 exceeds 80 columns WARNING: line length of 94 exceeds 80 columns WARNING: line length of 96 exceeds 80 columns WARNING: line length of 97 exceeds 80 columns WARNING: line length of 98 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-VM_Test-7 success Logs for aarch64-gcc / veristat-kernel
bpf/vmtest-bpf-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-VM_Test-9 fail Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-VM_Test-4 fail Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-VM_Test-6 success Logs for aarch64-gcc / test
bpf/vmtest-bpf-VM_Test-8 success Logs for aarch64-gcc / veristat-meta
bpf/vmtest-bpf-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-VM_Test-10 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-VM_Test-14 success Logs for set-matrix
bpf/vmtest-bpf-VM_Test-13 success Logs for s390x-gcc / veristat-meta
bpf/vmtest-bpf-VM_Test-11 success Logs for s390x-gcc / test
bpf/vmtest-bpf-VM_Test-15 fail Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-VM_Test-12 success Logs for s390x-gcc / veristat-kernel
bpf/vmtest-bpf-VM_Test-16 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-VM_Test-17 success Logs for x86_64-gcc / test
bpf/vmtest-bpf-VM_Test-18 success Logs for x86_64-gcc / veristat-kernel
bpf/vmtest-bpf-VM_Test-21 fail Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-VM_Test-19 success Logs for x86_64-gcc / veristat-meta
bpf/vmtest-bpf-VM_Test-20 fail Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-VM_Test-22 success Logs for x86_64-llvm-17 / test
bpf/vmtest-bpf-VM_Test-33 success Logs for x86_64-llvm-18 / veristat-meta
bpf/vmtest-bpf-VM_Test-26 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-VM_Test-32 success Logs for x86_64-llvm-18 / veristat-kernel
bpf/vmtest-bpf-VM_Test-23 success Logs for x86_64-llvm-17 / veristat-kernel
bpf/vmtest-bpf-VM_Test-24 success Logs for x86_64-llvm-17 / veristat-meta
bpf/vmtest-bpf-VM_Test-25 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-PR fail PR summary
bpf/vmtest-bpf-VM_Test-27 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-28 fail Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-31 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-29 fail Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-30 fail Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18

Commit Message

Arthur Fabre Jan. 6, 2025, 5:15 p.m. UTC
bpf_tail_call(), LD_ABS, and LD_IND can cause the current function to
return abnormally:
- On success, bpf_tail_call() will jump to the tail-called program, and
  that program will return directly to the outer caller.
- On failure, LD_ABS or LD_IND return directly to the outer caller.

But the verifier doesn't account for these abnormal exits, so it assumes
the instructions following a bpf_tail_call() or LD_ABS are always
executed, and updates bounds info accordingly.

Before BPF-to-BPF calls, that was OK: the whole BPF program would
terminate anyway, so it didn't matter that the verifier state didn't
match reality.

But if these instructions are used in a BPF-to-BPF callee, the verifier
will propagate some of this incorrect bounds info to the caller. There
are at least two kinds of affected state:
- The callee's return value as seen in the caller.
- References to the caller's stack passed into the callee.

For example, loading:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
            __uint(max_entries, 1);
            __uint(key_size, sizeof(__u32));
            __uint(value_size, sizeof(__u32));
    } tail_call_map SEC(".maps");

    static __attribute__((noinline)) int callee(struct xdp_md *ctx)
    {
            bpf_tail_call(ctx, &tail_call_map, 0);

            int ret;
            asm volatile("%0 = 23" : "=r"(ret));
            return ret;
    }

    static SEC("xdp") int caller(struct xdp_md *ctx)
    {
            int res = callee(ctx);
            if (res == 23) {
                    return XDP_PASS;
            }
            return XDP_DROP;
    }

The verifier logs:

    func#0 @0
    func#1 @6
    0: R1=ctx() R10=fp0
    ; int res = callee(ctx); @ test.c:24
    0: (85) call pc+5
    caller:
     R10=fp0
    callee:
     frame1: R1=ctx() R10=fp0
    6: frame1: R1=ctx() R10=fp0
    ; bpf_tail_call(ctx, &tail_call_map, 0); @ test.c:15
    6: (18) r2 = 0xffff8a9c82a75800       ; frame1: R2_w=map_ptr(map=tail_call_map,ks=4,vs=4)
    8: (b4) w3 = 0                        ; frame1: R3_w=0
    9: (85) call bpf_tail_call#12
    10: frame1:
    ; asm volatile("%0 = 23" : "=r"(ret)); @ test.c:18
    10: (b7) r0 = 23                      ; frame1: R0_w=23
    ; return ret; @ test.c:19
    11: (95) exit
    returning from callee:
     frame1: R0_w=23 R10=fp0
    to caller at 1:
     R0_w=23 R10=fp0

    from 11 to 1: R0_w=23 R10=fp0
    ; int res = callee(ctx); @ test.c:24
    1: (bc) w1 = w0                       ; R0_w=23 R1_w=23
    2: (b4) w0 = 2                        ; R0=2
    ;  @ test.c:0
    3: (16) if w1 == 0x17 goto pc+1
    3: R1=23
    ; } @ test.c:29
    5: (95) exit
    processed 10 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1

And tracks R0_w=23 from the callee through to the caller.
This lets it completely prune the res != 23 branch, skipping over
instruction 4.

But this isn't sound: the bpf_tail_call() could make the callee return
before r0 = 23.

Aside from pruning incorrect branches, this can also be used to read and
write arbitrary memory by using r0 as an index.
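A hypothetical sketch of that direction (not from the patch; it reuses
callee() from the example above, and vals is an illustrative global
array): the verifier believes res == 23 always holds, so it only ever
sees an in-bounds vals[0] access, while at runtime a successful tail
call leaves whatever the tail-called program returned in r0:

```c
/* Hypothetical, non-runnable sketch. */
__u64 vals[16];

SEC("xdp") int oob_caller(struct xdp_md *ctx)
{
	int res = callee(ctx);
	if (res == 23)
		res = 0;
	/* Verifier: res == 23 always, so res is 0 here and vals[0] is in
	 * bounds. Runtime: if the tail call succeeded, res is the
	 * tail-called program's return value, and this access can be out
	 * of bounds.
	 */
	return vals[res];
}
```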

Make the verifier track instructions that can return abnormally as a
branch that either exits or falls through to the remaining
instructions.

This naturally checks for resource leaks too, so we can remove the
explicit checks for tail_call and LD_ABS.

Fixes: f4d7e40a5b71 ("bpf: introduce function calls (verification)")
Signed-off-by: Arthur Fabre <afabre@cloudflare.com>
Cc: stable@vger.kernel.org
---
 kernel/bpf/verifier.c | 84 +++++++++++++++++++++++++++++++------------
 1 file changed, 61 insertions(+), 23 deletions(-)

Comments

Eduard Zingerman Jan. 6, 2025, 8:31 p.m. UTC | #1
On Mon, 2025-01-06 at 18:15 +0100, Arthur Fabre wrote:
> bpf_tail_call(), LD_ABS, and LD_IND can cause the current function to
> return abnormally:
> - On success, bpf_tail_call() will jump to the tail_called program, and
>   that program will return directly to the outer caller.
> - On failure, LD_ABS or LD_IND return directly to the outer caller.
> 
> [...]
> 

This patch is correct as far as I can tell.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]

> @@ -18770,6 +18780,21 @@ static int do_check(struct bpf_verifier_env *env)
>  					return err;
>  
>  				mark_reg_scratched(env, BPF_REG_0);
> +
> +				if (insn->src_reg == 0 && insn->imm == BPF_FUNC_tail_call) {
> +					/* Explore both cases: tail_call fails and we fallthrough,
> +					 * or it succeeds and we exit the current function.
> +					 */
> +					if (!push_stack(env, env->insn_idx + 1, env->insn_idx, false))
> +						return -ENOMEM;
> +					/* bpf_tail_call() doesn't set r0 on failure / in the fallthrough case.
> +					 * But it does on success, so we have to mark it after queueing the
> +					 * fallthrough case, but before prepare_func_exit().
> +					 */
> +					__mark_reg_unknown(env, &state->frame[state->curframe]->regs[BPF_REG_0]);
> +					exit = BPF_EXIT_TAIL_CALL;
> +					goto process_bpf_exit_full;
> +				}

Nit: it's a bit unfortunate that this logic is inside do_check()
     instead of check_helper_call() and check_ld_abs().
     But it makes BPF_EXIT_* propagation simpler.

>  			} else if (opcode == BPF_JA) {
>  				if (BPF_SRC(insn->code) != BPF_K ||
>  				    insn->src_reg != BPF_REG_0 ||

[...]

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 77f56674aaa9..a0853e9866d8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -10488,13 +10488,20 @@  record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 	return 0;
 }
 
-static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit)
+enum bpf_exit {
+	BPF_EXIT_INSN,
+	BPF_EXIT_EXCEPTION,
+	BPF_EXIT_TAIL_CALL,
+	BPF_EXIT_LD_ABS,
+};
+
+static int check_reference_leak(struct bpf_verifier_env *env, enum bpf_exit exit)
 {
 	struct bpf_func_state *state = cur_func(env);
 	bool refs_lingering = false;
 	int i;
 
-	if (!exception_exit && state->frameno)
+	if (exit != BPF_EXIT_EXCEPTION && state->frameno)
 		return 0;
 
 	for (i = 0; i < state->acquired_refs; i++) {
@@ -10507,16 +10514,32 @@  static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exi
 	return refs_lingering ? -EINVAL : 0;
 }
 
-static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit, bool check_lock, const char *prefix)
+static int check_resource_leak(struct bpf_verifier_env *env, enum bpf_exit exit, bool check_lock)
 {
 	int err;
+	const char *prefix;
+
+	switch (exit) {
+	case BPF_EXIT_INSN:
+		prefix = "BPF_EXIT instruction";
+		break;
+	case BPF_EXIT_EXCEPTION:
+		prefix = "bpf_throw";
+		break;
+	case BPF_EXIT_TAIL_CALL:
+		prefix = "tail_call";
+		break;
+	case BPF_EXIT_LD_ABS:
+		prefix = "BPF_LD_[ABS|IND]";
+		break;
+	}
 
 	if (check_lock && cur_func(env)->active_locks) {
 		verbose(env, "%s cannot be used inside bpf_spin_lock-ed region\n", prefix);
 		return -EINVAL;
 	}
 
-	err = check_reference_leak(env, exception_exit);
+	err = check_reference_leak(env, exit);
 	if (err) {
 		verbose(env, "%s would lead to reference leak\n", prefix);
 		return err;
@@ -10802,11 +10825,6 @@  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	}
 
 	switch (func_id) {
-	case BPF_FUNC_tail_call:
-		err = check_resource_leak(env, false, true, "tail_call");
-		if (err)
-			return err;
-		break;
 	case BPF_FUNC_get_local_storage:
 		/* check that flags argument in get_local_storage(map, flags) is 0,
 		 * this is required because get_local_storage() can't return an error.
@@ -15963,14 +15981,6 @@  static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	if (err)
 		return err;
 
-	/* Disallow usage of BPF_LD_[ABS|IND] with reference tracking, as
-	 * gen_ld_abs() may terminate the program at runtime, leading to
-	 * reference leak.
-	 */
-	err = check_resource_leak(env, false, true, "BPF_LD_[ABS|IND]");
-	if (err)
-		return err;
-
 	if (regs[ctx_reg].type != PTR_TO_CTX) {
 		verbose(env,
 			"at the time of BPF_LD_ABS|IND R6 != pointer to skb\n");
@@ -18540,7 +18550,7 @@  static int do_check(struct bpf_verifier_env *env)
 	int prev_insn_idx = -1;
 
 	for (;;) {
-		bool exception_exit = false;
+		enum bpf_exit exit;
 		struct bpf_insn *insn;
 		u8 class;
 		int err;
@@ -18760,7 +18770,7 @@  static int do_check(struct bpf_verifier_env *env)
 				} else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 					err = check_kfunc_call(env, insn, &env->insn_idx);
 					if (!err && is_bpf_throw_kfunc(insn)) {
-						exception_exit = true;
+						exit = BPF_EXIT_EXCEPTION;
 						goto process_bpf_exit_full;
 					}
 				} else {
@@ -18770,6 +18780,21 @@  static int do_check(struct bpf_verifier_env *env)
 					return err;
 
 				mark_reg_scratched(env, BPF_REG_0);
+
+				if (insn->src_reg == 0 && insn->imm == BPF_FUNC_tail_call) {
+					/* Explore both cases: tail_call fails and we fallthrough,
+					 * or it succeeds and we exit the current function.
+					 */
+					if (!push_stack(env, env->insn_idx + 1, env->insn_idx, false))
+						return -ENOMEM;
+					/* bpf_tail_call() doesn't set r0 on failure / in the fallthrough case.
+					 * But it does on success, so we have to mark it after queueing the
+					 * fallthrough case, but before prepare_func_exit().
+					 */
+					__mark_reg_unknown(env, &state->frame[state->curframe]->regs[BPF_REG_0]);
+					exit = BPF_EXIT_TAIL_CALL;
+					goto process_bpf_exit_full;
+				}
 			} else if (opcode == BPF_JA) {
 				if (BPF_SRC(insn->code) != BPF_K ||
 				    insn->src_reg != BPF_REG_0 ||
@@ -18795,6 +18820,8 @@  static int do_check(struct bpf_verifier_env *env)
 					verbose(env, "BPF_EXIT uses reserved fields\n");
 					return -EINVAL;
 				}
+				exit = BPF_EXIT_INSN;
+
 process_bpf_exit_full:
 				/* We must do check_reference_leak here before
 				 * prepare_func_exit to handle the case when
@@ -18802,8 +18829,7 @@  static int do_check(struct bpf_verifier_env *env)
 				 * function, for which reference_state must
 				 * match caller reference state when it exits.
 				 */
-				err = check_resource_leak(env, exception_exit, !env->cur_state->curframe,
-							  "BPF_EXIT instruction");
+				err = check_resource_leak(env, exit, !env->cur_state->curframe);
 				if (err)
 					return err;
 
@@ -18817,7 +18843,7 @@  static int do_check(struct bpf_verifier_env *env)
 				 * exits. We also skip return code checks as
 				 * they are not needed for exceptional exits.
 				 */
-				if (exception_exit)
+				if (exit == BPF_EXIT_EXCEPTION)
 					goto process_bpf_exit;
 
 				if (state->curframe) {
@@ -18829,6 +18855,12 @@  static int do_check(struct bpf_verifier_env *env)
 					continue;
 				}
 
+				/* BPF_EXIT instruction is the only one that doesn't intrinsically
+				 * set R0.
+				 */
+				if (exit != BPF_EXIT_INSN)
+					goto process_bpf_exit;
+
 				err = check_return_code(env, BPF_REG_0, "R0");
 				if (err)
 					return err;
@@ -18857,7 +18889,13 @@  static int do_check(struct bpf_verifier_env *env)
 				err = check_ld_abs(env, insn);
 				if (err)
 					return err;
-
+				/* Explore both cases: LD_ABS|IND succeeds and we fallthrough,
+				 * or it fails and we exit the current function.
+				 */
+				if (!push_stack(env, env->insn_idx + 1, env->insn_idx, false))
+					return -ENOMEM;
+				exit = BPF_EXIT_LD_ABS;
+				goto process_bpf_exit_full;
 			} else if (mode == BPF_IMM) {
 				err = check_ld_imm(env, insn);
 				if (err)