diff mbox series

[bpf-next] bpf: Refactor release_regno searching logic

Message ID 20230121002417.1684602-1-davemarchevsky@fb.com (mailing list archive)
State Superseded
Delegated to: BPF
Series [bpf-next] bpf: Refactor release_regno searching logic

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 pending Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 fail Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 fail Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 fail Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 fail Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 fail Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 fail Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 fail Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 fail Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 fail Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 fail Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-37 fail Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 fail Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-36 fail Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-PR fail merge-conflict
netdev/tree_selection success Clearly marked for bpf-next
netdev/apply fail Patch does not apply to bpf-next

Commit Message

Dave Marchevsky Jan. 21, 2023, 12:24 a.m. UTC
Kfuncs marked KF_RELEASE indicate that they release some
previously-acquired arg. The verifier assumes that such a function will
have only one arg reg w/ ref_obj_id set, and that that arg is the one to
be released. A state where multiple kfunc arg regs have ref_obj_id set
is considered invalid.

For helpers, OBJ_RELEASE is used to tag a particular arg in the function
proto, not the function itself. The arg with OBJ_RELEASE type tag is the
arg that the helper will release. There can only be one such tagged arg.
When verifying arg regs, a state where multiple helper arg regs have
ref_obj_id set is likewise considered invalid.

Currently the ref_obj_id and OBJ_RELEASE searching is done in the code
that examines each individual arg (check_func_arg for helpers and
check_kfunc_args inner loop for kfuncs). This patch pulls out this
searching to occur before individual arg type handling, resulting in a
cleaner separation of logic and shared logic between kfuncs and helpers.

Two new helper functions are added:
  * args_find_ref_obj_id_regno
    * For helpers and kfuncs. Searches through arg regs to find
      ref_obj_id reg and returns its regno.

  * helper_proto_find_release_arg_regno
    * For helpers only. Searches through fn proto args to find the
      OBJ_RELEASE arg and returns the corresponding regno.

The refactoring strives to keep failure logic and error messages
unchanged. However, because the release arg search now happens before
any arg-specific type checking, a verifier state that is invalid for two
reasons (an invalid release arg _and_ a failing type- or helper-specific
check) may report the release arg-related error first, where previously
verification would fail for the other reason.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
This patch appeared in an rbtree series, but is no longer necessary for
that work. Regardless, it's independently useful as it pulls out some
logic common to kfunc and helper verification. The rbtree version added
some additional functionality that this patch doesn't, so it's not
marked as a v2 of that patch.

That said, a changelog is included below, as there was some feedback on
that patch which is addressed here.

v0 -> v1: https://lore.kernel.org/bpf/20221217082506.1570898-2-davemarchevsky@fb.com/
 * Remove allow_multi from args_find_ref_obj_id_regno, no need to
   support multiple ref_obj_id arg regs
 * No need to use temp variable 'i' to count nargs (David)
 * Proper formatting of function-level comments on newly-added helpers (David)

 kernel/bpf/verifier.c | 218 +++++++++++++++++++++++++++++-------------
 1 file changed, 150 insertions(+), 68 deletions(-)

Comments

kernel test robot Jan. 21, 2023, 8:39 a.m. UTC | #1
Hi Dave,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Dave-Marchevsky/bpf-Refactor-release_regno-searching-logic/20230121-082705
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230121002417.1684602-1-davemarchevsky%40fb.com
patch subject: [PATCH bpf-next] bpf: Refactor release_regno searching logic
config: powerpc-randconfig-r012-20230119 (https://download.01.org/0day-ci/archive/20230121/202301211658.eN5W6lCN-lkp@intel.com/config)
compiler: clang version 16.0.0 (https://github.com/llvm/llvm-project 4196ca3278f78c6e19246e54ab0ecb364e37d66a)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc cross compiling tool for clang build
        # apt-get install binutils-powerpc-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/3ad2c350687658a62ca29b15042c86b68ed6a73b
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dave-Marchevsky/bpf-Refactor-release_regno-searching-logic/20230121-082705
        git checkout 3ad2c350687658a62ca29b15042c86b68ed6a73b
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=powerpc olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=powerpc SHELL=/bin/bash kernel/bpf/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> kernel/bpf/verifier.c:7846:20: warning: variable 'i' is uninitialized when used here [-Wuninitialized]
                   if (fn->arg_type[i] == ARG_DONTCARE)
                                    ^
   kernel/bpf/verifier.c:7771:7: note: initialize the variable 'i' to silence this warning
           int i, err, func_id, nargs, release_regno, ref_regno;
                ^
                 = 0
   1 warning generated.


vim +/i +7846 kernel/bpf/verifier.c

  7766	
  7767	static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
  7768				     int *insn_idx_p)
  7769	{
  7770		enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
  7771		int i, err, func_id, nargs, release_regno, ref_regno;
  7772		const struct bpf_func_proto *fn = NULL;
  7773		enum bpf_return_type ret_type;
  7774		enum bpf_type_flag ret_flag;
  7775		struct bpf_reg_state *regs;
  7776		struct bpf_call_arg_meta meta;
  7777		int insn_idx = *insn_idx_p;
  7778		bool changes_data;
  7779	
  7780		/* find function prototype */
  7781		func_id = insn->imm;
  7782		if (func_id < 0 || func_id >= __BPF_FUNC_MAX_ID) {
  7783			verbose(env, "invalid func %s#%d\n", func_id_name(func_id),
  7784				func_id);
  7785			return -EINVAL;
  7786		}
  7787	
  7788		if (env->ops->get_func_proto)
  7789			fn = env->ops->get_func_proto(func_id, env->prog);
  7790		if (!fn) {
  7791			verbose(env, "unknown func %s#%d\n", func_id_name(func_id),
  7792				func_id);
  7793			return -EINVAL;
  7794		}
  7795	
  7796		/* eBPF programs must be GPL compatible to use GPL-ed functions */
  7797		if (!env->prog->gpl_compatible && fn->gpl_only) {
  7798			verbose(env, "cannot call GPL-restricted function from non-GPL compatible program\n");
  7799			return -EINVAL;
  7800		}
  7801	
  7802		if (fn->allowed && !fn->allowed(env->prog)) {
  7803			verbose(env, "helper call is not allowed in probe\n");
  7804			return -EINVAL;
  7805		}
  7806	
  7807		if (!env->prog->aux->sleepable && fn->might_sleep) {
  7808			verbose(env, "helper call might sleep in a non-sleepable prog\n");
  7809			return -EINVAL;
  7810		}
  7811	
  7812		/* With LD_ABS/IND some JITs save/restore skb from r1. */
  7813		changes_data = bpf_helper_changes_pkt_data(fn->func);
  7814		if (changes_data && fn->arg1_type != ARG_PTR_TO_CTX) {
  7815			verbose(env, "kernel subsystem misconfigured func %s#%d: r1 != ctx\n",
  7816				func_id_name(func_id), func_id);
  7817			return -EINVAL;
  7818		}
  7819	
  7820		memset(&meta, 0, sizeof(meta));
  7821		meta.pkt_access = fn->pkt_access;
  7822	
  7823		err = check_func_proto(fn, func_id);
  7824		if (err) {
  7825			verbose(env, "kernel subsystem misconfigured func %s#%d\n",
  7826				func_id_name(func_id), func_id);
  7827			return err;
  7828		}
  7829	
  7830		if (env->cur_state->active_rcu_lock) {
  7831			if (fn->might_sleep) {
  7832				verbose(env, "sleepable helper %s#%d in rcu_read_lock region\n",
  7833					func_id_name(func_id), func_id);
  7834				return -EINVAL;
  7835			}
  7836	
  7837			if (env->prog->aux->sleepable && is_storage_get_function(func_id))
  7838				env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
  7839		}
  7840	
  7841		meta.func_id = func_id;
  7842		regs = cur_regs(env);
  7843	
  7844		/* find actual arg count */
  7845		for (nargs = 0; nargs < MAX_BPF_FUNC_REG_ARGS; nargs++)
> 7846			if (fn->arg_type[i] == ARG_DONTCARE)
  7847				break;
  7848	
  7849		release_regno = helper_proto_find_release_arg_regno(env, fn, nargs);
  7850		if (release_regno < 0)
  7851			return release_regno;
  7852	
  7853		ref_regno = args_find_ref_obj_id_regno(env, regs, nargs);
  7854		if (ref_regno < 0)
  7855			return ref_regno;
  7856		else if (ref_regno > 0)
  7857			meta.ref_obj_id = regs[ref_regno].ref_obj_id;
  7858	
  7859		if (release_regno > 0) {
  7860			if (!regs[release_regno].ref_obj_id &&
  7861			    !register_is_null(&regs[release_regno]) &&
  7862			    !arg_type_is_dynptr(fn->arg_type[release_regno - BPF_REG_1])) {
  7863				verbose(env, "R%d must be referenced when passed to release function\n",
  7864					release_regno);
  7865				return -EINVAL;
  7866			}
  7867	
  7868			meta.release_regno = release_regno;
  7869		}
  7870	
  7871		/* check args */
  7872		for (i = 0; i < nargs; i++) {
  7873			err = check_func_arg(env, i, &meta, fn);
  7874			if (err)
  7875				return err;
  7876		}
  7877	
  7878		err = record_func_map(env, &meta, func_id, insn_idx);
  7879		if (err)
  7880			return err;
  7881	
  7882		err = record_func_key(env, &meta, func_id, insn_idx);
  7883		if (err)
  7884			return err;
  7885	
  7886		/* Mark slots with STACK_MISC in case of raw mode, stack offset
  7887		 * is inferred from register state.
  7888		 */
  7889		for (i = 0; i < meta.access_size; i++) {
  7890			err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
  7891					       BPF_WRITE, -1, false);
  7892			if (err)
  7893				return err;
  7894		}
  7895	
  7896		/* This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
  7897		 * be reinitialized by any dynptr helper. Hence, mark_stack_slots_dynptr
  7898		 * is safe to do directly.
  7899		 */
  7900		if (meta.uninit_dynptr_regno) {
  7901			if (regs[meta.uninit_dynptr_regno].type == CONST_PTR_TO_DYNPTR) {
  7902				verbose(env, "verifier internal error: CONST_PTR_TO_DYNPTR cannot be initialized\n");
  7903				return -EFAULT;
  7904			}
  7905			/* we write BPF_DW bits (8 bytes) at a time */
  7906			for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
  7907				err = check_mem_access(env, insn_idx, meta.uninit_dynptr_regno,
  7908						       i, BPF_DW, BPF_WRITE, -1, false);
  7909				if (err)
  7910					return err;
  7911			}
  7912	
  7913			err = mark_stack_slots_dynptr(env, &regs[meta.uninit_dynptr_regno],
  7914						      fn->arg_type[meta.uninit_dynptr_regno - BPF_REG_1],
  7915						      insn_idx);
  7916			if (err)
  7917				return err;
  7918		}
  7919	
  7920		if (meta.release_regno) {
  7921			err = -EINVAL;
  7922			/* This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
  7923			 * be released by any dynptr helper. Hence, unmark_stack_slots_dynptr
  7924			 * is safe to do directly.
  7925			 */
  7926			if (arg_type_is_dynptr(fn->arg_type[meta.release_regno - BPF_REG_1])) {
  7927				if (regs[meta.release_regno].type == CONST_PTR_TO_DYNPTR) {
  7928					verbose(env, "verifier internal error: CONST_PTR_TO_DYNPTR cannot be released\n");
  7929					return -EFAULT;
  7930				}
  7931				err = unmark_stack_slots_dynptr(env, &regs[meta.release_regno]);
  7932			} else if (meta.ref_obj_id) {
  7933				err = release_reference(env, meta.ref_obj_id);
  7934			} else if (register_is_null(&regs[meta.release_regno])) {
  7935				/* meta.ref_obj_id can only be 0 if register that is meant to be
  7936				 * released is NULL, which must be > R0.
  7937				 */
  7938				err = 0;
  7939			}
  7940			if (err) {
  7941				verbose(env, "func %s#%d reference has not been acquired before\n",
  7942					func_id_name(func_id), func_id);
  7943				return err;
  7944			}
  7945		}
  7946	
  7947		switch (func_id) {
  7948		case BPF_FUNC_tail_call:
  7949			err = check_reference_leak(env);
  7950			if (err) {
  7951				verbose(env, "tail_call would lead to reference leak\n");
  7952				return err;
  7953			}
  7954			break;
  7955		case BPF_FUNC_get_local_storage:
  7956			/* check that flags argument in get_local_storage(map, flags) is 0,
  7957			 * this is required because get_local_storage() can't return an error.
  7958			 */
  7959			if (!register_is_null(&regs[BPF_REG_2])) {
  7960				verbose(env, "get_local_storage() doesn't support non-zero flags\n");
  7961				return -EINVAL;
  7962			}
  7963			break;
  7964		case BPF_FUNC_for_each_map_elem:
  7965			err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
  7966						set_map_elem_callback_state);
  7967			break;
  7968		case BPF_FUNC_timer_set_callback:
  7969			err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
  7970						set_timer_callback_state);
  7971			break;
  7972		case BPF_FUNC_find_vma:
  7973			err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
  7974						set_find_vma_callback_state);
  7975			break;
  7976		case BPF_FUNC_snprintf:
  7977			err = check_bpf_snprintf_call(env, regs);
  7978			break;
  7979		case BPF_FUNC_loop:
  7980			update_loop_inline_state(env, meta.subprogno);
  7981			err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
  7982						set_loop_callback_state);
  7983			break;
  7984		case BPF_FUNC_dynptr_from_mem:
  7985			if (regs[BPF_REG_1].type != PTR_TO_MAP_VALUE) {
  7986				verbose(env, "Unsupported reg type %s for bpf_dynptr_from_mem data\n",
  7987					reg_type_str(env, regs[BPF_REG_1].type));
  7988				return -EACCES;
  7989			}
  7990			break;
  7991		case BPF_FUNC_set_retval:
  7992			if (prog_type == BPF_PROG_TYPE_LSM &&
  7993			    env->prog->expected_attach_type == BPF_LSM_CGROUP) {
  7994				if (!env->prog->aux->attach_func_proto->type) {
  7995					/* Make sure programs that attach to void
  7996					 * hooks don't try to modify return value.
  7997					 */
  7998					verbose(env, "BPF_LSM_CGROUP that attach to void LSM hooks can't modify return value!\n");
  7999					return -EINVAL;
  8000				}
  8001			}
  8002			break;
  8003		case BPF_FUNC_dynptr_data:
  8004			for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
  8005				if (arg_type_is_dynptr(fn->arg_type[i])) {
  8006					struct bpf_reg_state *reg = &regs[BPF_REG_1 + i];
  8007	
  8008					if (meta.ref_obj_id) {
  8009						verbose(env, "verifier internal error: meta.ref_obj_id already set\n");
  8010						return -EFAULT;
  8011					}
  8012	
  8013					meta.ref_obj_id = dynptr_ref_obj_id(env, reg);
  8014					break;
  8015				}
  8016			}
  8017			if (i == MAX_BPF_FUNC_REG_ARGS) {
  8018				verbose(env, "verifier internal error: no dynptr in bpf_dynptr_data()\n");
  8019				return -EFAULT;
  8020			}
  8021			break;
  8022		case BPF_FUNC_user_ringbuf_drain:
  8023			err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
  8024						set_user_ringbuf_callback_state);
  8025			break;
  8026		}
  8027	
  8028		if (err)
  8029			return err;
  8030	
  8031		/* reset caller saved regs */
  8032		for (i = 0; i < CALLER_SAVED_REGS; i++) {
  8033			mark_reg_not_init(env, regs, caller_saved[i]);
  8034			check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK);
  8035		}
  8036	
  8037		/* helper call returns 64-bit value. */
  8038		regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;
  8039	
  8040		/* update return register (already marked as written above) */
  8041		ret_type = fn->ret_type;
  8042		ret_flag = type_flag(ret_type);
  8043	
  8044		switch (base_type(ret_type)) {
  8045		case RET_INTEGER:
  8046			/* sets type to SCALAR_VALUE */
  8047			mark_reg_unknown(env, regs, BPF_REG_0);
  8048			break;
  8049		case RET_VOID:
  8050			regs[BPF_REG_0].type = NOT_INIT;
  8051			break;
  8052		case RET_PTR_TO_MAP_VALUE:
  8053			/* There is no offset yet applied, variable or fixed */
  8054			mark_reg_known_zero(env, regs, BPF_REG_0);
  8055			/* remember map_ptr, so that check_map_access()
  8056			 * can check 'value_size' boundary of memory access
  8057			 * to map element returned from bpf_map_lookup_elem()
  8058			 */
  8059			if (meta.map_ptr == NULL) {
  8060				verbose(env,
  8061					"kernel subsystem misconfigured verifier\n");
  8062				return -EINVAL;
  8063			}
  8064			regs[BPF_REG_0].map_ptr = meta.map_ptr;
  8065			regs[BPF_REG_0].map_uid = meta.map_uid;
  8066			regs[BPF_REG_0].type = PTR_TO_MAP_VALUE | ret_flag;
  8067			if (!type_may_be_null(ret_type) &&
  8068			    btf_record_has_field(meta.map_ptr->record, BPF_SPIN_LOCK)) {
  8069				regs[BPF_REG_0].id = ++env->id_gen;
  8070			}
  8071			break;
  8072		case RET_PTR_TO_SOCKET:
  8073			mark_reg_known_zero(env, regs, BPF_REG_0);
  8074			regs[BPF_REG_0].type = PTR_TO_SOCKET | ret_flag;
  8075			break;
  8076		case RET_PTR_TO_SOCK_COMMON:
  8077			mark_reg_known_zero(env, regs, BPF_REG_0);
  8078			regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON | ret_flag;
  8079			break;
  8080		case RET_PTR_TO_TCP_SOCK:
  8081			mark_reg_known_zero(env, regs, BPF_REG_0);
  8082			regs[BPF_REG_0].type = PTR_TO_TCP_SOCK | ret_flag;
  8083			break;
  8084		case RET_PTR_TO_MEM:
  8085			mark_reg_known_zero(env, regs, BPF_REG_0);
  8086			regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
  8087			regs[BPF_REG_0].mem_size = meta.mem_size;
  8088			break;
  8089		case RET_PTR_TO_MEM_OR_BTF_ID:
  8090		{
  8091			const struct btf_type *t;
  8092	
  8093			mark_reg_known_zero(env, regs, BPF_REG_0);
  8094			t = btf_type_skip_modifiers(meta.ret_btf, meta.ret_btf_id, NULL);
  8095			if (!btf_type_is_struct(t)) {
  8096				u32 tsize;
  8097				const struct btf_type *ret;
  8098				const char *tname;
  8099	
  8100				/* resolve the type size of ksym. */
  8101				ret = btf_resolve_size(meta.ret_btf, t, &tsize);
  8102				if (IS_ERR(ret)) {
  8103					tname = btf_name_by_offset(meta.ret_btf, t->name_off);
  8104					verbose(env, "unable to resolve the size of type '%s': %ld\n",
  8105						tname, PTR_ERR(ret));
  8106					return -EINVAL;
  8107				}
  8108				regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
  8109				regs[BPF_REG_0].mem_size = tsize;
  8110			} else {
  8111				/* MEM_RDONLY may be carried from ret_flag, but it
  8112				 * doesn't apply on PTR_TO_BTF_ID. Fold it, otherwise
  8113				 * it will confuse the check of PTR_TO_BTF_ID in
  8114				 * check_mem_access().
  8115				 */
  8116				ret_flag &= ~MEM_RDONLY;
  8117	
  8118				regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
  8119				regs[BPF_REG_0].btf = meta.ret_btf;
  8120				regs[BPF_REG_0].btf_id = meta.ret_btf_id;
  8121			}
  8122			break;
  8123		}
  8124		case RET_PTR_TO_BTF_ID:
  8125		{
  8126			struct btf *ret_btf;
  8127			int ret_btf_id;
  8128	
  8129			mark_reg_known_zero(env, regs, BPF_REG_0);
  8130			regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
  8131			if (func_id == BPF_FUNC_kptr_xchg) {
  8132				ret_btf = meta.kptr_field->kptr.btf;
  8133				ret_btf_id = meta.kptr_field->kptr.btf_id;
  8134			} else {
  8135				if (fn->ret_btf_id == BPF_PTR_POISON) {
  8136					verbose(env, "verifier internal error:");
  8137					verbose(env, "func %s has non-overwritten BPF_PTR_POISON return type\n",
  8138						func_id_name(func_id));
  8139					return -EINVAL;
  8140				}
  8141				ret_btf = btf_vmlinux;
  8142				ret_btf_id = *fn->ret_btf_id;
  8143			}
  8144			if (ret_btf_id == 0) {
  8145				verbose(env, "invalid return type %u of func %s#%d\n",
  8146					base_type(ret_type), func_id_name(func_id),
  8147					func_id);
  8148				return -EINVAL;
  8149			}
  8150			regs[BPF_REG_0].btf = ret_btf;
  8151			regs[BPF_REG_0].btf_id = ret_btf_id;
  8152			break;
  8153		}
  8154		default:
  8155			verbose(env, "unknown return type %u of func %s#%d\n",
  8156				base_type(ret_type), func_id_name(func_id), func_id);
  8157			return -EINVAL;
  8158		}
  8159	
  8160		if (type_may_be_null(regs[BPF_REG_0].type))
  8161			regs[BPF_REG_0].id = ++env->id_gen;
  8162	
  8163		if (helper_multiple_ref_obj_use(func_id, meta.map_ptr)) {
  8164			verbose(env, "verifier internal error: func %s#%d sets ref_obj_id more than once\n",
  8165				func_id_name(func_id), func_id);
  8166			return -EFAULT;
  8167		}
  8168	
  8169		if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
  8170			/* For release_reference() */
  8171			regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
  8172		} else if (is_acquire_function(func_id, meta.map_ptr)) {
  8173			int id = acquire_reference_state(env, insn_idx);
  8174	
  8175			if (id < 0)
  8176				return id;
  8177			/* For mark_ptr_or_null_reg() */
  8178			regs[BPF_REG_0].id = id;
  8179			/* For release_reference() */
  8180			regs[BPF_REG_0].ref_obj_id = id;
  8181		}
  8182	
  8183		do_refine_retval_range(regs, fn->ret_type, func_id, &meta);
  8184	
  8185		err = check_map_func_compatibility(env, meta.map_ptr, func_id);
  8186		if (err)
  8187			return err;
  8188	
  8189		if ((func_id == BPF_FUNC_get_stack ||
  8190		     func_id == BPF_FUNC_get_task_stack) &&
  8191		    !env->prog->has_callchain_buf) {
  8192			const char *err_str;
  8193

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ca7db2ce70b9..99b76202c3b6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6418,49 +6418,6 @@  static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		return err;
 
 skip_type_check:
-	if (arg_type_is_release(arg_type)) {
-		if (arg_type_is_dynptr(arg_type)) {
-			struct bpf_func_state *state = func(env, reg);
-			int spi;
-
-			/* Only dynptr created on stack can be released, thus
-			 * the get_spi and stack state checks for spilled_ptr
-			 * should only be done before process_dynptr_func for
-			 * PTR_TO_STACK.
-			 */
-			if (reg->type == PTR_TO_STACK) {
-				spi = get_spi(reg->off);
-				if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
-				    !state->stack[spi].spilled_ptr.ref_obj_id) {
-					verbose(env, "arg %d is an unacquired reference\n", regno);
-					return -EINVAL;
-				}
-			} else {
-				verbose(env, "cannot release unowned const bpf_dynptr\n");
-				return -EINVAL;
-			}
-		} else if (!reg->ref_obj_id && !register_is_null(reg)) {
-			verbose(env, "R%d must be referenced when passed to release function\n",
-				regno);
-			return -EINVAL;
-		}
-		if (meta->release_regno) {
-			verbose(env, "verifier internal error: more than one release argument\n");
-			return -EFAULT;
-		}
-		meta->release_regno = regno;
-	}
-
-	if (reg->ref_obj_id) {
-		if (meta->ref_obj_id) {
-			verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n",
-				regno, reg->ref_obj_id,
-				meta->ref_obj_id);
-			return -EFAULT;
-		}
-		meta->ref_obj_id = reg->ref_obj_id;
-	}
-
 	switch (base_type(arg_type)) {
 	case ARG_CONST_MAP_PTR:
 		/* bpf_map_xxx(map_ptr) call: remember that map_ptr */
@@ -6571,6 +6528,27 @@  static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		err = check_mem_size_reg(env, reg, regno, true, meta);
 		break;
 	case ARG_PTR_TO_DYNPTR:
+		if (meta->release_regno == regno) {
+			struct bpf_func_state *state = func(env, reg);
+			int spi;
+
+			/* Only dynptr created on stack can be released, thus
+			 * the get_spi and stack state checks for spilled_ptr
+			 * should only be done before process_dynptr_func for
+			 * PTR_TO_STACK.
+			 */
+			if (reg->type == PTR_TO_STACK) {
+				spi = get_spi(reg->off);
+				if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
+				    !state->stack[spi].spilled_ptr.ref_obj_id) {
+					verbose(env, "arg %d is an unacquired reference\n", regno);
+					return -EINVAL;
+				}
+			} else {
+				verbose(env, "cannot release unowned const bpf_dynptr\n");
+				return -EINVAL;
+			}
+		}
 		err = process_dynptr_func(env, regno, arg_type, meta);
 		if (err)
 			return err;
@@ -7706,10 +7684,91 @@  static void update_loop_inline_state(struct bpf_verifier_env *env, u32 subprogno
 				 state->callback_subprogno == subprogno);
 }
 
+/**
+ * args_find_ref_obj_id_regno() - Find regno that should become meta->ref_obj_id
+ * @env: Verifier env
+ * @regs: Regs to search for ref_obj_id
+ * @nargs: Number of arg regs to search
+ *
+ * Call arg meta's ref_obj_id is used to either:
+ * * For release funcs, keep track of ref that needs to be released
+ * * For other funcs, keep track of ref that needs to be propagated to retval
+ *
+ * Find the arg regno with nonzero ref_obj_id
+ *
+ * Return:
+ * * On success, regno that should become meta->ref_obj_id (regno > 0 since
+ *   BPF_REG_1 is first arg)
+ * * 0 if no arg had ref_obj_id set
+ * * -err if some invalid arg reg state
+ */
+static int args_find_ref_obj_id_regno(struct bpf_verifier_env *env, struct bpf_reg_state *regs,
+				      u32 nargs)
+{
+	struct bpf_reg_state *reg;
+	u32 i, regno, found_regno = 0;
+
+	for (i = 0; i < nargs; i++) {
+		regno = i + BPF_REG_1;
+		reg = &regs[regno];
+
+		if (!reg->ref_obj_id)
+			continue;
+
+		if (found_regno) {
+			verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n",
+				regno, reg->ref_obj_id, regs[found_regno].ref_obj_id);
+			return -EFAULT;
+		}
+
+		found_regno = regno;
+	}
+
+	return found_regno;
+}
+
+/**
+ * helper_proto_find_release_arg_regno() - Find OBJ_RELEASE arg in func proto
+ * @env: Verifier env
+ * @fn: Func proto to search for OBJ_RELEASE
+ * @nargs: Number of arg specs to search
+ *
+ * For helpers, to determine which arg reg should be released, loop through
+ * func proto arg specification to find arg with OBJ_RELEASE
+ *
+ * Return:
+ * * On success, regno of single OBJ_RELEASE arg
+ * * 0 if no arg in the proto was OBJ_RELEASE
+ * * -err if some invalid func proto state
+ */
+static int helper_proto_find_release_arg_regno(struct bpf_verifier_env *env,
+					       const struct bpf_func_proto *fn, u32 nargs)
+{
+	enum bpf_arg_type arg_type;
+	int i, release_regno = 0;
+
+	for (i = 0; i < nargs; i++) {
+		arg_type = fn->arg_type[i];
+
+		if (!arg_type_is_release(arg_type))
+			continue;
+
+		if (release_regno) {
+			verbose(env, "verifier internal error: more than one release argument\n");
+			return -EFAULT;
+		}
+
+		release_regno = i + BPF_REG_1;
+	}
+
+	return release_regno;
+}
+
 static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			     int *insn_idx_p)
 {
 	enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
+	int i, err, func_id, nargs, release_regno, ref_regno;
 	const struct bpf_func_proto *fn = NULL;
 	enum bpf_return_type ret_type;
 	enum bpf_type_flag ret_flag;
@@ -7717,7 +7776,6 @@  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	struct bpf_call_arg_meta meta;
 	int insn_idx = *insn_idx_p;
 	bool changes_data;
-	int i, err, func_id;
 
 	/* find function prototype */
 	func_id = insn->imm;
@@ -7781,8 +7839,37 @@  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	}
 
 	meta.func_id = func_id;
+	regs = cur_regs(env);
+
+	/* find actual arg count */
+	for (nargs = 0; nargs < MAX_BPF_FUNC_REG_ARGS; nargs++)
+		if (fn->arg_type[i] == ARG_DONTCARE)
+			break;
+
+	release_regno = helper_proto_find_release_arg_regno(env, fn, nargs);
+	if (release_regno < 0)
+		return release_regno;
+
+	ref_regno = args_find_ref_obj_id_regno(env, regs, nargs);
+	if (ref_regno < 0)
+		return ref_regno;
+	else if (ref_regno > 0)
+		meta.ref_obj_id = regs[ref_regno].ref_obj_id;
+
+	if (release_regno > 0) {
+		if (!regs[release_regno].ref_obj_id &&
+		    !register_is_null(&regs[release_regno]) &&
+		    !arg_type_is_dynptr(fn->arg_type[release_regno - BPF_REG_1])) {
+			verbose(env, "R%d must be referenced when passed to release function\n",
+				release_regno);
+			return -EINVAL;
+		}
+
+		meta.release_regno = release_regno;
+	}
+
 	/* check args */
-	for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
+	for (i = 0; i < nargs; i++) {
 		err = check_func_arg(env, i, &meta, fn);
 		if (err)
 			return err;
@@ -7806,8 +7893,6 @@  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			return err;
 	}
 
-	regs = cur_regs(env);
-
 	/* This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
 	 * be reinitialized by any dynptr helper. Hence, mark_stack_slots_dynptr
 	 * is safe to do directly.
@@ -8803,10 +8888,11 @@  static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env,
 static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta)
 {
 	const char *func_name = meta->func_name, *ref_tname;
+	struct bpf_reg_state *regs = cur_regs(env);
 	const struct btf *btf = meta->btf;
 	const struct btf_param *args;
+	int ret, ref_regno;
 	u32 i, nargs;
-	int ret;
 
 	args = (const struct btf_param *)(meta->func_proto + 1);
 	nargs = btf_type_vlen(meta->func_proto);
@@ -8816,17 +8902,31 @@  static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		return -EINVAL;
 	}
 
+	ref_regno = args_find_ref_obj_id_regno(env, cur_regs(env), nargs);
+	if (ref_regno < 0) {
+		return ref_regno;
+	} else if (!ref_regno && is_kfunc_release(meta)) {
+		verbose(env, "release kernel function %s expects refcounted PTR_TO_BTF_ID\n",
+			func_name);
+		return -EINVAL;
+	}
+
+	meta->ref_obj_id = regs[ref_regno].ref_obj_id;
+	if (is_kfunc_release(meta))
+		meta->release_regno = ref_regno;
+
 	/* Check that BTF function arguments match actual types that the
 	 * verifier sees.
 	 */
 	for (i = 0; i < nargs; i++) {
-		struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[i + 1];
 		const struct btf_type *t, *ref_t, *resolve_ret;
 		enum bpf_arg_type arg_type = ARG_DONTCARE;
 		u32 regno = i + 1, ref_id, type_size;
 		bool is_ret_buf_sz = false;
+		struct bpf_reg_state *reg;
 		int kf_arg_type;
 
+		reg = &regs[regno];
 		t = btf_type_skip_modifiers(btf, args[i].type, NULL);
 
 		if (is_kfunc_arg_ignore(btf, &args[i]))
@@ -8883,18 +8983,6 @@  static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			return -EINVAL;
 		}
 
-		if (reg->ref_obj_id) {
-			if (is_kfunc_release(meta) && meta->ref_obj_id) {
-				verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n",
-					regno, reg->ref_obj_id,
-					meta->ref_obj_id);
-				return -EFAULT;
-			}
-			meta->ref_obj_id = reg->ref_obj_id;
-			if (is_kfunc_release(meta))
-				meta->release_regno = regno;
-		}
-
 		ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
 		ref_tname = btf_name_by_offset(btf, ref_t->name_off);
 
@@ -8937,7 +9025,7 @@  static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			return -EFAULT;
 		}
 
-		if (is_kfunc_release(meta) && reg->ref_obj_id)
+		if (is_kfunc_release(meta) && regno == meta->release_regno)
 			arg_type |= OBJ_RELEASE;
 		ret = check_func_arg_reg_off(env, reg, regno, arg_type);
 		if (ret < 0)
@@ -9057,12 +9145,6 @@  static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		}
 	}
 
-	if (is_kfunc_release(meta) && !meta->release_regno) {
-		verbose(env, "release kernel function %s expects refcounted PTR_TO_BTF_ID\n",
-			func_name);
-		return -EINVAL;
-	}
-
 	return 0;
 }