From patchwork Wed Nov 27 15:33:00 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13887150
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
 Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v2 1/7] bpf: Consolidate locks and reference state in verifier state
Date: Wed, 27 Nov 2024 07:33:00 -0800
Message-ID: <20241127153306.1484562-2-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>

Currently, state for RCU read locks and preemption is in
bpf_verifier_state, while locks and pointer reference state remains in
bpf_func_state. There is no particular reason to keep the latter in
bpf_func_state. Additionally, it is copied into a new frame's state and
copied back to the caller frame's state every time the verifier
processes a pseudo call instruction. This is a bit wasteful, given this
state is global for a given verification state / path.

Move all resource and reference related state into the
bpf_verifier_state structure in this patch, in preparation for
introducing new reference state types in the future.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf_verifier.h |  11 ++--
 kernel/bpf/log.c             |  11 ++--
 kernel/bpf/verifier.c        | 112 ++++++++++++++++-------------------
 3 files changed, 64 insertions(+), 70 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index f4290c179bee..af64b5415df8 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -315,9 +315,6 @@ struct bpf_func_state {
 	u32 callback_depth;
 
 	/* The following fields should be last. See copy_func_state() */
-	int acquired_refs;
-	int active_locks;
-	struct bpf_reference_state *refs;
 	/* The state of the stack. Each element of the array describes BPF_REG_SIZE
 	 * (i.e. 8) bytes worth of stack memory.
 	 * stack[0] represents bytes [*(r10-8)..*(r10-1)]
@@ -419,9 +416,13 @@ struct bpf_verifier_state {
 	u32 insn_idx;
 	u32 curframe;
 
-	bool speculative;
+	struct bpf_reference_state *refs;
+	u32 acquired_refs;
+	u32 active_locks;
+	u32 active_preempt_locks;
 	bool active_rcu_lock;
-	u32 active_preempt_lock;
+
+	bool speculative;
 	/* If this state was ever pointed-to by other state's loop_entry field
 	 * this flag would be set to true. Used to avoid freeing such states
	 * while they are still in use.
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 4a858fdb6476..8b52e5b7504c 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -756,6 +756,7 @@ static void print_reg_state(struct bpf_verifier_env *env,
 void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_func_state *state,
 			  bool print_all)
 {
+	struct bpf_verifier_state *vstate = env->cur_state;
 	const struct bpf_reg_state *reg;
 	int i;
 
@@ -843,11 +844,11 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_func_st
 			break;
 		}
 	}
-	if (state->acquired_refs && state->refs[0].id) {
-		verbose(env, " refs=%d", state->refs[0].id);
-		for (i = 1; i < state->acquired_refs; i++)
-			if (state->refs[i].id)
-				verbose(env, ",%d", state->refs[i].id);
+	if (vstate->acquired_refs && vstate->refs[0].id) {
+		verbose(env, " refs=%d", vstate->refs[0].id);
+		for (i = 1; i < vstate->acquired_refs; i++)
+			if (vstate->refs[i].id)
+				verbose(env, ",%d", vstate->refs[i].id);
 	}
 	if (state->in_callback_fn)
 		verbose(env, " cb");
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1c4ebb326785..f8313e95eb8e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1279,15 +1279,17 @@ static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 	return arr ? arr : ZERO_SIZE_PTR;
 }
 
-static int copy_reference_state(struct bpf_func_state *dst, const struct bpf_func_state *src)
+static int copy_reference_state(struct bpf_verifier_state *dst, const struct bpf_verifier_state *src)
 {
 	dst->refs = copy_array(dst->refs, src->refs, src->acquired_refs,
 			       sizeof(struct bpf_reference_state), GFP_KERNEL);
 	if (!dst->refs)
 		return -ENOMEM;
 
-	dst->active_locks = src->active_locks;
 	dst->acquired_refs = src->acquired_refs;
+	dst->active_locks = src->active_locks;
+	dst->active_preempt_locks = src->active_preempt_locks;
+	dst->active_rcu_lock = src->active_rcu_lock;
 	return 0;
 }
 
@@ -1304,7 +1306,7 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
 	return 0;
 }
 
-static int resize_reference_state(struct bpf_func_state *state, size_t n)
+static int resize_reference_state(struct bpf_verifier_state *state, size_t n)
 {
 	state->refs = realloc_array(state->refs, state->acquired_refs, n,
 				    sizeof(struct bpf_reference_state));
@@ -1349,7 +1351,7 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state
  */
 static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 {
-	struct bpf_func_state *state = cur_func(env);
+	struct bpf_verifier_state *state = env->cur_state;
 	int new_ofs = state->acquired_refs;
 	int id, err;
 
@@ -1367,7 +1369,7 @@ static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
 			      int id, void *ptr)
 {
-	struct bpf_func_state *state = cur_func(env);
+	struct bpf_verifier_state *state = env->cur_state;
 	int new_ofs = state->acquired_refs;
 	int err;
 
@@ -1384,7 +1386,7 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum r
 }
 
 /* release function corresponding to acquire_reference_state(). Idempotent. */
-static int release_reference_state(struct bpf_func_state *state, int ptr_id)
+static int release_reference_state(struct bpf_verifier_state *state, int ptr_id)
 {
 	int i, last_idx;
 
@@ -1404,7 +1406,7 @@ static int release_reference_state(struct bpf_func_state *state, int ptr_id)
 	return -EINVAL;
 }
 
-static int release_lock_state(struct bpf_func_state *state, int type, int id, void *ptr)
+static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
 {
 	int i, last_idx;
 
@@ -1425,10 +1427,9 @@ static int release_lock_state(struct bpf_func_state *state, int type, int id, vo
 	return -EINVAL;
 }
 
-static struct bpf_reference_state *find_lock_state(struct bpf_verifier_env *env, enum ref_state_type type,
+static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *state, enum ref_state_type type,
 						   int id, void *ptr)
 {
-	struct bpf_func_state *state = cur_func(env);
 	int i;
 
 	for (i = 0; i < state->acquired_refs; i++) {
@@ -1447,7 +1448,6 @@ static void free_func_state(struct bpf_func_state *state)
 {
 	if (!state)
 		return;
-	kfree(state->refs);
 	kfree(state->stack);
 	kfree(state);
 }
@@ -1461,6 +1461,7 @@ static void free_verifier_state(struct bpf_verifier_state *state,
 		free_func_state(state->frame[i]);
 		state->frame[i] = NULL;
 	}
+	kfree(state->refs);
 	if (free_self)
 		kfree(state);
 }
@@ -1471,12 +1472,7 @@ static void free_verifier_state(struct bpf_verifier_state *state,
 static int copy_func_state(struct bpf_func_state *dst,
			   const struct bpf_func_state *src)
 {
-	int err;
-
-	memcpy(dst, src, offsetof(struct bpf_func_state, acquired_refs));
-	err = copy_reference_state(dst, src);
-	if (err)
-		return err;
+	memcpy(dst, src, offsetof(struct bpf_func_state, stack));
 	return copy_stack_state(dst, src);
 }
 
@@ -1493,9 +1489,10 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		free_func_state(dst_state->frame[i]);
 		dst_state->frame[i] = NULL;
 	}
+	err = copy_reference_state(dst_state, src);
+	if (err)
+		return err;
 	dst_state->speculative = src->speculative;
-	dst_state->active_rcu_lock = src->active_rcu_lock;
-	dst_state->active_preempt_lock = src->active_preempt_lock;
 	dst_state->in_sleepable = src->in_sleepable;
 	dst_state->curframe = src->curframe;
 	dst_state->branches = src->branches;
@@ -5496,7 +5493,7 @@ static bool in_sleepable(struct bpf_verifier_env *env)
 static bool in_rcu_cs(struct bpf_verifier_env *env)
 {
 	return env->cur_state->active_rcu_lock ||
-	       cur_func(env)->active_locks ||
+	       env->cur_state->active_locks ||
 	       !in_sleepable(env);
 }
 
@@ -7850,15 +7847,15 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
  * Since only one bpf_spin_lock is allowed the checks are simpler than
  * reg_is_refcounted() logic. The verifier needs to remember only
  * one spin_lock instead of array of acquired_refs.
- * cur_func(env)->active_locks remembers which map value element or allocated
+ * env->cur_state->active_locks remembers which map value element or allocated
  * object got locked and clears it after bpf_spin_unlock.
  */
 static int process_spin_lock(struct bpf_verifier_env *env, int regno,
 			     bool is_lock)
 {
 	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+	struct bpf_verifier_state *cur = env->cur_state;
 	bool is_const = tnum_is_const(reg->var_off);
-	struct bpf_func_state *cur = cur_func(env);
 	u64 val = reg->var_off.value;
 	struct bpf_map *map = NULL;
 	struct btf *btf = NULL;
@@ -7925,7 +7922,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno,
 			return -EINVAL;
 		}
 
-		if (release_lock_state(cur_func(env), REF_TYPE_LOCK, reg->id, ptr)) {
+		if (release_lock_state(env->cur_state, REF_TYPE_LOCK, reg->id, ptr)) {
 			verbose(env, "bpf_spin_unlock of different lock\n");
 			return -EINVAL;
 		}
@@ -9679,7 +9676,7 @@ static int release_reference(struct bpf_verifier_env *env,
 	struct bpf_reg_state *reg;
 	int err;
 
-	err = release_reference_state(cur_func(env), ref_obj_id);
+	err = release_reference_state(env->cur_state, ref_obj_id);
 	if (err)
 		return err;
 
@@ -9757,9 +9754,7 @@ static int setup_func_entry(struct bpf_verifier_env *env, int subprog, int calls
 			callsite,
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
-	/* Transfer references to the callee */
-	err = copy_reference_state(callee, caller);
-	err = err ?: set_callee_state_cb(env, caller, callee, callsite);
+	err = set_callee_state_cb(env, caller, callee, callsite);
 	if (err)
 		goto err_out;
 
@@ -9992,14 +9987,14 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		const char *sub_name = subprog_name(env, subprog);
 
 		/* Only global subprogs cannot be called with a lock held. */
-		if (cur_func(env)->active_locks) {
+		if (env->cur_state->active_locks) {
 			verbose(env, "global function calls are not allowed while holding a lock,\n"
 				     "use static function instead\n");
 			return -EINVAL;
 		}
 
 		/* Only global subprogs cannot be called with preemption disabled. */
-		if (env->cur_state->active_preempt_lock) {
+		if (env->cur_state->active_preempt_locks) {
 			verbose(env, "global function calls are not allowed with preemption disabled,\n"
 				     "use static function instead\n");
 			return -EINVAL;
 		}
@@ -10333,11 +10328,6 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 		caller->regs[BPF_REG_0] = *r0;
 	}
 
-	/* Transfer references to the caller */
-	err = copy_reference_state(caller, callee);
-	if (err)
-		return err;
-
 	/* for callbacks like bpf_loop or bpf_for_each_map_elem go back to callsite,
 	 * there function call logic would reschedule callback visit. If iteration
 	 * converges is_state_visited() would prune that visit eventually.
@@ -10502,11 +10492,11 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 
 static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit)
 {
-	struct bpf_func_state *state = cur_func(env);
+	struct bpf_verifier_state *state = env->cur_state;
 	bool refs_lingering = false;
 	int i;
 
-	if (!exception_exit && state->frameno)
+	if (!exception_exit && cur_func(env)->frameno)
 		return 0;
 
 	for (i = 0; i < state->acquired_refs; i++) {
@@ -10523,7 +10513,7 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit
 {
 	int err;
 
-	if (check_lock && cur_func(env)->active_locks) {
+	if (check_lock && env->cur_state->active_locks) {
 		verbose(env, "%s cannot be used inside bpf_spin_lock-ed region\n", prefix);
 		return -EINVAL;
 	}
@@ -10539,7 +10529,7 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit
 		return -EINVAL;
 	}
 
-	if (check_lock && env->cur_state->active_preempt_lock) {
+	if (check_lock && env->cur_state->active_preempt_locks) {
 		verbose(env, "%s cannot be used inside bpf_preempt_disable-ed region\n", prefix);
 		return -EINVAL;
 	}
@@ -10727,7 +10717,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
 	}
 
-	if (env->cur_state->active_preempt_lock) {
+	if (env->cur_state->active_preempt_locks) {
 		if (fn->might_sleep) {
 			verbose(env, "sleepable helper %s#%d in non-preemptible region\n",
 				func_id_name(func_id), func_id);
@@ -10784,7 +10774,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		struct bpf_func_state *state;
 		struct bpf_reg_state *reg;
 
-		err = release_reference_state(cur_func(env), ref_obj_id);
+		err = release_reference_state(env->cur_state, ref_obj_id);
 		if (!err) {
 			bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
 				if (reg->ref_obj_id == ref_obj_id) {
@@ -11746,7 +11736,7 @@ static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state
 {
 	struct btf_record *rec = reg_btf_record(reg);
 
-	if (!cur_func(env)->active_locks) {
+	if (!env->cur_state->active_locks) {
 		verbose(env, "verifier internal error: ref_set_non_owning w/o active lock\n");
 		return -EFAULT;
 	}
@@ -11765,12 +11755,11 @@ static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state
 
 static int ref_convert_owning_non_owning(struct bpf_verifier_env *env, u32 ref_obj_id)
 {
-	struct bpf_func_state *state, *unused;
+	struct bpf_verifier_state *state = env->cur_state;
+	struct bpf_func_state *unused;
 	struct bpf_reg_state *reg;
 	int i;
 
-	state = cur_func(env);
-
 	if (!ref_obj_id) {
 		verbose(env, "verifier internal error: ref_obj_id is zero for "
			     "owning -> non-owning conversion\n");
@@ -11860,9 +11849,9 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
 	}
 
 	id = reg->id;
-	if (!cur_func(env)->active_locks)
+	if (!env->cur_state->active_locks)
 		return -EINVAL;
-	s = find_lock_state(env, REF_TYPE_LOCK, id, ptr);
+	s = find_lock_state(env->cur_state, REF_TYPE_LOCK, id, ptr);
 	if (!s) {
 		verbose(env, "held lock and object are not in the same allocation\n");
 		return -EINVAL;
 	}
@@ -12789,17 +12778,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EINVAL;
 	}
 
-	if (env->cur_state->active_preempt_lock) {
+	if (env->cur_state->active_preempt_locks) {
 		if (preempt_disable) {
-			env->cur_state->active_preempt_lock++;
+			env->cur_state->active_preempt_locks++;
 		} else if (preempt_enable) {
-			env->cur_state->active_preempt_lock--;
+			env->cur_state->active_preempt_locks--;
 		} else if (sleepable) {
 			verbose(env, "kernel func %s is sleepable within non-preemptible region\n", func_name);
 			return -EACCES;
 		}
 	} else if (preempt_disable) {
-		env->cur_state->active_preempt_lock++;
+		env->cur_state->active_preempt_locks++;
 	} else if (preempt_enable) {
 		verbose(env, "unmatched attempt to enable preemption (kernel function %s)\n", func_name);
 		return -EINVAL;
@@ -15398,7 +15387,7 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 		 * No one could have freed the reference state before
 		 * doing the NULL check.
 		 */
-		WARN_ON_ONCE(release_reference_state(state, id));
+		WARN_ON_ONCE(release_reference_state(vstate, id));
 
 	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
 		mark_ptr_or_null_reg(state, reg, id, is_null);
@@ -17750,7 +17739,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 	return true;
 }
 
-static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
+static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur,
 		    struct bpf_idmap *idmap)
 {
 	int i;
@@ -17758,6 +17747,15 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
 	if (old->acquired_refs != cur->acquired_refs)
 		return false;
 
+	if (old->active_locks != cur->active_locks)
+		return false;
+
+	if (old->active_preempt_locks != cur->active_preempt_locks)
+		return false;
+
+	if (old->active_rcu_lock != cur->active_rcu_lock)
+		return false;
+
 	for (i = 0; i < old->acquired_refs; i++) {
 		if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap) ||
 		    old->refs[i].type != cur->refs[i].type)
@@ -17820,9 +17818,6 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
 	if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
 		return false;
 
-	if (!refsafe(old, cur, &env->idmap_scratch))
-		return false;
-
 	return true;
 }
 
@@ -17850,13 +17845,10 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->speculative && !cur->speculative)
 		return false;
 
-	if (old->active_rcu_lock != cur->active_rcu_lock)
-		return false;
-
-	if (old->active_preempt_lock != cur->active_preempt_lock)
+	if (old->in_sleepable != cur->in_sleepable)
 		return false;
 
-	if (old->in_sleepable != cur->in_sleepable)
+	if (!refsafe(old, cur, &env->idmap_scratch))
 		return false;
 
 	/* for states to be equal callsites have to be the same
@@ -18751,7 +18743,7 @@ static int do_check(struct bpf_verifier_env *env)
 				return -EINVAL;
 			}
 
-			if (cur_func(env)->active_locks) {
+			if (env->cur_state->active_locks) {
 				if ((insn->src_reg == BPF_REG_0 && insn->imm != BPF_FUNC_spin_unlock) ||
 				    (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
 				     (insn->off != 0 || !is_bpf_graph_api_kfunc(insn->imm)))) {

From patchwork Wed Nov 27 15:33:01 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13887151
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
 Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v2 2/7] bpf: Refactor {acquire,release}_reference_state
Date: Wed, 27 Nov 2024 07:33:01 -0800
Message-ID: <20241127153306.1484562-3-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>

In preparation for introducing support for more reference types which
have to add and remove reference state, refactor the
acquire_reference_state and release_reference_state functions to share
common logic.

The acquire_reference_state function simply handles growing the
acquired refs and returning the pointer to the new uninitialized
element, which can be filled in by the caller. The
release_reference_state function simply erases a reference state entry
in the acquired_refs array and shrinks it. The callers are responsible
for finding the suitable element by matching on various fields of the
reference state and requesting deletion through this function. It is
not supposed to be called directly.

Existing callers of release_reference_state were using it to find and
remove state for a given ref_obj_id without scrubbing the associated
registers in the verifier state. Introduce release_reference_nomark to
provide this functionality and convert callers. We now use this new
release_reference_nomark function within release_reference as well. It
needs to operate on a verifier state instead of taking a verifier env,
as mark_ptr_or_null_regs requires operating on the verifier state of
the two branches of a NULL condition check, so env->cur_state cannot be
used directly.
Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/verifier.c | 113 +++++++++++++++++++++++-------------------
 1 file changed, 63 insertions(+), 50 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f8313e95eb8e..474cca3e8f66 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -196,7 +196,8 @@ struct bpf_verifier_stack_elem {
 
 #define BPF_PRIV_STACK_MIN_SIZE		64
 
-static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx);
+static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
+static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
 static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
 static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
 static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
@@ -771,7 +772,7 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
 	if (clone_ref_obj_id)
 		id = clone_ref_obj_id;
 	else
-		id = acquire_reference_state(env, insn_idx);
+		id = acquire_reference(env, insn_idx);
 
 	if (id < 0)
 		return id;
@@ -1033,7 +1034,7 @@ static int mark_stack_slots_iter(struct bpf_verifier_env *env,
 	if (spi < 0)
 		return spi;
 
-	id = acquire_reference_state(env, insn_idx);
+	id = acquire_reference(env, insn_idx);
 	if (id < 0)
 		return id;
@@ -1349,77 +1350,69 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state
  * On success, returns a valid pointer id to associate with the register
  * On failure, returns a negative errno.
  */
-static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
+static struct bpf_reference_state *acquire_reference_state(struct bpf_verifier_env *env, int insn_idx, bool gen_id)
 {
 	struct bpf_verifier_state *state = env->cur_state;
 	int new_ofs = state->acquired_refs;
-	int id, err;
+	int err;
 
 	err = resize_reference_state(state, state->acquired_refs + 1);
 	if (err)
-		return err;
-	id = ++env->id_gen;
-	state->refs[new_ofs].type = REF_TYPE_PTR;
-	state->refs[new_ofs].id = id;
+		return NULL;
+	if (gen_id)
+		state->refs[new_ofs].id = ++env->id_gen;
 	state->refs[new_ofs].insn_idx = insn_idx;
 
-	return id;
+	return &state->refs[new_ofs];
+}
+
+static int acquire_reference(struct bpf_verifier_env *env, int insn_idx)
+{
+	struct bpf_reference_state *s;
+
+	s = acquire_reference_state(env, insn_idx, true);
+	if (!s)
+		return -ENOMEM;
+	s->type = REF_TYPE_PTR;
+	return s->id;
 }
 
 static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
 			      int id, void *ptr)
 {
 	struct bpf_verifier_state *state = env->cur_state;
-	int new_ofs = state->acquired_refs;
-	int err;
+	struct bpf_reference_state *s;
 
-	err = resize_reference_state(state, state->acquired_refs + 1);
-	if (err)
-		return err;
-	state->refs[new_ofs].type = type;
-	state->refs[new_ofs].id = id;
-	state->refs[new_ofs].insn_idx = insn_idx;
-	state->refs[new_ofs].ptr = ptr;
+	s = acquire_reference_state(env, insn_idx, false);
+	s->type = type;
+	s->id = id;
+	s->ptr = ptr;
 
 	state->active_locks++;
 	return 0;
 }
 
-/* release function corresponding to acquire_reference_state(). Idempotent. */
-static int release_reference_state(struct bpf_verifier_state *state, int ptr_id)
+static void release_reference_state(struct bpf_verifier_state *state, int idx)
 {
-	int i, last_idx;
+	int last_idx;
 
 	last_idx = state->acquired_refs - 1;
-	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].type != REF_TYPE_PTR)
-			continue;
-		if (state->refs[i].id == ptr_id) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
-			return 0;
-		}
-	}
-	return -EINVAL;
+	if (last_idx && idx != last_idx)
+		memcpy(&state->refs[idx], &state->refs[last_idx], sizeof(*state->refs));
+	memset(&state->refs[last_idx], 0, sizeof(*state->refs));
+	state->acquired_refs--;
+	return;
 }
 
 static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
 {
-	int i, last_idx;
+	int i;
 
-	last_idx = state->acquired_refs - 1;
 	for (i = 0; i < state->acquired_refs; i++) {
 		if (state->refs[i].type != type)
 			continue;
 		if (state->refs[i].id == id && state->refs[i].ptr == ptr) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
+			release_reference_state(state, i);
 			state->active_locks--;
 			return 0;
 		}
	}
@@ -9666,21 +9659,41 @@ static void mark_pkt_end(struct bpf_verifier_state *vstate, int regn, bool range
 	reg->range = AT_PKT_END;
 }
 
+static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id)
+{
+	int i;
+
+	for (i = 0; i < state->acquired_refs; i++) {
+		if (state->refs[i].type != REF_TYPE_PTR)
+			continue;
+		if (state->refs[i].id == ref_obj_id) {
+			release_reference_state(state, i);
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
 /* The pointer with the specified id has released its reference to kernel
 * resources. Identify all copies of the same pointer and clear the reference.
+ * + * This is the release function corresponding to acquire_reference(). Idempotent. + * The 'mark' boolean is used to optionally skip scrubbing registers matching + * the ref_obj_id, in case they need to be switched to some other type instead + * of havoc scalar value. */ -static int release_reference(struct bpf_verifier_env *env, - int ref_obj_id) +static int release_reference(struct bpf_verifier_env *env, int ref_obj_id) { + struct bpf_verifier_state *vstate = env->cur_state; struct bpf_func_state *state; struct bpf_reg_state *reg; int err; - err = release_reference_state(env->cur_state, ref_obj_id); + err = release_reference_nomark(vstate, ref_obj_id); if (err) return err; - bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({ + bpf_for_each_reg_in_vstate(vstate, state, reg, ({ if (reg->ref_obj_id == ref_obj_id) mark_reg_invalid(env, reg); })); @@ -10774,7 +10787,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn struct bpf_func_state *state; struct bpf_reg_state *reg; - err = release_reference_state(env->cur_state, ref_obj_id); + err = release_reference_nomark(env->cur_state, ref_obj_id); if (!err) { bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({ if (reg->ref_obj_id == ref_obj_id) { @@ -11107,7 +11120,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn /* For release_reference() */ regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id; } else if (is_acquire_function(func_id, meta.map_ptr)) { - int id = acquire_reference_state(env, insn_idx); + int id = acquire_reference(env, insn_idx); if (id < 0) return id; @@ -13087,7 +13100,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, } mark_btf_func_reg_size(env, BPF_REG_0, sizeof(void *)); if (is_kfunc_acquire(&meta)) { - int id = acquire_reference_state(env, insn_idx); + int id = acquire_reference(env, insn_idx); if (id < 0) return id; @@ -15387,7 +15400,7 @@ static void mark_ptr_or_null_regs(struct 
bpf_verifier_state *vstate, u32 regno, * No one could have freed the reference state before * doing the NULL check. */ - WARN_ON_ONCE(release_reference_state(vstate, id)); + WARN_ON_ONCE(release_reference_nomark(vstate, id)); bpf_for_each_reg_in_vstate(vstate, state, reg, ({ mark_ptr_or_null_reg(state, reg, id, is_null); From patchwork Wed Nov 27 15:33:02 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v2 3/7] bpf: Refactor mark_{dynptr,iter}_read
Date: Wed, 27 Nov 2024 07:33:02 -0800
Message-ID: <20241127153306.1484562-4-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>
There is a possibility of sharing code between mark_dynptr_read and mark_iter_read for updating liveness information of their stack slots. Consolidate common logic into mark_stack_slot_obj_read function in preparation for the next patch which needs the same logic for its own stack slots. Acked-by: Eduard Zingerman Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 43 +++++++++++++++++++++---------------------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 474cca3e8f66..be2365a9794a 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3192,10 +3192,27 @@ static int mark_reg_read(struct bpf_verifier_env *env, return 0; } -static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +static int mark_stack_slot_obj_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg, + int spi, int nr_slots) { struct bpf_func_state *state = func(env, reg); - int spi, ret; + int err, i; + + for (i = 0; i < nr_slots; i++) { + struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr; + + err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64); + if (err) + return err; + + mark_stack_slot_scratched(env, spi - i); + } + return 0; +} + +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + int spi; /* For CONST_PTR_TO_DYNPTR, it must have already been done by * check_reg_arg in check_helper_call and
mark_btf_func_reg_size in @@ -3210,31 +3227,13 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state * * bounds and spi is the first dynptr slot. Simply mark stack slot as * read. */ - ret = mark_reg_read(env, &state->stack[spi].spilled_ptr, - state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64); - if (ret) - return ret; - return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr, - state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64); + return mark_stack_slot_obj_read(env, reg, spi, BPF_DYNPTR_NR_SLOTS); } static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int spi, int nr_slots) { - struct bpf_func_state *state = func(env, reg); - int err, i; - - for (i = 0; i < nr_slots; i++) { - struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr; - - err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64); - if (err) - return err; - - mark_stack_slot_scratched(env, spi - i); - } - - return 0; + return mark_stack_slot_obj_read(env, reg, spi, nr_slots); } /* This function is supposed to be used by the following 32-bit optimization From patchwork Wed Nov 27 15:33:03 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v2 4/7] bpf: Introduce support for bpf_local_irq_{save,restore}
Date: Wed, 27 Nov 2024 07:33:03 -0800
Message-ID: <20241127153306.1484562-5-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>
Teach the verifier about IRQ-disabled
sections through the introduction of two new kfuncs: bpf_local_irq_save, to save the IRQ state and disable IRQs, and bpf_local_irq_restore, to restore the IRQ state and enable them again. For the purposes of tracking the saved IRQ state, the verifier is taught about a new special object on the stack of type STACK_IRQ_FLAG. This is an 8-byte value which saves the IRQ flags which are to be passed back to the IRQ restore kfunc. To track a dynamic number of IRQ-disabled regions and their associated saved states, a new resource type REF_TYPE_IRQ is introduced, with its state management functions: acquire_irq_state and release_irq_state, taking advantage of the refactoring and cleanups made in earlier commits. One notable requirement of the kernel's IRQ save and restore API is that they cannot happen out of order. For this purpose, when releasing a reference we keep track of the prev_id we saw with REF_TYPE_IRQ. Since reference states are inserted in increasing order of the index, this is used to remember the ordering of acquisitions of IRQ saved states, so that we maintain a logical stack in acquisition order of resource identities, and can enforce LIFO ordering when restoring IRQ state. The top of the stack is maintained using bpf_verifier_state's active_irq_id. The logic to detect initialized and uninitialized irq flag slots, marking and unmarking, is similar to how it's done for iterators. No additional checks are needed in refsafe for REF_TYPE_IRQ, apart from the usual check_id satisfiability check on the ref[i].id. We have to perform the same check_ids check on state->active_irq_id as well. The kfuncs themselves are plain wrappers over local_irq_save and local_irq_restore macros.
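The LIFO restore ordering described above can be sketched in plain C. This is a toy userspace model, not kernel code: struct state, struct ref, MAX_REFS and the simplified helpers below are invented for illustration, but the prev_id scan, the active_irq_id top-of-stack check, and the swap-remove are modeled directly on acquire_irq_state()/release_irq_state() from the patch.

```c
#include <assert.h>
#include <errno.h>

/* Toy model: every entry is an IRQ saved state; ids grow monotonically. */
#define MAX_REFS 16
#define REF_TYPE_IRQ 1

struct ref { int type; int id; };

struct state {
	struct ref refs[MAX_REFS];
	int acquired_refs;
	int active_irq_id;	/* id of the most recent IRQ save, 0 if none */
	int id_gen;
};

static int acquire_irq_state(struct state *s)
{
	struct ref *r = &s->refs[s->acquired_refs++];

	r->type = REF_TYPE_IRQ;
	r->id = ++s->id_gen;
	s->active_irq_id = r->id;	/* newest save becomes top of the stack */
	return r->id;
}

static int release_irq_state(struct state *s, int id)
{
	int prev_id = 0, i;

	/* Restores must happen in LIFO order: only the top may be released. */
	if (id != s->active_irq_id)
		return -EACCES;

	for (i = 0; i < s->acquired_refs; i++) {
		if (s->refs[i].type != REF_TYPE_IRQ)
			continue;
		if (s->refs[i].id == id) {
			/* swap-remove with the last entry, then pop the top */
			s->refs[i] = s->refs[--s->acquired_refs];
			s->active_irq_id = prev_id;
			return 0;
		}
		/* entries appear in acquisition order, so this remembers the
		 * next-to-top IRQ id that becomes active after the release
		 */
		prev_id = s->refs[i].id;
	}
	return -EINVAL;
}
```

Restoring a flag that is not the most recent save fails with -EACCES, which is the condition the verifier reports as "cannot restore irq state out of order".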
Acked-by: Eduard Zingerman Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf_verifier.h | 9 +- kernel/bpf/helpers.c | 17 +++ kernel/bpf/log.c | 1 + kernel/bpf/verifier.c | 279 ++++++++++++++++++++++++++++++++++- 4 files changed, 303 insertions(+), 3 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index af64b5415df8..81eebe449e6c 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -233,6 +233,7 @@ enum bpf_stack_slot_type { */ STACK_DYNPTR, STACK_ITER, + STACK_IRQ_FLAG, }; #define BPF_REG_SIZE 8 /* size of eBPF register in bytes */ @@ -254,8 +255,11 @@ struct bpf_reference_state { * default to pointer reference on zero initialization of a state. */ enum ref_state_type { - REF_TYPE_PTR = 0, - REF_TYPE_LOCK, + REF_TYPE_PTR = 0, + REF_TYPE_IRQ = (1 << 0), + + REF_TYPE_LOCK = (1 << 1), + REF_TYPE_LOCK_MASK = REF_TYPE_LOCK, } type; /* Track each reference created with a unique id, even if the same * instruction creates the reference multiple times (eg, via CALL). @@ -420,6 +424,7 @@ struct bpf_verifier_state { u32 acquired_refs; u32 active_locks; u32 active_preempt_locks; + u32 active_irq_id; bool active_rcu_lock; bool speculative; diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 751c150f9e1c..532ea74d4850 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -3057,6 +3057,21 @@ __bpf_kfunc int bpf_copy_from_user_str(void *dst, u32 dst__sz, const void __user return ret + 1; } +/* Keep unsigned long in prototype so that kfunc is usable when emitted to + * vmlinux.h in BPF programs directly, but note that while in BPF prog, the + * unsigned long always points to 8-byte region on stack, the kernel may only + * read and write the 4-bytes on 32-bit.
+ */ +__bpf_kfunc void bpf_local_irq_save(unsigned long *flags__irq_flag) +{ + local_irq_save(*flags__irq_flag); +} + +__bpf_kfunc void bpf_local_irq_restore(unsigned long *flags__irq_flag) +{ + local_irq_restore(*flags__irq_flag); +} + __bpf_kfunc_end_defs(); BTF_KFUNCS_START(generic_btf_ids) @@ -3149,6 +3164,8 @@ BTF_ID_FLAGS(func, bpf_get_kmem_cache) BTF_ID_FLAGS(func, bpf_iter_kmem_cache_new, KF_ITER_NEW | KF_SLEEPABLE) BTF_ID_FLAGS(func, bpf_iter_kmem_cache_next, KF_ITER_NEXT | KF_RET_NULL | KF_SLEEPABLE) BTF_ID_FLAGS(func, bpf_iter_kmem_cache_destroy, KF_ITER_DESTROY | KF_SLEEPABLE) +BTF_ID_FLAGS(func, bpf_local_irq_save) +BTF_ID_FLAGS(func, bpf_local_irq_restore) BTF_KFUNCS_END(common_btf_ids) static const struct btf_kfunc_id_set common_kfunc_set = { diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c index 8b52e5b7504c..434fc320ba1d 100644 --- a/kernel/bpf/log.c +++ b/kernel/bpf/log.c @@ -537,6 +537,7 @@ static char slot_type_char[] = { [STACK_ZERO] = '0', [STACK_DYNPTR] = 'd', [STACK_ITER] = 'i', + [STACK_IRQ_FLAG] = 'f' }; static void print_liveness(struct bpf_verifier_env *env, diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index be2365a9794a..2ff866749fe7 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -661,6 +661,11 @@ static int iter_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg, return stack_slot_obj_get_spi(env, reg, "iter", nr_slots); } +static int irq_flag_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + return stack_slot_obj_get_spi(env, reg, "irq_flag", 1); +} + static enum bpf_dynptr_type arg_to_dynptr_type(enum bpf_arg_type arg_type) { switch (arg_type & DYNPTR_TYPE_FLAG_MASK) { @@ -1156,10 +1161,126 @@ static int is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_s return 0; } +static int acquire_irq_state(struct bpf_verifier_env *env, int insn_idx); +static int release_irq_state(struct bpf_verifier_state *state, int id); + +static int 
mark_stack_slot_irq_flag(struct bpf_verifier_env *env, + struct bpf_kfunc_call_arg_meta *meta, + struct bpf_reg_state *reg, int insn_idx) +{ + struct bpf_func_state *state = func(env, reg); + struct bpf_stack_state *slot; + struct bpf_reg_state *st; + int spi, i, id; + + spi = irq_flag_get_spi(env, reg); + if (spi < 0) + return spi; + + id = acquire_irq_state(env, insn_idx); + if (id < 0) + return id; + + slot = &state->stack[spi]; + st = &slot->spilled_ptr; + + __mark_reg_known_zero(st); + st->type = PTR_TO_STACK; /* we don't have dedicated reg type */ + st->live |= REG_LIVE_WRITTEN; + st->ref_obj_id = id; + + for (i = 0; i < BPF_REG_SIZE; i++) + slot->slot_type[i] = STACK_IRQ_FLAG; + + mark_stack_slot_scratched(env, spi); + return 0; +} + +static int unmark_stack_slot_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + struct bpf_func_state *state = func(env, reg); + struct bpf_stack_state *slot; + struct bpf_reg_state *st; + int spi, i, err; + + spi = irq_flag_get_spi(env, reg); + if (spi < 0) + return spi; + + slot = &state->stack[spi]; + st = &slot->spilled_ptr; + + err = release_irq_state(env->cur_state, st->ref_obj_id); + WARN_ON_ONCE(err && err != -EACCES); + if (err) { + verbose(env, "cannot restore irq state out of order\n"); + return err; + } + + __mark_reg_not_init(env, st); + + /* see unmark_stack_slots_dynptr() for why we need to set REG_LIVE_WRITTEN */ + st->live |= REG_LIVE_WRITTEN; + + for (i = 0; i < BPF_REG_SIZE; i++) + slot->slot_type[i] = STACK_INVALID; + + mark_stack_slot_scratched(env, spi); + return 0; +} + +static bool is_irq_flag_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + struct bpf_func_state *state = func(env, reg); + struct bpf_stack_state *slot; + int spi, i; + + /* For -ERANGE (i.e. spi not falling into allocated stack slots), we + * will do check_mem_access to check and update stack bounds later, so + * return true for that case. 
+ */ + spi = irq_flag_get_spi(env, reg); + if (spi == -ERANGE) + return true; + if (spi < 0) + return false; + + slot = &state->stack[spi]; + + for (i = 0; i < BPF_REG_SIZE; i++) + if (slot->slot_type[i] == STACK_IRQ_FLAG) + return false; + return true; +} + +static int is_irq_flag_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + struct bpf_func_state *state = func(env, reg); + struct bpf_stack_state *slot; + struct bpf_reg_state *st; + int spi, i; + + spi = irq_flag_get_spi(env, reg); + if (spi < 0) + return -EINVAL; + + slot = &state->stack[spi]; + st = &slot->spilled_ptr; + + if (!st->ref_obj_id) + return -EINVAL; + + for (i = 0; i < BPF_REG_SIZE; i++) + if (slot->slot_type[i] != STACK_IRQ_FLAG) + return -EINVAL; + return 0; +} + /* Check if given stack slot is "special": * - spilled register state (STACK_SPILL); * - dynptr state (STACK_DYNPTR); * - iter state (STACK_ITER). + * - irq flag state (STACK_IRQ_FLAG) */ static bool is_stack_slot_special(const struct bpf_stack_state *stack) { @@ -1169,6 +1290,7 @@ static bool is_stack_slot_special(const struct bpf_stack_state *stack) case STACK_SPILL: case STACK_DYNPTR: case STACK_ITER: + case STACK_IRQ_FLAG: return true; case STACK_INVALID: case STACK_MISC: @@ -1291,6 +1413,7 @@ static int copy_reference_state(struct bpf_verifier_state *dst, const struct bpf dst->active_locks = src->active_locks; dst->active_preempt_locks = src->active_preempt_locks; dst->active_rcu_lock = src->active_rcu_lock; + dst->active_irq_id = src->active_irq_id; return 0; } @@ -1392,6 +1515,20 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum r return 0; } +static int acquire_irq_state(struct bpf_verifier_env *env, int insn_idx) +{ + struct bpf_verifier_state *state = env->cur_state; + struct bpf_reference_state *s; + + s = acquire_reference_state(env, insn_idx, true); + if (!s) + return -ENOMEM; + s->type = REF_TYPE_IRQ; + + state->active_irq_id = s->id; + return s->id; +} + static 
void release_reference_state(struct bpf_verifier_state *state, int idx) { int last_idx; @@ -1420,6 +1557,28 @@ static int release_lock_state(struct bpf_verifier_state *state, int type, int id return -EINVAL; } +static int release_irq_state(struct bpf_verifier_state *state, int id) +{ + u32 prev_id = 0; + int i; + + if (id != state->active_irq_id) + return -EACCES; + + for (i = 0; i < state->acquired_refs; i++) { + if (state->refs[i].type != REF_TYPE_IRQ) + continue; + if (state->refs[i].id == id) { + release_reference_state(state, i); + state->active_irq_id = prev_id; + return 0; + } else { + prev_id = state->refs[i].id; + } + } + return -EINVAL; +} + static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *state, enum ref_state_type type, int id, void *ptr) { @@ -1428,7 +1587,7 @@ static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *st for (i = 0; i < state->acquired_refs; i++) { struct bpf_reference_state *s = &state->refs[i]; - if (s->type == REF_TYPE_PTR || s->type != type) + if (!(s->type & REF_TYPE_LOCK_MASK) || s->type != type) continue; if (s->id == id && s->ptr == ptr) @@ -3236,6 +3395,16 @@ static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *re return mark_stack_slot_obj_read(env, reg, spi, nr_slots); } +static int mark_irq_flag_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + int spi; + + spi = irq_flag_get_spi(env, reg); + if (spi < 0) + return spi; + return mark_stack_slot_obj_read(env, reg, spi, 1); +} + /* This function is supposed to be used by the following 32-bit optimization * code only. It returns TRUE if the source or destination register operates * on 64-bit, otherwise return FALSE. 
@@ -10012,6 +10181,12 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EINVAL;
 	}
 
+	if (env->cur_state->active_irq_id) {
+		verbose(env, "global function calls are not allowed with IRQs disabled,\n"
+			     "use static function instead\n");
+		return -EINVAL;
+	}
+
 	if (err) {
 		verbose(env, "Caller passes invalid args into func#%d ('%s')\n",
 			subprog, sub_name);
@@ -10536,6 +10711,11 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit
 		return err;
 	}
 
+	if (check_lock && env->cur_state->active_irq_id) {
+		verbose(env, "%s cannot be used inside bpf_local_irq_save-ed region\n", prefix);
+		return -EINVAL;
+	}
+
 	if (check_lock && env->cur_state->active_rcu_lock) {
 		verbose(env, "%s cannot be used inside bpf_rcu_read_lock-ed region\n", prefix);
 		return -EINVAL;
@@ -10740,6 +10920,17 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
 	}
 
+	if (env->cur_state->active_irq_id) {
+		if (fn->might_sleep) {
+			verbose(env, "sleepable helper %s#%d in IRQ-disabled region\n",
+				func_id_name(func_id), func_id);
+			return -EINVAL;
+		}
+
+		if (in_sleepable(env) && is_storage_get_function(func_id))
+			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
+	}
+
 	meta.func_id = func_id;
 	/* check args */
 	for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
@@ -11301,6 +11492,11 @@ static bool is_kfunc_arg_const_str(const struct btf *btf, const struct btf_param
 	return btf_param_match_suffix(btf, arg, "__str");
 }
 
+static bool is_kfunc_arg_irq_flag(const struct btf *btf, const struct btf_param *arg)
+{
+	return btf_param_match_suffix(btf, arg, "__irq_flag");
+}
+
 static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
 					  const struct btf_param *arg,
 					  const char *name)
@@ -11454,6 +11650,7 @@ enum kfunc_ptr_arg_type {
 	KF_ARG_PTR_TO_CONST_STR,
 	KF_ARG_PTR_TO_MAP,
 	KF_ARG_PTR_TO_WORKQUEUE,
+	KF_ARG_PTR_TO_IRQ_FLAG,
 };
 
 enum special_kfunc_type {
@@ -11485,6 +11682,8 @@ enum special_kfunc_type {
 	KF_bpf_iter_css_task_new,
 	KF_bpf_session_cookie,
 	KF_bpf_get_kmem_cache,
+	KF_bpf_local_irq_save,
+	KF_bpf_local_irq_restore,
 };
 
 BTF_SET_START(special_kfunc_set)
@@ -11551,6 +11750,8 @@ BTF_ID(func, bpf_session_cookie)
 BTF_ID_UNUSED
 #endif
 BTF_ID(func, bpf_get_kmem_cache)
+BTF_ID(func, bpf_local_irq_save)
+BTF_ID(func, bpf_local_irq_restore)
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -11641,6 +11842,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	if (is_kfunc_arg_wq(meta->btf, &args[argno]))
 		return KF_ARG_PTR_TO_WORKQUEUE;
 
+	if (is_kfunc_arg_irq_flag(meta->btf, &args[argno]))
+		return KF_ARG_PTR_TO_IRQ_FLAG;
+
 	if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
 		if (!btf_type_is_struct(ref_t)) {
 			verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
@@ -11744,6 +11948,54 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
 	return 0;
 }
 
+static int process_irq_flag(struct bpf_verifier_env *env, int regno,
+			    struct bpf_kfunc_call_arg_meta *meta)
+{
+	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+	bool irq_save;
+	int err;
+
+	if (meta->func_id == special_kfunc_list[KF_bpf_local_irq_save]) {
+		irq_save = true;
+	} else if (meta->func_id == special_kfunc_list[KF_bpf_local_irq_restore]) {
+		irq_save = false;
+	} else {
+		verbose(env, "verifier internal error: unknown irq flags kfunc\n");
+		return -EFAULT;
+	}
+
+	if (irq_save) {
+		if (!is_irq_flag_reg_valid_uninit(env, reg)) {
+			verbose(env, "expected uninitialized irq flag as arg#%d\n", regno);
+			return -EINVAL;
+		}
+
+		err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+		if (err)
+			return err;
+
+		err = mark_stack_slot_irq_flag(env, meta, reg, env->insn_idx);
+		if (err)
+			return err;
+	} else {
+		err = is_irq_flag_reg_valid_init(env, reg);
+		if (err) {
+			verbose(env, "expected an initialized irq flag as arg#%d\n", regno);
+			return err;
+		}
+
+		err = mark_irq_flag_read(env, reg);
+		if (err)
+			return err;
+
+		err = unmark_stack_slot_irq_flag(env, reg);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
 static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct btf_record *rec = reg_btf_record(reg);
@@ -12332,6 +12584,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
 		case KF_ARG_PTR_TO_CONST_STR:
 		case KF_ARG_PTR_TO_WORKQUEUE:
+		case KF_ARG_PTR_TO_IRQ_FLAG:
 			break;
 		default:
 			WARN_ON_ONCE(1);
@@ -12626,6 +12879,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (ret < 0)
 				return ret;
 			break;
+		case KF_ARG_PTR_TO_IRQ_FLAG:
+			if (reg->type != PTR_TO_STACK) {
+				verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
+				return -EINVAL;
+			}
+			ret = process_irq_flag(env, regno, meta);
+			if (ret < 0)
+				return ret;
+			break;
 		}
 	}
 
@@ -12806,6 +13068,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EINVAL;
 	}
 
+	if (env->cur_state->active_irq_id && sleepable) {
+		verbose(env, "kernel func %s is sleepable within IRQ-disabled region\n", func_name);
+		return -EACCES;
+	}
+
 	/* In case of release function, we get register number of refcounted
 	 * PTR_TO_BTF_ID in bpf_kfunc_arg_meta, do the release now.
 	 */
@@ -17739,6 +18006,12 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 			    !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
 				return false;
 			break;
+		case STACK_IRQ_FLAG:
+			old_reg = &old->stack[spi].spilled_ptr;
+			cur_reg = &cur->stack[spi].spilled_ptr;
+			if (!check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
+				return false;
+			break;
 		case STACK_MISC:
 		case STACK_ZERO:
 		case STACK_INVALID:
@@ -17768,12 +18041,16 @@ static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *c
 	if (old->active_rcu_lock != cur->active_rcu_lock)
 		return false;
 
+	if (!check_ids(old->active_irq_id, cur->active_irq_id, idmap))
+		return false;
+
 	for (i = 0; i < old->acquired_refs; i++) {
 		if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap) ||
 		    old->refs[i].type != cur->refs[i].type)
 			return false;
 		switch (old->refs[i].type) {
 		case REF_TYPE_PTR:
+		case REF_TYPE_IRQ:
			break;
 		case REF_TYPE_LOCK:
 			if (old->refs[i].ptr != cur->refs[i].ptr)

From patchwork Wed Nov 27 15:33:04 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v2 5/7] bpf: Improve verifier log for resource leak on exit
Date: Wed, 27 Nov 2024 07:33:04 -0800
Message-ID: <20241127153306.1484562-6-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>

The verifier log when leaking resources on
BPF_EXIT may be a bit confusing, as it's a problem only when finally
exiting from the main prog, not from any of the subprogs. Hence, update
the verifier error string and the corresponding selftests matching on
it.

Suggested-by: Eduard Zingerman
Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/verifier.c                              |  2 +-
 .../testing/selftests/bpf/progs/exceptions_fail.c  |  4 ++--
 tools/testing/selftests/bpf/progs/preempt_lock.c   | 14 +++++++-------
 .../selftests/bpf/progs/verifier_spin_lock.c       |  2 +-
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2ff866749fe7..946bfc114664 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19088,7 +19088,7 @@ static int do_check(struct bpf_verifier_env *env)
 	 * match caller reference state when it exits.
 	 */
 	err = check_resource_leak(env, exception_exit, !env->cur_state->curframe,
-				  "BPF_EXIT instruction");
+				  "BPF_EXIT instruction in main prog");
 	if (err)
 		return err;
 
diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c
index fe0f3fa5aab6..8a0fdff89927 100644
--- a/tools/testing/selftests/bpf/progs/exceptions_fail.c
+++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c
@@ -131,7 +131,7 @@ int reject_subprog_with_lock(void *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_rcu_read_lock-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region")
 int reject_with_rcu_read_lock(void *ctx)
 {
 	bpf_rcu_read_lock();
@@ -147,7 +147,7 @@ __noinline static int throwing_subprog(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_rcu_read_lock-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region")
 int reject_subprog_with_rcu_read_lock(void *ctx)
 {
 	bpf_rcu_read_lock();
diff --git a/tools/testing/selftests/bpf/progs/preempt_lock.c b/tools/testing/selftests/bpf/progs/preempt_lock.c
index 885377e83607..5269571cf7b5 100644
--- a/tools/testing/selftests/bpf/progs/preempt_lock.c
+++ b/tools/testing/selftests/bpf/progs/preempt_lock.c
@@ -6,7 +6,7 @@
 #include "bpf_experimental.h"
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_1(struct __sk_buff *ctx)
 {
 	bpf_preempt_disable();
@@ -14,7 +14,7 @@ int preempt_lock_missing_1(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_2(struct __sk_buff *ctx)
 {
 	bpf_preempt_disable();
@@ -23,7 +23,7 @@ int preempt_lock_missing_2(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_3(struct __sk_buff *ctx)
 {
 	bpf_preempt_disable();
@@ -33,7 +33,7 @@ int preempt_lock_missing_3(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_3_minus_2(struct __sk_buff *ctx)
 {
 	bpf_preempt_disable();
@@ -55,7 +55,7 @@ static __noinline void preempt_enable(void)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_1_subprog(struct __sk_buff *ctx)
 {
 	preempt_disable();
@@ -63,7 +63,7 @@ int preempt_lock_missing_1_subprog(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_2_subprog(struct __sk_buff *ctx)
 {
 	preempt_disable();
@@ -72,7 +72,7 @@ int preempt_lock_missing_2_subprog(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region")
 int preempt_lock_missing_2_minus_1_subprog(struct __sk_buff *ctx)
 {
 	preempt_disable();
diff --git a/tools/testing/selftests/bpf/progs/verifier_spin_lock.c b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c
index 3f679de73229..25599eac9a70 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spin_lock.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c
@@ -187,7 +187,7 @@ l0_%=: r6 = r0;					\
 
 SEC("cgroup/skb")
 __description("spin_lock: test6 missing unlock")
-__failure __msg("BPF_EXIT instruction cannot be used inside bpf_spin_lock-ed region")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_spin_lock-ed region")
 __failure_unpriv __msg_unpriv("")
 __naked void spin_lock_test6_missing_unlock(void)
 {

From patchwork Wed Nov 27 15:33:05 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v2 6/7] selftests/bpf: Expand coverage of preempt tests to sleepable kfunc
Date: Wed, 27 Nov 2024 07:33:05 -0800
Message-ID: <20241127153306.1484562-7-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>

For preemption-related
kfuncs, we don't test their interaction with sleepable kfuncs (we do
test helpers) even though the verifier has code to protect against such
a pattern. Expand coverage of the selftest to include this case.

Acked-by: Eduard Zingerman
Signed-off-by: Kumar Kartikeya Dwivedi
---
 tools/testing/selftests/bpf/progs/preempt_lock.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/preempt_lock.c b/tools/testing/selftests/bpf/progs/preempt_lock.c
index 5269571cf7b5..788cf155d641 100644
--- a/tools/testing/selftests/bpf/progs/preempt_lock.c
+++ b/tools/testing/selftests/bpf/progs/preempt_lock.c
@@ -113,6 +113,18 @@ int preempt_sleepable_helper(void *ctx)
 	return 0;
 }
 
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("kernel func bpf_copy_from_user_str is sleepable within non-preemptible region")
+int preempt_sleepable_kfunc(void *ctx)
+{
+	u32 data;
+
+	bpf_preempt_disable();
+	bpf_copy_from_user_str(&data, sizeof(data), NULL, 0);
+	bpf_preempt_enable();
+	return 0;
+}
+
 int __noinline preempt_global_subprog(void)
 {
 	preempt_balance_subprog();

From patchwork Wed Nov 27 15:33:06 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v2 7/7] selftests/bpf: Add IRQ save/restore tests
Date: Wed, 27 Nov 2024 07:33:06 -0800
Message-ID: <20241127153306.1484562-8-memxor@gmail.com>
In-Reply-To: <20241127153306.1484562-1-memxor@gmail.com>
References: <20241127153306.1484562-1-memxor@gmail.com>

Include tests that check for rejection in erroneous
cases, like unbalanced IRQ-disabled counts, within and across subprogs,
invalid IRQ flag state or input to kfuncs, behavior upon overwriting IRQ
saved state on stack, interaction with sleepable kfuncs/helpers, global
functions, and out of order restore. Include some success scenarios as
well to demonstrate usage.

  #128/1   irq/irq_save_bad_arg:OK
  #128/2   irq/irq_restore_bad_arg:OK
  #128/3   irq/irq_restore_missing_2:OK
  #128/4   irq/irq_restore_missing_3:OK
  #128/5   irq/irq_restore_missing_3_minus_2:OK
  #128/6   irq/irq_restore_missing_1_subprog:OK
  #128/7   irq/irq_restore_missing_2_subprog:OK
  #128/8   irq/irq_restore_missing_3_subprog:OK
  #128/9   irq/irq_restore_missing_3_minus_2_subprog:OK
  #128/10  irq/irq_balance:OK
  #128/11  irq/irq_balance_n:OK
  #128/12  irq/irq_balance_subprog:OK
  #128/13  irq/irq_global_subprog:OK
  #128/14  irq/irq_restore_ooo:OK
  #128/15  irq/irq_restore_ooo_3:OK
  #128/16  irq/irq_restore_3_subprog:OK
  #128/17  irq/irq_restore_4_subprog:OK
  #128/18  irq/irq_restore_ooo_3_subprog:OK
  #128/19  irq/irq_restore_invalid:OK
  #128/20  irq/irq_save_invalid:OK
  #128/21  irq/irq_restore_iter:OK
  #128/22  irq/irq_save_iter:OK
  #128/23  irq/irq_flag_overwrite:OK
  #128/24  irq/irq_flag_overwrite_partial:OK
  #128/25  irq/irq_sleepable_helper:OK
  #128/26  irq/irq_sleepable_kfunc:OK
  #128     irq:OK
  Summary: 1/26 PASSED, 0 SKIPPED, 0 FAILED

Acked-by: Eduard Zingerman
Signed-off-by: Kumar Kartikeya Dwivedi
---
 .../selftests/bpf/prog_tests/verifier.c |   2 +
 tools/testing/selftests/bpf/progs/irq.c | 393 ++++++++++++++++++
 2 files changed, 395 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/irq.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index d9f65adb456b..b1b4d69c407a 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -98,6 +98,7 @@
 #include "verifier_xdp_direct_packet_access.skel.h"
 #include "verifier_bits_iter.skel.h"
 #include "verifier_lsm.skel.h"
+#include "irq.skel.h" #define MAX_ENTRIES 11 @@ -225,6 +226,7 @@ void test_verifier_xdp(void) { RUN(verifier_xdp); } void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); } void test_verifier_bits_iter(void) { RUN(verifier_bits_iter); } void test_verifier_lsm(void) { RUN(verifier_lsm); } +void test_irq(void) { RUN(irq); } void test_verifier_mtu(void) { diff --git a/tools/testing/selftests/bpf/progs/irq.c b/tools/testing/selftests/bpf/progs/irq.c new file mode 100644 index 000000000000..453024b59ce0 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/irq.c @@ -0,0 +1,393 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */ +#include +#include +#include "bpf_misc.h" + +unsigned long global_flags; + +SEC("?tc") +__failure __msg("arg#0 doesn't point to an irq flag on stack") +int irq_save_bad_arg(struct __sk_buff *ctx) +{ + bpf_local_irq_save(&global_flags); + return 0; +} + +SEC("?tc") +__failure __msg("arg#0 doesn't point to an irq flag on stack") +int irq_restore_bad_arg(struct __sk_buff *ctx) +{ + bpf_local_irq_restore(&global_flags); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_2(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_3(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + bpf_local_irq_save(&flags3); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_3_minus_2(struct __sk_buff *ctx) +{ + 
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_save(&flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_restore(&flags2);
+	return 0;
+}
+
+static __noinline void local_irq_save(unsigned long *flags)
+{
+	bpf_local_irq_save(flags);
+}
+
+static __noinline void local_irq_restore(unsigned long *flags)
+{
+	bpf_local_irq_restore(flags);
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_1_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	local_irq_save(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_2_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_3_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_3_minus_2_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	local_irq_restore(&flags3);
+	local_irq_restore(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__success
+int irq_balance(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	local_irq_save(&flags);
+	local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__success
+int irq_balance_n(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	local_irq_restore(&flags3);
+	local_irq_restore(&flags2);
+	local_irq_restore(&flags1);
+	return 0;
+}
+
+static __noinline void local_irq_balance(void)
+{
+	unsigned long flags;
+
+	local_irq_save(&flags);
+	local_irq_restore(&flags);
+}
+
+static __noinline void local_irq_balance_n(void)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	local_irq_restore(&flags3);
+	local_irq_restore(&flags2);
+	local_irq_restore(&flags1);
+}
+
+SEC("?tc")
+__success
+int irq_balance_subprog(struct __sk_buff *ctx)
+{
+	local_irq_balance();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("sleepable helper bpf_copy_from_user#")
+int irq_sleepable_helper(void *ctx)
+{
+	unsigned long flags;
+	u32 data;
+
+	local_irq_save(&flags);
+	bpf_copy_from_user(&data, sizeof(data), NULL);
+	local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("kernel func bpf_copy_from_user_str is sleepable within IRQ-disabled region")
+int irq_sleepable_kfunc(void *ctx)
+{
+	unsigned long flags;
+	u32 data;
+
+	local_irq_save(&flags);
+	bpf_copy_from_user_str(&data, sizeof(data), NULL, 0);
+	local_irq_restore(&flags);
+	return 0;
+}
+
+int __noinline global_local_irq_balance(void)
+{
+	local_irq_balance_n();
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("global function calls are not allowed with IRQs disabled")
+int irq_global_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	global_local_irq_balance();
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_ooo(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_restore(&flags1);
+	bpf_local_irq_restore(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_ooo_3(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_restore(&flags2);
+	bpf_local_irq_save(&flags3);
+	bpf_local_irq_restore(&flags1);
+	bpf_local_irq_restore(&flags3);
+	return 0;
+}
+
+static __noinline void local_irq_save_3(unsigned long *flags1, unsigned long *flags2,
+					unsigned long *flags3)
+{
+	local_irq_save(flags1);
+	local_irq_save(flags2);
+	local_irq_save(flags3);
+}
+
+SEC("?tc")
+__success
+int irq_restore_3_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save_3(&flags1, &flags2, &flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_restore(&flags2);
+	bpf_local_irq_restore(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_4_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+	unsigned long flags4;
+
+	local_irq_save_3(&flags1, &flags2, &flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_save(&flags4);
+	bpf_local_irq_restore(&flags4);
+	bpf_local_irq_restore(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_ooo_3_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save_3(&flags1, &flags2, &flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_restore(&flags2);
+	bpf_local_irq_save(&flags3);
+	bpf_local_irq_restore(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_restore_invalid(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags = 0xfaceb00c;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected uninitialized")
+int irq_save_invalid(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_restore_iter(struct __sk_buff *ctx)
+{
+	struct bpf_iter_num it;
+
+	bpf_iter_num_new(&it, 0, 42);
+	bpf_local_irq_restore((unsigned long *)&it);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference id=1")
+int irq_save_iter(struct __sk_buff *ctx)
+{
+	struct bpf_iter_num it;
+
+	/* Ensure same sized slot has st->ref_obj_id set, so we reject based on
+	 * slot_type != STACK_IRQ_FLAG...
+	 */
+	_Static_assert(sizeof(it) == sizeof(unsigned long), "broken iterator size");
+
+	bpf_iter_num_new(&it, 0, 42);
+	bpf_local_irq_save((unsigned long *)&it);
+	bpf_local_irq_restore((unsigned long *)&it);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_flag_overwrite(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	flags = 0xdeadbeef;
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_flag_overwrite_partial(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	*(((char *)&flags) + 1) = 0xff;
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
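As a side note for reviewers: the ordering discipline these tests exercise (restores must undo saves in strict LIFO order, and no save may remain active at program exit) can be sketched as a small user-space model. This is only an illustration of the rule, not the verifier implementation; all names below (`irq_model`, `model_save`, etc.) are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_SAVED 16

/* Model of the verifier's view: each bpf_local_irq_save() creates a
 * stack slot with a fresh id; only the innermost unreleased save may
 * be passed to bpf_local_irq_restore().
 */
struct irq_model {
	int stack[MAX_SAVED];	/* ids of active saves, innermost last */
	int depth;
	int next_id;
};

static bool model_save(struct irq_model *m, int *id_out)
{
	if (m->depth == MAX_SAVED)
		return false;
	*id_out = m->next_id++;
	m->stack[m->depth++] = *id_out;
	return true;
}

/* Mirrors "cannot restore irq state out of order": restoring anything
 * but the most recent active save is rejected.
 */
static bool model_restore(struct irq_model *m, int id)
{
	if (m->depth == 0 || m->stack[m->depth - 1] != id)
		return false;
	m->depth--;
	return true;
}

/* Mirrors the BPF_EXIT check: exiting with an active save is rejected. */
static bool model_exit_ok(const struct irq_model *m)
{
	return m->depth == 0;
}
```

Under this model, the irq_restore_ooo pattern (save f1, save f2, restore f1) fails at the first restore, while the irq_balance_n pattern (restore in reverse order of saves) succeeds and leaves the model empty at exit.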