From patchwork Thu Nov 21 00:53:23 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13881523
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v1 1/7] bpf: Refactor and rename resource management
Date: Wed, 20 Nov 2024 16:53:23 -0800
Message-ID: <20241121005329.408873-2-memxor@gmail.com>
In-Reply-To: <20241121005329.408873-1-memxor@gmail.com>
References: <20241121005329.408873-1-memxor@gmail.com>

With the commit f6b9a69a9e56 ("bpf: Refactor active
lock management"), we have begun using the acquired_refs array to also
store active lock metadata, as a way to consolidate and manage all
kernel resources that the program may acquire. This has begun to cause
confusion and duplication in existing code, where "reference" now means
both lock reference state and references to acquired kernel object
pointers. To clarify and improve the current state of affairs, and to
reduce code duplication, make the following changes:

Rename bpf_reference_state to bpf_resource_state, and begin using
"resource" as the umbrella term. This terminology matches what we use in
check_resource_leak. "Reference" now only means RES_TYPE_PTR, and its
usage and meaning are updated accordingly.

Next, factor out the common code paths for adding and removing resource
state into acquire_resource_state and erase_resource_state, and
implement type-specific resource handling on top of these common
functions.

Overall, this patch reduces the confusion and minimizes code duplication
as we prepare to introduce new resource types in subsequent patches.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf_verifier.h |  24 +++--
 kernel/bpf/log.c             |  10 +-
 kernel/bpf/verifier.c        | 173 +++++++++++++++++++----------------
 3 files changed, 108 insertions(+), 99 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index f4290c179bee..e5123b6804eb 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -249,20 +249,18 @@ struct bpf_stack_state {
 	u8 slot_type[BPF_REG_SIZE];
 };
 
-struct bpf_reference_state {
-	/* Each reference object has a type. Ensure REF_TYPE_PTR is zero to
-	 * default to pointer reference on zero initialization of a state.
-	 */
-	enum ref_state_type {
-		REF_TYPE_PTR = 0,
-		REF_TYPE_LOCK,
+struct bpf_resource_state {
+	enum res_state_type {
+		RES_TYPE_INV = -1,
+		RES_TYPE_PTR = 0,
+		RES_TYPE_LOCK,
 	} type;
-	/* Track each reference created with a unique id, even if the same
-	 * instruction creates the reference multiple times (eg, via CALL).
+	/* Track each resource created with a unique id, even if the same
+	 * instruction creates the resource multiple times (eg, via CALL).
 	 */
 	int id;
-	/* Instruction where the allocation of this reference occurred. This
-	 * is used purely to inform the user of a reference leak.
+	/* Instruction where the allocation of this resource occurred. This
+	 * is used purely to inform the user of a resource leak.
 	 */
 	int insn_idx;
 	/* Use to keep track of the source object of a lock, to ensure
@@ -315,9 +313,9 @@ struct bpf_func_state {
 	u32 callback_depth;
 	/* The following fields should be last. See copy_func_state() */
-	int acquired_refs;
+	int acquired_res;
 	int active_locks;
-	struct bpf_reference_state *refs;
+	struct bpf_resource_state *res;
 	/* The state of the stack. Each element of the array describes BPF_REG_SIZE
 	 * (i.e. 8) bytes worth of stack memory.
 	 * stack[0] represents bytes [*(r10-8)..*(r10-1)]
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 4a858fdb6476..0ad6f0737c57 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -843,11 +843,11 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_func_st
 			break;
 		}
 	}
-	if (state->acquired_refs && state->refs[0].id) {
-		verbose(env, " refs=%d", state->refs[0].id);
-		for (i = 1; i < state->acquired_refs; i++)
-			if (state->refs[i].id)
-				verbose(env, ",%d", state->refs[i].id);
+	if (state->acquired_res && state->res[0].id) {
+		verbose(env, " refs=%d", state->res[0].id);
+		for (i = 1; i < state->acquired_res; i++)
+			if (state->res[i].id)
+				verbose(env, ",%d", state->res[i].id);
 	}
 	if (state->in_callback_fn)
 		verbose(env, " cb");
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1c4ebb326785..c106720d0c62 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1279,15 +1279,15 @@ static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 	return arr ? arr : ZERO_SIZE_PTR;
 }
 
-static int copy_reference_state(struct bpf_func_state *dst, const struct bpf_func_state *src)
+static int copy_resource_state(struct bpf_func_state *dst, const struct bpf_func_state *src)
 {
-	dst->refs = copy_array(dst->refs, src->refs, src->acquired_refs,
-			       sizeof(struct bpf_reference_state), GFP_KERNEL);
-	if (!dst->refs)
+	dst->res = copy_array(dst->res, src->res, src->acquired_res,
+			      sizeof(struct bpf_resource_state), GFP_KERNEL);
+	if (!dst->res)
 		return -ENOMEM;
 
+	dst->acquired_res = src->acquired_res;
 	dst->active_locks = src->active_locks;
-	dst->acquired_refs = src->acquired_refs;
 	return 0;
 }
 
@@ -1304,14 +1304,14 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
 	return 0;
 }
 
-static int resize_reference_state(struct bpf_func_state *state, size_t n)
+static int resize_resource_state(struct bpf_func_state *state, size_t n)
 {
-	state->refs = realloc_array(state->refs, state->acquired_refs, n,
-				    sizeof(struct bpf_reference_state));
-	if (!state->refs)
+	state->res = realloc_array(state->res, state->acquired_res, n,
+				   sizeof(struct bpf_resource_state));
+	if (!state->res)
 		return -ENOMEM;
 
-	state->acquired_refs = n;
+	state->acquired_res = n;
 	return 0;
 }
 
@@ -1342,6 +1342,25 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state
 	return 0;
 }
 
+static struct bpf_resource_state *acquire_resource_state(struct bpf_verifier_env *env, int insn_idx, int *id)
+{
+	struct bpf_func_state *state = cur_func(env);
+	int new_ofs = state->acquired_res;
+	struct bpf_resource_state *s;
+	int err;
+
+	err = resize_resource_state(state, state->acquired_res + 1);
+	if (err)
+		return NULL;
+	s = &state->res[new_ofs];
+	s->type = RES_TYPE_INV;
+	if (id)
+		*id = s->id = ++env->id_gen;
+	s->insn_idx = insn_idx;
+
+	return s;
+}
+
 /* Acquire a pointer id from the env and update the state->refs to include
  * this new pointer reference.
  * On success, returns a valid pointer id to associate with the register
@@ -1349,55 +1368,52 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state
  */
 static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 {
-	struct bpf_func_state *state = cur_func(env);
-	int new_ofs = state->acquired_refs;
-	int id, err;
-
-	err = resize_reference_state(state, state->acquired_refs + 1);
-	if (err)
-		return err;
-	id = ++env->id_gen;
-	state->refs[new_ofs].type = REF_TYPE_PTR;
-	state->refs[new_ofs].id = id;
-	state->refs[new_ofs].insn_idx = insn_idx;
+	struct bpf_resource_state *s;
+	int id;
 
+	s = acquire_resource_state(env, insn_idx, &id);
+	if (!s)
+		return -ENOMEM;
+	s->type = RES_TYPE_PTR;
 	return id;
 }
 
-static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
+static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum res_state_type type,
 			      int id, void *ptr)
 {
 	struct bpf_func_state *state = cur_func(env);
-	int new_ofs = state->acquired_refs;
-	int err;
+	struct bpf_resource_state *s;
 
-	err = resize_reference_state(state, state->acquired_refs + 1);
-	if (err)
-		return err;
-	state->refs[new_ofs].type = type;
-	state->refs[new_ofs].id = id;
-	state->refs[new_ofs].insn_idx = insn_idx;
-	state->refs[new_ofs].ptr = ptr;
+	s = acquire_resource_state(env, insn_idx, NULL);
+	if (!s)
+		return -ENOMEM;
+	s->type = type;
+	s->id = id;
+	s->ptr = ptr;
 
 	state->active_locks++;
 	return 0;
 }
 
-/* release function corresponding to acquire_reference_state(). Idempotent. */
+static void erase_resource_state(struct bpf_func_state *state, int res_idx)
+{
+	int last_idx = state->acquired_res - 1;
+
+	if (last_idx && res_idx != last_idx)
+		memcpy(&state->res[res_idx], &state->res[last_idx], sizeof(*state->res));
+	memset(&state->res[last_idx], 0, sizeof(*state->res));
+	state->acquired_res--;
+}
+
 static int release_reference_state(struct bpf_func_state *state, int ptr_id)
 {
-	int i, last_idx;
+	int i;
 
-	last_idx = state->acquired_refs - 1;
-	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].type != REF_TYPE_PTR)
+	for (i = 0; i < state->acquired_res; i++) {
+		if (state->res[i].type != RES_TYPE_PTR)
 			continue;
-		if (state->refs[i].id == ptr_id) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
+		if (state->res[i].id == ptr_id) {
+			erase_resource_state(state, i);
 			return 0;
 		}
 	}
@@ -1406,18 +1422,13 @@ static int release_reference_state(struct bpf_func_state *state, int ptr_id)
 
 static int release_lock_state(struct bpf_func_state *state, int type, int id, void *ptr)
 {
-	int i, last_idx;
+	int i;
 
-	last_idx = state->acquired_refs - 1;
-	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].type != type)
+	for (i = 0; i < state->acquired_res; i++) {
+		if (state->res[i].type != type)
 			continue;
-		if (state->refs[i].id == id && state->refs[i].ptr == ptr) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
+		if (state->res[i].id == id && state->res[i].ptr == ptr) {
+			erase_resource_state(state, i);
 			state->active_locks--;
 			return 0;
 		}
@@ -1425,16 +1436,16 @@ static int release_lock_state(struct bpf_func_state *state, int type, int id, vo
 	return -EINVAL;
 }
 
-static struct bpf_reference_state *find_lock_state(struct bpf_verifier_env *env, enum ref_state_type type,
+static struct bpf_resource_state *find_lock_state(struct bpf_verifier_env *env, enum res_state_type type,
 						  int id, void *ptr)
 {
 	struct bpf_func_state *state = cur_func(env);
 	int i;
 
-	for (i = 0; i < state->acquired_refs; i++) {
-		struct bpf_reference_state *s = &state->refs[i];
+	for (i = 0; i < state->acquired_res; i++) {
+		struct bpf_resource_state *s = &state->res[i];
 
-		if (s->type == REF_TYPE_PTR || s->type != type)
+		if (s->type == RES_TYPE_PTR || s->type != type)
 			continue;
 
 		if (s->id == id && s->ptr == ptr)
@@ -1447,7 +1458,7 @@ static void free_func_state(struct bpf_func_state *state)
 {
 	if (!state)
 		return;
-	kfree(state->refs);
+	kfree(state->res);
 	kfree(state->stack);
 	kfree(state);
 }
@@ -1473,8 +1484,8 @@ static int copy_func_state(struct bpf_func_state *dst,
 {
 	int err;
 
-	memcpy(dst, src, offsetof(struct bpf_func_state, acquired_refs));
-	err = copy_reference_state(dst, src);
+	memcpy(dst, src, offsetof(struct bpf_func_state, acquired_res));
+	err = copy_resource_state(dst, src);
 	if (err)
 		return err;
 	return copy_stack_state(dst, src);
@@ -7907,7 +7918,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno,
 			"Locking two bpf_spin_locks are not allowed\n");
 		return -EINVAL;
 	}
-	err = acquire_lock_state(env, env->insn_idx, REF_TYPE_LOCK, reg->id, ptr);
+	err = acquire_lock_state(env, env->insn_idx, RES_TYPE_LOCK, reg->id, ptr);
 	if (err < 0) {
 		verbose(env, "Failed to acquire lock state\n");
 		return err;
@@ -7925,7 +7936,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno,
 		return -EINVAL;
 	}
 
-	if (release_lock_state(cur_func(env), REF_TYPE_LOCK, reg->id, ptr)) {
+	if (release_lock_state(cur_func(env), RES_TYPE_LOCK, reg->id, ptr)) {
 		verbose(env, "bpf_spin_unlock of different lock\n");
 		return -EINVAL;
 	}
@@ -9758,7 +9769,7 @@ static int setup_func_entry(struct bpf_verifier_env *env, int subprog, int calls
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
 	/* Transfer references to the callee */
-	err = copy_reference_state(callee, caller);
+	err = copy_resource_state(callee, caller);
 	err = err ?: set_callee_state_cb(env, caller, callee, callsite);
 	if (err)
 		goto err_out;
@@ -10334,7 +10345,7 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 	}
 
 	/* Transfer references to the caller */
-	err = copy_reference_state(caller, callee);
+	err = copy_resource_state(caller, callee);
 	if (err)
 		return err;
 
@@ -10509,11 +10520,11 @@ static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exi
 	if (!exception_exit && state->frameno)
 		return 0;
 
-	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].type != REF_TYPE_PTR)
+	for (i = 0; i < state->acquired_res; i++) {
+		if (state->res[i].type != RES_TYPE_PTR)
 			continue;
 		verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
-			state->refs[i].id, state->refs[i].insn_idx);
+			state->res[i].id, state->res[i].insn_idx);
 		refs_lingering = true;
 	}
 	return refs_lingering ? -EINVAL : 0;
 }
 
@@ -11777,8 +11788,8 @@ static int ref_convert_owning_non_owning(struct bpf_verifier_env *env, u32 ref_o
 		return -EFAULT;
 	}
 
-	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].id != ref_obj_id)
+	for (i = 0; i < state->acquired_res; i++) {
+		if (state->res[i].id != ref_obj_id)
 			continue;
 
 		/* Clear ref_obj_id here so release_reference doesn't clobber
@@ -11843,7 +11854,7 @@ static int ref_convert_owning_non_owning(struct bpf_verifier_env *env, u32 ref_o
  */
 static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
-	struct bpf_reference_state *s;
+	struct bpf_resource_state *s;
 	void *ptr;
 	u32 id;
 
@@ -11862,7 +11873,7 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
 	if (!cur_func(env)->active_locks)
 		return -EINVAL;
 
-	s = find_lock_state(env, REF_TYPE_LOCK, id, ptr);
+	s = find_lock_state(env, RES_TYPE_LOCK, id, ptr);
 	if (!s) {
 		verbose(env, "held lock and object are not in the same allocation\n");
 		return -EINVAL;
@@ -17750,27 +17761,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 	return true;
 }
 
-static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
+static bool ressafe(struct bpf_func_state *old, struct bpf_func_state *cur,
 		    struct bpf_idmap *idmap)
 {
 	int i;
 
-	if (old->acquired_refs != cur->acquired_refs)
+	if (old->acquired_res != cur->acquired_res)
 		return false;
 
-	for (i = 0; i < old->acquired_refs; i++) {
-		if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap) ||
-		    old->refs[i].type != cur->refs[i].type)
+	for (i = 0; i < old->acquired_res; i++) {
+		if (!check_ids(old->res[i].id, cur->res[i].id, idmap) ||
+		    old->res[i].type != cur->res[i].type)
 			return false;
-		switch (old->refs[i].type) {
-		case REF_TYPE_PTR:
+		switch (old->res[i].type) {
+		case RES_TYPE_PTR:
 			break;
-		case REF_TYPE_LOCK:
-			if (old->refs[i].ptr != cur->refs[i].ptr)
+		case RES_TYPE_LOCK:
+			if (old->res[i].ptr != cur->res[i].ptr)
 				return false;
 			break;
 		default:
-			WARN_ONCE(1, "Unhandled enum type for reference state: %d\n", old->refs[i].type);
+			WARN_ONCE(1, "Unhandled enum type for resource state: %d\n", old->res[i].type);
 			return false;
 		}
 	}
@@ -17820,7 +17831,7 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
 	if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
 		return false;
 
-	if (!refsafe(old, cur, &env->idmap_scratch))
+	if (!ressafe(old, cur, &env->idmap_scratch))
 		return false;
 
 	return true;

From patchwork Thu Nov 21 00:53:24 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13881524
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v1 2/7] bpf: Be consistent between {acquire,find,release}_lock_state
Date: Wed, 20 Nov 2024 16:53:24 -0800
Message-ID: <20241121005329.408873-3-memxor@gmail.com>
In-Reply-To: <20241121005329.408873-1-memxor@gmail.com>
References: <20241121005329.408873-1-memxor@gmail.com>

Both acquire_lock_state and release_lock_state take the bpf_func_state
as a parameter, while find_lock_state does not. Future patches will
require operating on a bpf_func_state other than cur_func(env) (for
resilient locks), so make the prototype consistent and take
bpf_func_state directly.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/verifier.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c106720d0c62..0ff436c06c13 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1436,10 +1436,9 @@ static int release_lock_state(struct bpf_func_state *state, int type, int id, vo
 	return -EINVAL;
 }
 
-static struct bpf_resource_state *find_lock_state(struct bpf_verifier_env *env, enum res_state_type type,
+static struct bpf_resource_state *find_lock_state(struct bpf_func_state *state, enum res_state_type type,
 						  int id, void *ptr)
 {
-	struct bpf_func_state *state = cur_func(env);
 	int i;
 
 	for (i = 0; i < state->acquired_res; i++) {
@@ -11873,7 +11872,7 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
 	if (!cur_func(env)->active_locks)
 		return -EINVAL;
 
-	s = find_lock_state(env, RES_TYPE_LOCK, id, ptr);
+	s = find_lock_state(cur_func(env), RES_TYPE_LOCK, id, ptr);
 	if (!s) {
 		verbose(env, "held lock and object are not in the same allocation\n");
 		return -EINVAL;

From patchwork Thu Nov 21 00:53:25 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13881525
X-Patchwork-Delegate: bpf@iogearbox.net
Received: from mail-wr1-f67.google.com (mail-wr1-f67.google.com [209.85.221.67]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 036BE2F2A for ; Thu, 21 Nov 2024 00:53:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.67 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732150418; cv=none; b=UvfHwfbyl9GyQru62bduzpNRXbyKgXKLk1DRSiKieCSJWa2w4+cbsva1ibUaVPovfc1l+zORRvNr0ptHaCKd46IRZuqTtnUfSsTe4xm2sjH49E1GwBl71wrjW8Zrbf9dHmtkF7UreTVQtuK7QlQc/K4pjNC4WJb/0CPQ62OBKrI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732150418; c=relaxed/simple; bh=b8tjPt3MWtFkErilVsIILgSbweRdo8W0KiAwJECfgkE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=C8SiVmXeREQw0R1NT6oxH1dGAzwFju4vPfZmRaJ+/oAzvo5iXPEnTPKPCQd6Q6dMOF0O6o4iIBWajUbax/5wQIpieGQc+EIWKW12U8iMMCtypsdocT3NJR1S+/mljkjPbJg3z5O3o/ORi6R1szqnsVMCNULiccwDogdmUsXbasI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=OLFhSCRl; arc=none smtp.client-ip=209.85.221.67 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="OLFhSCRl" Received: by mail-wr1-f67.google.com with SMTP id ffacd0b85a97d-38241435528so189324f8f.2 for ; Wed, 20 Nov 2024 16:53:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1732150415; x=1732755215; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Ez7TQzJKD8KdtjksJYcqlg5pJuJ8gA74Da2TBHhaJl0=; b=OLFhSCRlxwxF5lYnjhvqTA3ZJsfteiHIdiRnN+P9+2ks77nXPVD8W4zgYizhtvTVrB PtB6O5IfjTnlQv8vFZ0jmOcQtEQ4m6QdQ6kBVp14PX3crb4VK5euGn9Tf4lPaq3H+JGA DuDMm6UET4KlNJeD84ocRTWorfMa8/q8xybVyVj1Bwki+9H2S3Epmol29kMfATWtTJ/d TtUFpfuqe4YAkl3v5M8zJ0p288Ogo3Q9wxm1TboQkXoBo1UyRdaTUyl9qNcIKAIimj0/ sHFs966hAhYOMUZcge9G9wWu2O/I5x3thTJeHRma+jz5ArkK4bw1N54ayIt0wznf1mQo NscQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732150415; x=1732755215; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Ez7TQzJKD8KdtjksJYcqlg5pJuJ8gA74Da2TBHhaJl0=; b=pEUQEzymr8Kn3uUjYeQKqdymCc0AZyMfjtS0cHqPZEQdkhnBwC580STvZxEpM4bk4G zvihS1XLk7lPHFFw41kMj7WmBHEEsh1vIdB1X1anbAEiPCi2Jr6jcMcbh6Mf0Y7dyozb C+KfTkzguhFvGuxWyxltSKAdnRy+Rm8/SpHYA2r2qZzYwr0AjSUIxOA5fOJSJmzWmbI0 PBb+p1xe1FBiA2quXAymSggI6ezQeCs6PlOUtJRqM0YNjqMsp8hd65ivlbDruGTWZ+Xo S6pTTrbAKvbmc2C6iJbVDRhHOjQVtqmJMTeD7agCHXrpVlTcj+TPzD2FTK/xGnUdQohO VYmA== X-Gm-Message-State: AOJu0YySvOi4xWScfXnzPC6WmYcbNeq0zoaZtwZbLed3dfjvq5WrVfrY TJYoHsjDCmF39kKPrOFXDX5G8rRRD9OJUkQzR/JKYhRSV3Ur57TXMhwCJhIb6Jk= X-Google-Smtp-Source: AGHT+IHFo8pCfMUwj622xpFctQfjLnkXqVCakjUUDgyYOgtvrl4F9UmrEztmxpUynyKMDVA/xX930g== X-Received: by 2002:a5d:6da7:0:b0:382:512b:baff with SMTP id ffacd0b85a97d-38254b266a6mr4027408f8f.59.1732150414976; Wed, 20 Nov 2024 16:53:34 -0800 (PST) Received: from localhost (fwdproxy-cln-024.fbsv.net. 
[2a03:2880:31ff:18::face:b00c]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-3825494ca3fsm3406758f8f.111.2024.11.20.16.53.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Nov 2024 16:53:34 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: kkd@meta.com, Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Eduard Zingerman , kernel-team@fb.com Subject: [PATCH bpf-next v1 3/7] bpf: Consolidate RCU and preempt locks in bpf_func_state Date: Wed, 20 Nov 2024 16:53:25 -0800 Message-ID: <20241121005329.408873-4-memxor@gmail.com> X-Mailer: git-send-email 2.43.5 In-Reply-To: <20241121005329.408873-1-memxor@gmail.com> References: <20241121005329.408873-1-memxor@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=7478; h=from:subject; bh=b8tjPt3MWtFkErilVsIILgSbweRdo8W0KiAwJECfgkE=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBnPoQ2WCvqobNq0K4IgIMwfEKPSiBDqlPbysEkxcwX jsFLfnmJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCZz6ENgAKCRBM4MiGSL8Ryr+XD/ wI/iE60rLeQ358ckPmwfCGbGhaOLwAIOdypd3uE3PKlAjNi3DxZH2AUxyPCooD9c4LOmvYsM6r5dTd 9J5nkectM/94Y9aqcy0wVcyAgF6pVz0/C88/oJ7bAekgX7BWwK0CJbXy3bj6wF2vMtkvPgZC3u1I7w 9resjo/5Dz7/cFazt4wkJp8sNl5K1Y5HDKvGb4X8FfwehKbR05bgK6UVQi+y0EC+VpOaPUmnIWi2fa EmwSCsI0f9DznfNUea0iyH0yjiegzpl3Wep6iVCdEMrGPjfjDdAy0almUOMjPgPXipWzMLu/33gytO e581RS03nAXSmBSGIF2noTqrJLSL5RTP3iPTm/tS3yXYBM81JQKffZsFZr5tbBCXdRsHMZy94sY7J9 8nOCjt/Xh57GnGJiuqW9nP+gNXdCglzfo7EMdlAJwaHLmL46uq8me1xdSZm4/7vNRpv7357PQwNBLI 8u1OA9O+UKtRTsnLW0oFwzoDwYwyFNAndDA/2u9/aZQwFq+/CrhQ2dQbEDBp0OiDI5WGkG7uZu1u+W u+6aDZ1O1wXH/fx2LsX26Sd0wNaNlgiCA9XsOvpNUVjxN8jlwrv1C+mGoET0Qynz2BSd1aySpoKdAr mLOdSZuVONxnE8mx36UFflwz5eSw5sS58qJFEeX2FjRjuSUQhrH1meNZoe7w== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA X-Patchwork-Delegate: bpf@iogearbox.net To ensure consistency in resource 
handling, move the RCU and preemption state counters to bpf_func_state,
and convert all users to access them through cur_func(env). For the sake
of consistency, also compare active_locks in ressafe as a quick way to
eliminate iteration and entry matching when the number of locks is not
the same. On the other hand, comparing active_preempt_locks and
active_rcu_lock is needed for correctness: state exploration cannot be
avoided if these counters do not match, and not comparing them would
lead to problems since they lack an actual entry in the acquired_res
array.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf_verifier.h |  4 ++--
 kernel/bpf/verifier.c        | 46 ++++++++++++++++++++----------------
 2 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index e5123b6804eb..fa09538a35bc 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -315,6 +315,8 @@ struct bpf_func_state {
 	/* The following fields should be last. See copy_func_state() */
 	int acquired_res;
 	int active_locks;
+	int active_preempt_locks;
+	bool active_rcu_lock;
 	struct bpf_resource_state *res;
 	/* The state of the stack. Each element of the array describes BPF_REG_SIZE
 	 * (i.e. 8) bytes worth of stack memory.
@@ -418,8 +420,6 @@ struct bpf_verifier_state {
 	u32 curframe;
 	bool speculative;
 
-	bool active_rcu_lock;
-	u32 active_preempt_lock;
 	/* If this state was ever pointed-to by other state's loop_entry field
 	 * this flag would be set to true. Used to avoid freeing such states
 	 * while they are still in use.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0ff436c06c13..25c44b68f16a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1287,7 +1287,10 @@ static int copy_resource_state(struct bpf_func_state *dst, const struct bpf_func
 		return -ENOMEM;
 
 	dst->acquired_res = src->acquired_res;
+	dst->active_locks = src->active_locks;
+	dst->active_preempt_locks = src->active_preempt_locks;
+	dst->active_rcu_lock = src->active_rcu_lock;
 	return 0;
 }
 
@@ -1504,8 +1507,6 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		dst_state->frame[i] = NULL;
 	}
 	dst_state->speculative = src->speculative;
-	dst_state->active_rcu_lock = src->active_rcu_lock;
-	dst_state->active_preempt_lock = src->active_preempt_lock;
 	dst_state->in_sleepable = src->in_sleepable;
 	dst_state->curframe = src->curframe;
 	dst_state->branches = src->branches;
@@ -5505,7 +5506,7 @@ static bool in_sleepable(struct bpf_verifier_env *env)
  */
static bool in_rcu_cs(struct bpf_verifier_env *env)
 {
-	return env->cur_state->active_rcu_lock ||
+	return cur_func(env)->active_rcu_lock ||
 	       cur_func(env)->active_locks ||
 	       !in_sleepable(env);
 }
@@ -10009,7 +10010,7 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	/* Only global subprogs cannot be called with preemption disabled. */
-	if (env->cur_state->active_preempt_lock) {
+	if (cur_func(env)->active_preempt_locks) {
 		verbose(env, "global function calls are not allowed with preemption disabled,\n"
 			     "use static function instead\n");
 		return -EINVAL;
@@ -10544,12 +10545,12 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit
 			return err;
 	}
 
-	if (check_lock && env->cur_state->active_rcu_lock) {
+	if (check_lock && cur_func(env)->active_rcu_lock) {
 		verbose(env, "%s cannot be used inside bpf_rcu_read_lock-ed region\n", prefix);
 		return -EINVAL;
 	}
 
-	if (check_lock && env->cur_state->active_preempt_lock) {
+	if (check_lock && cur_func(env)->active_preempt_locks) {
 		verbose(env, "%s cannot be used inside bpf_preempt_disable-ed region\n", prefix);
 		return -EINVAL;
 	}
@@ -10726,7 +10727,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			return err;
 	}
 
-	if (env->cur_state->active_rcu_lock) {
+	if (cur_func(env)->active_rcu_lock) {
 		if (fn->might_sleep) {
 			verbose(env, "sleepable helper %s#%d in rcu_read_lock region\n",
 				func_id_name(func_id), func_id);
@@ -10737,7 +10738,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
 	}
 
-	if (env->cur_state->active_preempt_lock) {
+	if (cur_func(env)->active_preempt_locks) {
 		if (fn->might_sleep) {
 			verbose(env, "sleepable helper %s#%d in non-preemptible region\n",
 				func_id_name(func_id), func_id);
@@ -12767,7 +12768,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	preempt_disable = is_kfunc_bpf_preempt_disable(&meta);
 	preempt_enable = is_kfunc_bpf_preempt_enable(&meta);
 
-	if (env->cur_state->active_rcu_lock) {
+	if (cur_func(env)->active_rcu_lock) {
 		struct bpf_func_state *state;
 		struct bpf_reg_state *reg;
 		u32 clear_mask = (1 << STACK_SPILL) | (1 << STACK_ITER);
@@ -12787,29 +12788,29 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 					reg->type |= PTR_UNTRUSTED;
 				}
 			}));
-			env->cur_state->active_rcu_lock = false;
+			cur_func(env)->active_rcu_lock = false;
 		} else if (sleepable) {
 			verbose(env, "kernel func %s is sleepable within rcu_read_lock region\n", func_name);
 			return -EACCES;
 		}
 	} else if (rcu_lock) {
-		env->cur_state->active_rcu_lock = true;
+		cur_func(env)->active_rcu_lock = true;
 	} else if (rcu_unlock) {
 		verbose(env, "unmatched rcu read unlock (kernel function %s)\n", func_name);
 		return -EINVAL;
 	}
 
-	if (env->cur_state->active_preempt_lock) {
+	if (cur_func(env)->active_preempt_locks) {
 		if (preempt_disable) {
-			env->cur_state->active_preempt_lock++;
+			cur_func(env)->active_preempt_locks++;
 		} else if (preempt_enable) {
-			env->cur_state->active_preempt_lock--;
+			cur_func(env)->active_preempt_locks--;
 		} else if (sleepable) {
 			verbose(env, "kernel func %s is sleepable within non-preemptible region\n", func_name);
 			return -EACCES;
 		}
 	} else if (preempt_disable) {
-		env->cur_state->active_preempt_lock++;
+		cur_func(env)->active_preempt_locks++;
 	} else if (preempt_enable) {
 		verbose(env, "unmatched attempt to enable preemption (kernel function %s)\n", func_name);
 		return -EINVAL;
@@ -17768,6 +17769,15 @@ static bool ressafe(struct bpf_func_state *old, struct bpf_func_state *cur,
 	if (old->acquired_res != cur->acquired_res)
 		return false;
 
+	if (old->active_locks != cur->active_locks)
+		return false;
+
+	if (old->active_preempt_locks != cur->active_preempt_locks)
+		return false;
+
+	if (old->active_rcu_lock != cur->active_rcu_lock)
+		return false;
+
 	for (i = 0; i < old->acquired_res; i++) {
 		if (!check_ids(old->res[i].id, cur->res[i].id, idmap) ||
 		    old->res[i].type != cur->res[i].type)
@@ -17860,12 +17870,6 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->speculative && !cur->speculative)
 		return false;
 
-	if (old->active_rcu_lock != cur->active_rcu_lock)
-		return false;
-
-	if (old->active_preempt_lock != cur->active_preempt_lock)
-		return false;
-
 	if (old->in_sleepable != cur->in_sleepable)
 		return false;
From patchwork Thu Nov 21 00:53:26 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v1 4/7] bpf: Refactor mark_{dynptr,iter}_read
Date: Wed, 20 Nov 2024 16:53:26 -0800
Message-ID: <20241121005329.408873-5-memxor@gmail.com>
In-Reply-To: <20241121005329.408873-1-memxor@gmail.com>
References: <20241121005329.408873-1-memxor@gmail.com>

There is a possibility of sharing code between mark_dynptr_read
and mark_iter_read when updating the liveness information of their stack
slots. Consolidate the common logic into a mark_stack_slot_obj_read
function, in preparation for the next patch, which needs the same logic
for its own stack slots.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/verifier.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 25c44b68f16a..6cd2bbed4583 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3213,10 +3213,27 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 	return 0;
 }
 
-static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+static int mark_stack_slot_obj_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+				    int spi, int nr_slots)
 {
 	struct bpf_func_state *state = func(env, reg);
-	int spi, ret;
+	int err, i;
+
+	for (i = 0; i < nr_slots; i++) {
+		struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr;
+
+		err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64);
+		if (err)
+			return err;
+
+		mark_stack_slot_scratched(env, spi - i);
+	}
+	return 0;
+}
+
+static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	int spi;
 
 	/* For CONST_PTR_TO_DYNPTR, it must have already been done by
 	 * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
@@ -3231,31 +3248,13 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
 	 * bounds and spi is the first dynptr slot. Simply mark stack slot as
 	 * read.
 	 */
-	ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
-			    state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
-	if (ret)
-		return ret;
-	return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
-			     state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
+	return mark_stack_slot_obj_read(env, reg, spi, BPF_DYNPTR_NR_SLOTS);
 }
 
 static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 			  int spi, int nr_slots)
 {
-	struct bpf_func_state *state = func(env, reg);
-	int err, i;
-
-	for (i = 0; i < nr_slots; i++) {
-		struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr;
-
-		err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64);
-		if (err)
-			return err;
-
-		mark_stack_slot_scratched(env, spi - i);
-	}
-
-	return 0;
+	return mark_stack_slot_obj_read(env, reg, spi, nr_slots);
 }
 
 /* This function is supposed to be used by the following 32-bit optimization

From patchwork Thu Nov 21 00:53:27 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v1 5/7] bpf: Introduce support for bpf_local_irq_{save,restore}
Date: Wed, 20 Nov 2024 16:53:27 -0800
Message-ID: <20241121005329.408873-6-memxor@gmail.com>
In-Reply-To: <20241121005329.408873-1-memxor@gmail.com>
References: <20241121005329.408873-1-memxor@gmail.com>

Teach the verifier about IRQ-disabled
sections through the introduction of two new kfuncs: bpf_local_irq_save,
to save IRQ state and disable interrupts, and bpf_local_irq_restore, to
restore IRQ state and re-enable interrupts.

For the purpose of tracking the saved IRQ state, the verifier is taught
about a new special object on the stack of type STACK_IRQ_FLAG. This is
an 8-byte value which saves the IRQ flags that are to be passed back to
the IRQ restore kfunc.

To track a dynamic number of IRQ-disabled regions and their associated
saved states, a new resource type RES_TYPE_IRQ is introduced, along with
its state management functions acquire_irq_state and release_irq_state,
taking advantage of the refactoring and clean-ups made in earlier
commits.

One notable requirement of the kernel's IRQ save and restore API is that
the calls cannot happen out of order. For this purpose, resource state
is extended with a new type-specific member 'prev_id'. This is used to
remember the order in which IRQ saved states were acquired, so that we
maintain a logical stack of resource identities in acquisition order and
can enforce LIFO ordering when restoring IRQ state. The top of the stack
is maintained using bpf_func_state's active_irq_id.

The logic to detect initialized and uninitialized IRQ flag slots, and to
mark and unmark them, is similar to how it is done for iterators. We do
need to update ressafe to perform a check_ids-based satisfiability
check, and to additionally match prev_id for RES_TYPE_IRQ entries in the
resource array.

The kfuncs themselves are plain wrappers around the local_irq_save and
local_irq_restore macros.
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf_verifier.h |  19 ++-
 kernel/bpf/helpers.c         |  24 +++
 kernel/bpf/log.c             |   1 +
 kernel/bpf/verifier.c        | 283 ++++++++++++++++++++++++++++++++++-
 4 files changed, 322 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index fa09538a35bc..f44961dccbac 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -233,6 +233,7 @@ enum bpf_stack_slot_type {
 	 */
 	STACK_DYNPTR,
 	STACK_ITER,
+	STACK_IRQ_FLAG,
 };
 
 #define BPF_REG_SIZE 8	/* size of eBPF register in bytes */
@@ -253,6 +254,9 @@ struct bpf_resource_state {
 	enum res_state_type {
 		RES_TYPE_INV = -1,
 		RES_TYPE_PTR = 0,
+		RES_TYPE_IRQ,
+
+		__RES_TYPE_LOCK_BEGIN,
 		RES_TYPE_LOCK,
 	} type;
 	/* Track each resource created with a unique id, even if the same
@@ -263,10 +267,16 @@ struct bpf_resource_state {
 	 * is used purely to inform the user of a resource leak.
 	 */
 	int insn_idx;
-	/* Use to keep track of the source object of a lock, to ensure
-	 * it matches on unlock.
-	 */
-	void *ptr;
+	union {
+		/* Use to keep track of the source object of a lock, to ensure
+		 * it matches on unlock.
+		 */
+		void *ptr;
+		/* Track the reference id preceding the IRQ entry in acquisition
+		 * order, to enforce an ordering on the release.
+		 */
+		int prev_id;
+	};
 };
 
 struct bpf_retval_range {
@@ -317,6 +327,7 @@ struct bpf_func_state {
 	int active_locks;
 	int active_preempt_locks;
 	bool active_rcu_lock;
+	int active_irq_id;
 	struct bpf_resource_state *res;
 	/* The state of the stack. Each element of the array describes BPF_REG_SIZE
 	 * (i.e. 8) bytes worth of stack memory.
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 751c150f9e1c..302f0d5976be 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3057,6 +3057,28 @@ __bpf_kfunc int bpf_copy_from_user_str(void *dst, u32 dst__sz, const void __user
 	return ret + 1;
 }
 
+/* Keep unsigned long in prototype so that kfunc is usable when emitted to
+ * vmlinux.h in BPF programs directly, but since unsigned long may potentially
+ * be 4 byte, always cast to u64 when reading/writing from this pointer as it
+ * always points to an 8-byte memory region in BPF stack.
+ */
+__bpf_kfunc void bpf_local_irq_save(unsigned long *flags__irq_flag)
+{
+	u64 *ptr = (u64 *)flags__irq_flag;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	*ptr = flags;
+}
+
+__bpf_kfunc void bpf_local_irq_restore(unsigned long *flags__irq_flag)
+{
+	u64 *ptr = (u64 *)flags__irq_flag;
+	unsigned long flags = *ptr;
+
+	local_irq_restore(flags);
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(generic_btf_ids)
@@ -3149,6 +3171,8 @@ BTF_ID_FLAGS(func, bpf_get_kmem_cache)
 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_new, KF_ITER_NEW | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_next, KF_ITER_NEXT | KF_RET_NULL | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_destroy, KF_ITER_DESTROY | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_local_irq_save)
+BTF_ID_FLAGS(func, bpf_local_irq_restore)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 0ad6f0737c57..fc5520782e5d 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -537,6 +537,7 @@ static char slot_type_char[] = {
 	[STACK_ZERO]	= '0',
 	[STACK_DYNPTR]	= 'd',
 	[STACK_ITER]	= 'i',
+	[STACK_IRQ_FLAG] = 'f'
 };
 
 static void print_liveness(struct bpf_verifier_env *env,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6cd2bbed4583..67ffcbb963bd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -660,6 +660,11 @@ static int
iter_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 	return stack_slot_obj_get_spi(env, reg, "iter", nr_slots);
 }
 
+static int irq_flag_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	return stack_slot_obj_get_spi(env, reg, "irq_flag", 1);
+}
+
 static enum bpf_dynptr_type arg_to_dynptr_type(enum bpf_arg_type arg_type)
 {
 	switch (arg_type & DYNPTR_TYPE_FLAG_MASK) {
@@ -1155,10 +1160,126 @@ static int is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_s
 	return 0;
 }
 
+static int acquire_irq_state(struct bpf_verifier_env *env, int insn_idx);
+static int release_irq_state(struct bpf_func_state *state, int id);
+
+static int mark_stack_slot_irq_flag(struct bpf_verifier_env *env,
+				    struct bpf_kfunc_call_arg_meta *meta,
+				    struct bpf_reg_state *reg, int insn_idx)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	struct bpf_reg_state *st;
+	int spi, i, id;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
+
+	id = acquire_irq_state(env, insn_idx);
+	if (id < 0)
+		return id;
+
+	slot = &state->stack[spi];
+	st = &slot->spilled_ptr;
+
+	__mark_reg_known_zero(st);
+	st->type = PTR_TO_STACK; /* we don't have dedicated reg type */
+	st->live |= REG_LIVE_WRITTEN;
+	st->ref_obj_id = id;
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		slot->slot_type[i] = STACK_IRQ_FLAG;
+
+	mark_stack_slot_scratched(env, spi);
+	return 0;
+}
+
+static int unmark_stack_slot_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	struct bpf_reg_state *st;
+	int spi, i, err;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
+
+	slot = &state->stack[spi];
+	st = &slot->spilled_ptr;
+
+	err = release_irq_state(cur_func(env), st->ref_obj_id);
+	WARN_ON_ONCE(err && err != -EPROTO);
+	if (err) {
+		verbose(env, "cannot restore irq state out of order\n");
+		return err;
+	}
+
+	__mark_reg_not_init(env, st);
+
+	/* see unmark_stack_slots_dynptr() for why we need to set REG_LIVE_WRITTEN */
+	st->live |= REG_LIVE_WRITTEN;
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		slot->slot_type[i] = STACK_INVALID;
+
+	mark_stack_slot_scratched(env, spi - i);
+	return 0;
+}
+
+static bool is_irq_flag_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	int spi, i;
+
+	/* For -ERANGE (i.e. spi not falling into allocated stack slots), we
+	 * will do check_mem_access to check and update stack bounds later, so
+	 * return true for that case.
+	 */
+	spi = irq_flag_get_spi(env, reg);
+	if (spi == -ERANGE)
+		return true;
+	if (spi < 0)
+		return false;
+
+	slot = &state->stack[spi];
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		if (slot->slot_type[i] == STACK_IRQ_FLAG)
+			return false;
+	return true;
+}
+
+static int is_irq_flag_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	struct bpf_reg_state *st;
+	int spi, i;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return -EINVAL;
+
+	slot = &state->stack[spi];
+	st = &slot->spilled_ptr;
+
+	if (!st->ref_obj_id)
+		return -EINVAL;
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		if (slot->slot_type[i] != STACK_IRQ_FLAG)
+			return -EINVAL;
+	return 0;
+}
+
 /* Check if given stack slot is "special":
  * - spilled register state (STACK_SPILL);
  * - dynptr state (STACK_DYNPTR);
  * - iter state (STACK_ITER).
+ * - irq flag state (STACK_IRQ_FLAG)
  */
 static bool is_stack_slot_special(const struct bpf_stack_state *stack)
 {
@@ -1168,6 +1289,7 @@ static bool is_stack_slot_special(const struct bpf_stack_state *stack)
 	case STACK_SPILL:
 	case STACK_DYNPTR:
 	case STACK_ITER:
+	case STACK_IRQ_FLAG:
 		return true;
 	case STACK_INVALID:
 	case STACK_MISC:
@@ -1291,6 +1413,7 @@ static int copy_resource_state(struct bpf_func_state *dst, const struct bpf_func
 	dst->active_locks = src->active_locks;
 	dst->active_preempt_locks = src->active_preempt_locks;
 	dst->active_rcu_lock = src->active_rcu_lock;
+	dst->active_irq_id = src->active_irq_id;
 	return 0;
 }
 
@@ -1398,6 +1521,22 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum r
 	return 0;
 }
 
+static int acquire_irq_state(struct bpf_verifier_env *env, int insn_idx)
+{
+	struct bpf_func_state *state = cur_func(env);
+	struct bpf_resource_state *s;
+	int id;
+
+	s = acquire_resource_state(env, insn_idx, &id);
+	if (!s)
+		return -ENOMEM;
+	s->type = RES_TYPE_IRQ;
+	s->prev_id = state->active_irq_id;
+
+	state->active_irq_id = id;
+	return id;
+}
+
 static void erase_resource_state(struct bpf_func_state *state, int res_idx)
 {
 	int last_idx = state->acquired_res - 1;
@@ -1439,6 +1578,27 @@ static int release_lock_state(struct bpf_func_state *state, int type, int id, vo
 	return -EINVAL;
 }
 
+static int release_irq_state(struct bpf_func_state *state, int id)
+{
+	int i;
+
+	if (id != state->active_irq_id)
+		return -EPROTO;
+
+	for (i = 0; i < state->acquired_res; i++) {
+		if (state->res[i].type != RES_TYPE_IRQ)
+			continue;
+		if (state->res[i].id == id) {
+			int prev_id = state->res[i].prev_id;
+
+			erase_resource_state(state, i);
+			state->active_irq_id = prev_id;
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
 static struct bpf_resource_state *find_lock_state(struct bpf_func_state *state, enum res_state_type type,
 						  int id, void *ptr)
 {
@@ -1447,7 +1607,7 @@ static struct bpf_resource_state *find_lock_state(struct bpf_func_state *state,
 	for (i = 0; i < state->acquired_res; i++) {
 		struct bpf_resource_state *s = &state->res[i];
 
-		if (s->type == RES_TYPE_PTR || s->type != type)
+		if (s->type < __RES_TYPE_LOCK_BEGIN || s->type != type)
 			continue;
 
 		if (s->id == id && s->ptr == ptr)
@@ -3257,6 +3417,16 @@ static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *re
 	return mark_stack_slot_obj_read(env, reg, spi, nr_slots);
 }
 
+static int mark_irq_flag_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	int spi;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
+	return mark_stack_slot_obj_read(env, reg, spi, 1);
+}
+
 /* This function is supposed to be used by the following 32-bit optimization
  * code only. It returns TRUE if the source or destination register operates
  * on 64-bit, otherwise return FALSE.
@@ -10015,6 +10185,12 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EINVAL;
 	}
 
+	if (cur_func(env)->active_irq_id) {
+		verbose(env, "global function calls are not allowed with IRQs disabled,\n"
+			     "use static function instead\n");
+		return -EINVAL;
+	}
+
 	if (err) {
 		verbose(env, "Caller passes invalid args into func#%d ('%s')\n",
 			subprog, sub_name);
@@ -10544,6 +10720,11 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit
 			return err;
 	}
 
+	if (check_lock && cur_func(env)->active_irq_id) {
+		verbose(env, "%s cannot be used inside bpf_local_irq_save-ed region\n", prefix);
+		return -EINVAL;
+	}
+
 	if (check_lock && cur_func(env)->active_rcu_lock) {
 		verbose(env, "%s cannot be used inside bpf_rcu_read_lock-ed region\n", prefix);
 		return -EINVAL;
@@ -10748,6 +10929,17 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
 	}
 
+	if (cur_func(env)->active_irq_id) {
+		if (fn->might_sleep) {
+			verbose(env, "sleepable helper %s#%d in IRQ-disabled region\n",
+				func_id_name(func_id), func_id);
+			return -EINVAL;
+		}
+
if (in_sleepable(env) && is_storage_get_function(func_id)) + env->insn_aux_data[insn_idx].storage_get_func_atomic = true; + } + meta.func_id = func_id; /* check args */ for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) { @@ -11309,6 +11501,11 @@ static bool is_kfunc_arg_const_str(const struct btf *btf, const struct btf_param return btf_param_match_suffix(btf, arg, "__str"); } +static bool is_kfunc_arg_irq_flag(const struct btf *btf, const struct btf_param *arg) +{ + return btf_param_match_suffix(btf, arg, "__irq_flag"); +} + static bool is_kfunc_arg_scalar_with_name(const struct btf *btf, const struct btf_param *arg, const char *name) @@ -11462,6 +11659,7 @@ enum kfunc_ptr_arg_type { KF_ARG_PTR_TO_CONST_STR, KF_ARG_PTR_TO_MAP, KF_ARG_PTR_TO_WORKQUEUE, + KF_ARG_PTR_TO_IRQ_FLAG, }; enum special_kfunc_type { @@ -11493,6 +11691,8 @@ enum special_kfunc_type { KF_bpf_iter_css_task_new, KF_bpf_session_cookie, KF_bpf_get_kmem_cache, + KF_bpf_local_irq_save, + KF_bpf_local_irq_restore, }; BTF_SET_START(special_kfunc_set) @@ -11559,6 +11759,8 @@ BTF_ID(func, bpf_session_cookie) BTF_ID_UNUSED #endif BTF_ID(func, bpf_get_kmem_cache) +BTF_ID(func, bpf_local_irq_save) +BTF_ID(func, bpf_local_irq_restore) static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta) { @@ -11649,6 +11851,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, if (is_kfunc_arg_wq(meta->btf, &args[argno])) return KF_ARG_PTR_TO_WORKQUEUE; + if (is_kfunc_arg_irq_flag(meta->btf, &args[argno])) + return KF_ARG_PTR_TO_IRQ_FLAG; + if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) { if (!btf_type_is_struct(ref_t)) { verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n", @@ -11752,6 +11957,54 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env, return 0; } +static int process_irq_flag(struct bpf_verifier_env *env, int regno, + struct bpf_kfunc_call_arg_meta *meta) +{ + struct bpf_reg_state *regs = cur_regs(env), *reg = 
®s[regno]; + bool irq_save = false, irq_restore = false; + int err; + + if (meta->func_id == special_kfunc_list[KF_bpf_local_irq_save]) { + irq_save = true; + } else if (meta->func_id == special_kfunc_list[KF_bpf_local_irq_restore]) { + irq_restore = true; + } else { + verbose(env, "verifier internal error: unknown irq flags kfunc\n"); + return -EFAULT; + } + + if (irq_save) { + if (!is_irq_flag_reg_valid_uninit(env, reg)) { + verbose(env, "expected uninitialized irq flag as arg#%d\n", regno); + return -EINVAL; + } + + err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false); + if (err) + return err; + + err = mark_stack_slot_irq_flag(env, meta, reg, env->insn_idx); + if (err) + return err; + } else { + err = is_irq_flag_reg_valid_init(env, reg); + if (err) { + verbose(env, "expected an initialized irq flag as arg#%d\n", regno); + return err; + } + + err = mark_irq_flag_read(env, reg); + if (err) + return err; + + err = unmark_stack_slot_irq_flag(env, reg); + if (err) + return err; + } + return 0; +} + + static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct btf_record *rec = reg_btf_record(reg); @@ -12341,6 +12594,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ case KF_ARG_PTR_TO_REFCOUNTED_KPTR: case KF_ARG_PTR_TO_CONST_STR: case KF_ARG_PTR_TO_WORKQUEUE: + case KF_ARG_PTR_TO_IRQ_FLAG: break; default: WARN_ON_ONCE(1); @@ -12635,6 +12889,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ if (ret < 0) return ret; break; + case KF_ARG_PTR_TO_IRQ_FLAG: + if (reg->type != PTR_TO_STACK) { + verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i); + return -EINVAL; + } + ret = process_irq_flag(env, regno, meta); + if (ret < 0) + return ret; + break; } } @@ -12815,6 +13078,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, return -EINVAL; } + if (cur_func(env)->active_irq_id && 
sleepable) { + verbose(env, "kernel func %s is sleepable within IRQ-disabled region\n", func_name); + return -EACCES; + } + /* In case of release function, we get register number of refcounted * PTR_TO_BTF_ID in bpf_kfunc_arg_meta, do the release now. */ @@ -17748,6 +18016,12 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) return false; break; + case STACK_IRQ_FLAG: + old_reg = &old->stack[spi].spilled_ptr; + cur_reg = &cur->stack[spi].spilled_ptr; + if (!check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) + return false; + break; case STACK_MISC: case STACK_ZERO: case STACK_INVALID: @@ -17777,6 +18051,9 @@ static bool ressafe(struct bpf_func_state *old, struct bpf_func_state *cur, if (old->active_rcu_lock != cur->active_rcu_lock) return false; + if (!check_ids(old->active_irq_id, cur->active_irq_id, idmap)) + return false; + for (i = 0; i < old->acquired_res; i++) { if (!check_ids(old->res[i].id, cur->res[i].id, idmap) || old->res[i].type != cur->res[i].type) @@ -17784,6 +18061,10 @@ static bool ressafe(struct bpf_func_state *old, struct bpf_func_state *cur, switch (old->res[i].type) { case RES_TYPE_PTR: break; + case RES_TYPE_IRQ: + if (!check_ids(old->res[i].prev_id, cur->res[i].prev_id, idmap)) + return false; + break; case RES_TYPE_LOCK: if (old->res[i].ptr != cur->res[i].ptr) return false; From patchwork Thu Nov 21 00:53:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13881528 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-wr1-f66.google.com (mail-wr1-f66.google.com [209.85.221.66]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 95A2DD53C for ; Thu, 21 Nov 2024 00:53:41 +0000 (UTC) Authentication-Results: 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v1 6/7] selftests/bpf: Expand coverage of preempt tests to sleepable kfunc
Date: Wed, 20 Nov 2024 16:53:28 -0800
Message-ID: <20241121005329.408873-7-memxor@gmail.com>
In-Reply-To: <20241121005329.408873-1-memxor@gmail.com>
References: <20241121005329.408873-1-memxor@gmail.com>

For preemption-related kfuncs,
we don't test their interaction with sleepable kfuncs (we do test helpers), even though the verifier has code to protect against such a pattern. Expand coverage of the selftest to include this case.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 tools/testing/selftests/bpf/progs/preempt_lock.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/preempt_lock.c b/tools/testing/selftests/bpf/progs/preempt_lock.c
index 885377e83607..d24314c394c7 100644
--- a/tools/testing/selftests/bpf/progs/preempt_lock.c
+++ b/tools/testing/selftests/bpf/progs/preempt_lock.c
@@ -113,6 +113,18 @@ int preempt_sleepable_helper(void *ctx)
 	return 0;
 }
 
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("kernel func bpf_copy_from_user_str is sleepable within non-preemptible region")
+int preempt_sleepable_kfunc(void *ctx)
+{
+	u32 data;
+
+	bpf_preempt_disable();
+	bpf_copy_from_user_str(&data, sizeof(data), NULL, 0);
+	bpf_preempt_enable();
+	return 0;
+}
+
 int __noinline preempt_global_subprog(void)
 {
 	preempt_balance_subprog();

From patchwork Thu Nov 21 00:53:29 2024
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v1 7/7] selftests/bpf: Add IRQ save/restore tests
Date: Wed, 20 Nov 2024 16:53:29 -0800
Message-ID: <20241121005329.408873-8-memxor@gmail.com>
In-Reply-To: <20241121005329.408873-1-memxor@gmail.com>
References: <20241121005329.408873-1-memxor@gmail.com>
Include tests that check for rejection in erroneous cases, like unbalanced IRQ-disabled counts, within and across subprogs, invalid IRQ flag state or input to kfuncs, behavior upon overwriting IRQ saved state on stack, interaction with sleepable kfuncs/helpers, global functions, and out of order restore. Include some success scenarios as well to demonstrate usage.
  #123/1   irq/irq_restore_missing_1:OK
  #123/2   irq/irq_restore_missing_2:OK
  #123/3   irq/irq_restore_missing_3:OK
  #123/4   irq/irq_restore_missing_3_minus_2:OK
  #123/5   irq/irq_restore_missing_1_subprog:OK
  #123/6   irq/irq_restore_missing_2_subprog:OK
  #123/7   irq/irq_restore_missing_3_subprog:OK
  #123/8   irq/irq_restore_missing_3_minus_2_subprog:OK
  #123/9   irq/irq_balance:OK
  #123/10  irq/irq_balance_n:OK
  #123/11  irq/irq_balance_subprog:OK
  #123/12  irq/irq_balance_n_subprog:OK
  #123/13  irq/irq_global_subprog:OK
  #123/14  irq/irq_restore_ooo:OK
  #123/15  irq/irq_restore_ooo_3:OK
  #123/16  irq/irq_restore_3_subprog:OK
  #123/17  irq/irq_restore_4_subprog:OK
  #123/18  irq/irq_restore_ooo_3_subprog:OK
  #123/19  irq/irq_restore_invalid:OK
  #123/20  irq/irq_save_invalid:OK
  #123/21  irq/irq_restore_iter:OK
  #123/22  irq/irq_save_iter:OK
  #123/23  irq/irq_flag_overwrite:OK
  #123/24  irq/irq_flag_overwrite_partial:OK
  #123/25  irq/irq_sleepable_helper:OK
  #123/26  irq/irq_sleepable_kfunc:OK
  #123     irq:OK
  Summary: 1/26 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Kumar Kartikeya Dwivedi
---
 tools/testing/selftests/bpf/prog_tests/irq.c |   9 +
 tools/testing/selftests/bpf/progs/irq.c      | 393 +++++++++++++++++++
 2 files changed, 402 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/irq.c
 create mode 100644 tools/testing/selftests/bpf/progs/irq.c

diff --git a/tools/testing/selftests/bpf/prog_tests/irq.c b/tools/testing/selftests/bpf/prog_tests/irq.c
new file mode 100644
index 000000000000..496f4826ac37
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/irq.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
 */
+#include
+#include
+
+void test_irq(void)
+{
+	RUN_TESTS(irq);
+}

diff --git a/tools/testing/selftests/bpf/progs/irq.c b/tools/testing/selftests/bpf/progs/irq.c
new file mode 100644
index 000000000000..5301b66fc752
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/irq.c
@@ -0,0 +1,393 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#include
+#include
+#include "bpf_misc.h"
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_1(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_2(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_3(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_save(&flags3);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_3_minus_2(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_save(&flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_restore(&flags2);
+	return 0;
+}
+
+static __noinline void local_irq_save(unsigned long *flags)
+{
+	bpf_local_irq_save(flags);
+}
+
+static __noinline void local_irq_restore(unsigned long *flags)
+{
+	bpf_local_irq_restore(flags);
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_1_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	local_irq_save(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_2_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_3_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("BPF_EXIT instruction cannot be used inside bpf_local_irq_save-ed region")
+int irq_restore_missing_3_minus_2_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	local_irq_restore(&flags3);
+	local_irq_restore(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__success
+int irq_balance(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	local_irq_save(&flags);
+	local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__success
+int irq_balance_n(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	local_irq_restore(&flags3);
+	local_irq_restore(&flags2);
+	local_irq_restore(&flags1);
+	return 0;
+}
+
+static __noinline void local_irq_balance(void)
+{
+	unsigned long flags;
+
+	local_irq_save(&flags);
+	local_irq_restore(&flags);
+}
+
+static __noinline void local_irq_balance_n(void)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save(&flags1);
+	local_irq_save(&flags2);
+	local_irq_save(&flags3);
+	local_irq_restore(&flags3);
+	local_irq_restore(&flags2);
+	local_irq_restore(&flags1);
+}
+
+SEC("?tc")
+__success
+int irq_balance_subprog(struct __sk_buff *ctx)
+{
+	local_irq_balance();
+	return 0;
+}
+
+SEC("?tc")
+__success
+int irq_balance_n_subprog(struct __sk_buff *ctx)
+{
+	local_irq_balance_n();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("sleepable helper bpf_copy_from_user#")
+int irq_sleepable_helper(void *ctx)
+{
+	unsigned long flags;
+	u32 data;
+
+	local_irq_save(&flags);
+	bpf_copy_from_user(&data, sizeof(data), NULL);
+	local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("kernel func bpf_copy_from_user_str is sleepable within IRQ-disabled region")
+int irq_sleepable_kfunc(void *ctx)
+{
+	unsigned long flags;
+	u32 data;
+
+	local_irq_save(&flags);
+	bpf_copy_from_user_str(&data, sizeof(data), NULL, 0);
+	local_irq_restore(&flags);
+	return 0;
+}
+
+int __noinline global_local_irq_balance(void)
+{
+	local_irq_balance_n();
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("global function calls are not allowed with IRQs disabled")
+int irq_global_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	global_local_irq_balance();
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_ooo(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_restore(&flags1);
+	bpf_local_irq_restore(&flags2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_ooo_3(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags2);
+	bpf_local_irq_restore(&flags2);
+	bpf_local_irq_save(&flags3);
+	bpf_local_irq_restore(&flags1);
+	bpf_local_irq_restore(&flags3);
+	return 0;
+}
+
+static __noinline void local_irq_save_3(unsigned long *flags1, unsigned long *flags2,
+					unsigned long *flags3)
+{
+	local_irq_save(flags1);
+	local_irq_save(flags2);
+	local_irq_save(flags3);
+}
+
+SEC("?tc")
+__success
+int irq_restore_3_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save_3(&flags1, &flags2, &flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_restore(&flags2);
+	bpf_local_irq_restore(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_4_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+	unsigned long flags4;
+
+	local_irq_save_3(&flags1, &flags2, &flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_save(&flags4);
+	bpf_local_irq_restore(&flags4);
+	bpf_local_irq_restore(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot restore irq state out of order")
+int irq_restore_ooo_3_subprog(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags2;
+	unsigned long flags3;
+
+	local_irq_save_3(&flags1, &flags2, &flags3);
+	bpf_local_irq_restore(&flags3);
+	bpf_local_irq_restore(&flags2);
+	bpf_local_irq_save(&flags3);
+	bpf_local_irq_restore(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_restore_invalid(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+	unsigned long flags = 0xfaceb00c;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected uninitialized")
+int irq_save_invalid(struct __sk_buff *ctx)
+{
+	unsigned long flags1;
+
+	bpf_local_irq_save(&flags1);
+	bpf_local_irq_save(&flags1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_restore_iter(struct __sk_buff *ctx)
+{
+	struct bpf_iter_num it;
+
+	bpf_iter_num_new(&it, 0, 42);
+	bpf_local_irq_restore((unsigned long *)&it);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference id=1")
+int irq_save_iter(struct __sk_buff *ctx)
+{
+	struct bpf_iter_num it;
+
+	/* Ensure same sized slot has st->ref_obj_id set, so we reject based on
+	 * slot_type != STACK_IRQ_FLAG...
+	 */
+	_Static_assert(sizeof(it) == sizeof(unsigned long), "broken iterator size");
+
+	bpf_iter_num_new(&it, 0, 42);
+	bpf_local_irq_save((unsigned long *)&it);
+	bpf_local_irq_restore((unsigned long *)&it);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_flag_overwrite(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	flags = 0xdeadbeef;
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("expected an initialized")
+int irq_flag_overwrite_partial(struct __sk_buff *ctx)
+{
+	unsigned long flags;
+
+	bpf_local_irq_save(&flags);
+	*(((char *)&flags) + 1) = 0xff;
+	bpf_local_irq_restore(&flags);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
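As a side note for readers of the series: the discipline the verifier enforces here (each bpf_local_irq_save() acquires a state whose prev_id links to the previously active one; bpf_local_irq_restore() is only accepted for the currently active id, which is what the "cannot restore irq state out of order" selftests exercise) can be sketched as a tiny standalone C model. This is an illustrative sketch only, not kernel code; the struct names, the id allocation, and the return values are invented for the example and merely mirror the shape of acquire_irq_state()/release_irq_state().

```c
#include <assert.h>
#include <string.h>

#define MAX_RES 8

/* One acquired IRQ-save state: its id and the id active before it. */
struct res {
	int id;
	int prev_id;
};

struct state {
	struct res res[MAX_RES];
	int acquired;		/* number of live entries in res[] */
	int active_irq_id;	/* 0 means IRQs are not disabled */
	int next_id;		/* monotonically increasing id source */
};

/* Model of a save: push a new state linked to the previous active one. */
static int irq_save(struct state *st)
{
	struct res *s;

	if (st->acquired >= MAX_RES)
		return -1;	/* sketch-only bound; the verifier grows its array */
	s = &st->res[st->acquired++];
	s->id = ++st->next_id;
	s->prev_id = st->active_irq_id;
	st->active_irq_id = s->id;
	return s->id;
}

/* Model of a restore: only the currently active id may be released,
 * otherwise this is an out-of-order restore and is rejected.
 */
static int irq_restore(struct state *st, int id)
{
	int i;

	if (id != st->active_irq_id)
		return -1;	/* "cannot restore irq state out of order" */
	for (i = 0; i < st->acquired; i++) {
		if (st->res[i].id == id) {
			st->active_irq_id = st->res[i].prev_id;
			/* erase by swapping with the last entry, like
			 * erase_resource_state() does in the patch
			 */
			st->res[i] = st->res[--st->acquired];
			return 0;
		}
	}
	return -1;
}
```

Driving the model with the same sequence as the irq_restore_ooo selftest (save flags1, save flags2, then restore flags1 first) makes the out-of-order rejection concrete, while restoring in reverse order unwinds cleanly back to the "IRQs enabled" state.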