From patchwork Fri Nov 1 03:09:55 2024
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13858630
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team@fb.com, Martin KaFai Lau, Tejun Heo
Subject: [PATCH bpf-next v8 1/9] bpf: Check stack depth limit after visiting all subprogs
Date: Thu, 31 Oct 2024 20:09:55 -0700
Message-ID: <20241101030955.2677478-1-yonghong.song@linux.dev>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20241101030950.2677215-1-yonghong.song@linux.dev>
References: <20241101030950.2677215-1-yonghong.song@linux.dev>

Check the stack depth limit after all subprogs have been visited. Note that
if private stack is enabled, the only stack size restriction is that each
single subprog's stack size is less than or equal to MAX_BPF_STACK.

In subsequent patches, check_max_stack_depth() may flip from private stack
enabled to private stack disabled because of a potential nested bpf subprog
run. Moving the stack depth limit check to after all subprogs are visited
ensures the check is not missed in such flipping cases.

The useless 'continue' statement in the loop in check_max_stack_depth()
is also removed.
Signed-off-by: Yonghong Song
---
 kernel/bpf/verifier.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 797cf3ed32e0..89b0a980d0f9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6032,7 +6032,8 @@ static int round_up_stack_depth(struct bpf_verifier_env *env, int stack_depth)
  * Since recursion is prevented by check_cfg() this algorithm
  * only needs a local stack of MAX_CALL_FRAMES to remember callsites
  */
-static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx)
+static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
+					 int *subtree_depth, int *depth_frame)
 {
 	struct bpf_subprog_info *subprog = env->subprog_info;
 	struct bpf_insn *insn = env->prog->insnsi;
@@ -6070,10 +6071,9 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx)
 		return -EACCES;
 	}
 	depth += round_up_stack_depth(env, subprog[idx].stack_depth);
-	if (depth > MAX_BPF_STACK) {
-		verbose(env, "combined stack size of %d calls is %d. Too large\n",
-			frame + 1, depth);
-		return -EACCES;
+	if (depth > MAX_BPF_STACK && !*subtree_depth) {
+		*subtree_depth = depth;
+		*depth_frame = frame + 1;
 	}
 continue_func:
 	subprog_end = subprog[idx + 1].start;
@@ -6173,15 +6173,19 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx)
 static int check_max_stack_depth(struct bpf_verifier_env *env)
 {
 	struct bpf_subprog_info *si = env->subprog_info;
-	int ret;
+	int ret, subtree_depth = 0, depth_frame;
 
 	for (int i = 0; i < env->subprog_cnt; i++) {
 		if (!i || si[i].is_async_cb) {
-			ret = check_max_stack_depth_subprog(env, i);
+			ret = check_max_stack_depth_subprog(env, i, &subtree_depth, &depth_frame);
 			if (ret < 0)
 				return ret;
 		}
-		continue;
+	}
+	if (subtree_depth > MAX_BPF_STACK) {
+		verbose(env, "combined stack size of %d calls is %d. Too large\n",
+			depth_frame, subtree_depth);
+		return -EACCES;
 	}
 	return 0;
 }