From patchwork Wed Apr 5 00:42:31 2023

X-Patchwork-Submitter: Kumar Kartikeya Dwivedi <memxor@gmail.com>
X-Patchwork-Id: 13201070
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet
Subject: [PATCH RFC bpf-next v1 1/9] bpf: Fix kfunc callback handling
Date: Wed, 5 Apr 2023 02:42:31 +0200
Message-Id: <20230405004239.1375399-2-memxor@gmail.com>
In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com>
References: <20230405004239.1375399-1-memxor@gmail.com>
X-Patchwork-State: RFC

The kfunc code to handle KF_ARG_PTR_TO_CALLBACK does not check the reg
type before using reg->subprogno. This can accidentally permit invalid
pointers to be passed into callback helpers (e.g. silently from
different paths). Reject any type other than PTR_TO_FUNC.

Fixes: 5d92ddc3de1b ("bpf: Add callback validation to kfunc verifier logic")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 56f569811f70..693aeddc9fe2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -10562,6 +10562,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			break;
 		}
 		case KF_ARG_PTR_TO_CALLBACK:
+			if (reg->type != PTR_TO_FUNC) {
+				verbose(env, "arg%d expected pointer to func\n", i);
+				return -EINVAL;
+			}
 			meta->subprogno = reg->subprogno;
 			break;
 		}
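
[Editorial sketch, not part of the patch.] For context, a hedged sketch
of how a callback argument reaches this code path; bpf_rbtree_add() is
the only callback-taking kfunc at this point in the series, and struct
node_data, groot and the surrounding locking are illustrative:

/* 'less' must be a static subprog so the verifier sees a PTR_TO_FUNC
 * register for the KF_ARG_PTR_TO_CALLBACK argument; with this fix, any
 * other register type fails with "arg%d expected pointer to func".
 */
struct node_data {
	long key;
	struct bpf_rb_node node;
};

static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
	struct node_data *na = container_of(a, struct node_data, node);
	struct node_data *nb = container_of(b, struct node_data, node);

	return na->key < nb->key;
}

/* ...in a program, under bpf_spin_lock(&glock): */
	bpf_rbtree_add(&groot, &n->node, less);
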
From patchwork Wed Apr 5 00:42:32 2023

X-Patchwork-Submitter: Kumar Kartikeya Dwivedi <memxor@gmail.com>
X-Patchwork-Id: 13201071
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet
Subject: [PATCH RFC bpf-next v1 2/9] bpf: Refactor and generalize optimize_bpf_loop
Date: Wed, 5 Apr 2023 02:42:32 +0200
Message-Id: <20230405004239.1375399-3-memxor@gmail.com>
In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com>
References: <20230405004239.1375399-1-memxor@gmail.com>
X-Patchwork-State: RFC
The optimize_bpf_loop pass currently transforms calls to the bpf_loop
helper into an inlined instruction sequence that elides the helper call,
performs the looping directly, and emits the call to the loop callback.

Future patches for exception propagation in BPF programs will require
similar rewriting, which needs access to extra stack space to spill
registers that may be clobbered in the emitted sequence. We wish to
reuse the logic of updating subprog stack depth in subsequent patches,
by tracking the extra stack depth needed across all rewrites in a
subprog. Hence, refactor the code to make it amenable to plugging in
extra rewrite passes over other instructions.

Note that stack_depth_extra is now set to the max of the existing and
required extra stack depth. This allows rewrites to share the extra
stack depth among themselves, by figuring out the maximum depth needed
for a subprog.

Note that we only do one rewrite per loop iteration, and thus new_prog
is set only once. This can be used to pull out shared updates of delta,
env->prog, etc. into common code that runs after all cases of possible
rewrites are examined and, if applicable, performed.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 693aeddc9fe2..8ecd5df73b07 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18024,23 +18024,26 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 	return new_prog;
 }

-static bool is_bpf_loop_call(struct bpf_insn *insn)
+static bool is_inlineable_bpf_loop_call(struct bpf_insn *insn,
+					struct bpf_insn_aux_data *aux)
 {
 	return insn->code == (BPF_JMP | BPF_CALL) &&
 		insn->src_reg == 0 &&
-		insn->imm == BPF_FUNC_loop;
+		insn->imm == BPF_FUNC_loop &&
+		aux->loop_inline_state.fit_for_inline;
 }

 /* For all sub-programs in the program (including main) check
- * insn_aux_data to see if there are bpf_loop calls that require
- * inlining. If such calls are found the calls are replaced with a
+ * insn_aux_data to see if there are any instructions that need to be
+ * transformed into an instruction sequence. E.g. bpf_loop calls that
+ * require inlining. If such calls are found the calls are replaced with a
  * sequence of instructions produced by `inline_bpf_loop` function and
  * subprog stack_depth is increased by the size of 3 registers.
  * This stack space is used to spill values of the R6, R7, R8. These
  * registers are used to store the loop bound, counter and context
  * variables.
  */
-static int optimize_bpf_loop(struct bpf_verifier_env *env)
+static int do_misc_rewrites(struct bpf_verifier_env *env)
 {
 	struct bpf_subprog_info *subprogs = env->subprog_info;
 	int i, cur_subprog = 0, cnt, delta = 0;
@@ -18051,13 +18054,14 @@ static int optimize_bpf_loop(struct bpf_verifier_env *env)
 		u16 stack_depth_extra = 0;

 		for (i = 0; i < insn_cnt; i++, insn++) {
-			struct bpf_loop_inline_state *inline_state =
-				&env->insn_aux_data[i + delta].loop_inline_state;
+			struct bpf_insn_aux_data *insn_aux = &env->insn_aux_data[i + delta];
+			struct bpf_prog *new_prog = NULL;

-			if (is_bpf_loop_call(insn) && inline_state->fit_for_inline) {
-				struct bpf_prog *new_prog;
+			if (is_inlineable_bpf_loop_call(insn, insn_aux)) {
+				struct bpf_loop_inline_state *inline_state = &insn_aux->loop_inline_state;

-				stack_depth_extra = BPF_REG_SIZE * 3 + stack_depth_roundup;
+				stack_depth_extra = max_t(u16, stack_depth_extra,
+							  BPF_REG_SIZE * 3 + stack_depth_roundup);
 				new_prog = inline_bpf_loop(env, i + delta,
 							   -(stack_depth + stack_depth_extra),
@@ -18065,7 +18069,9 @@ static int optimize_bpf_loop(struct bpf_verifier_env *env)
 							   &cnt);
 				if (!new_prog)
 					return -ENOMEM;
+			}
+			if (new_prog) {
 				delta	 += cnt - 1;
 				env->prog	 = new_prog;
 				insn	 = new_prog->insnsi + i + delta;
@@ -18876,7 +18882,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)

 	/* instruction rewrites happen after this point */
 	if (ret == 0)
-		ret = optimize_bpf_loop(env);
+		ret = do_misc_rewrites(env);

 	if (is_priv) {
 		if (ret == 0)
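
[Editorial sketch, not part of the patch.] A hedged selftest-style
sketch (includes elided) of the call shape the renamed pass rewrites in
place, assuming the inline conditions of the existing pass (statically
known callback, flags known to be zero) mark the call fit_for_inline:

static int sum_cb(u64 i, void *ctx)
{
	*(long *)ctx += i;
	return 0;	/* 0 = continue, 1 = stop */
}

SEC("tc")
int sum(struct __sk_buff *sk)
{
	long acc = 0;

	bpf_loop(16, sum_cb, &acc, 0);	/* candidate for inlining */
	return acc != 120;		/* sum of 0..15 */
}
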
From patchwork Wed Apr 5 00:42:33 2023

X-Patchwork-Submitter: Kumar Kartikeya Dwivedi <memxor@gmail.com>
X-Patchwork-Id: 13201072
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet
Subject: [PATCH RFC bpf-next v1 3/9] bpf: Implement bpf_throw kfunc
Date: Wed, 5 Apr 2023 02:42:33 +0200
Message-Id: <20230405004239.1375399-4-memxor@gmail.com>
In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com>
References: <20230405004239.1375399-1-memxor@gmail.com>
X-Patchwork-State: RFC

Introduce support for exceptions in the BPF runtime. The bpf_throw
kfunc is the main helper used by programs to throw an exception. In the
verifier, it is processed as an immediate exit from the whole program,
ending exploration of that path. This proves to be a powerful property:
a program can perform a condition check and call bpf_throw for a case
that will never occur at runtime, but that it would otherwise have to
prove safe to the verifier. The unwinding of the program stack and any
resources is performed automatically by the BPF runtime.
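
[Editorial sketch, not part of the patch.] As a hedged usage sketch
(selftest-style, includes elided): the check below is one the
programmer knows cannot fail at runtime, but the verifier would still
have to explore the path; bpf_throw() ends that exploration the same
way BPF_EXIT does, and at runtime unwinds and exits the program with
return value 0:

SEC("tc")
int reject_oversized(struct __sk_buff *ctx)
{
	if (ctx->len > (1 << 20))	/* "cannot happen" branch */
		bpf_throw();
	return 1;
}
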
For now, we fail if we see lingering references, locks, etc., but a
future patch will extend the current infrastructure to generate the
cleanup code for those too.

Programs that do not use exceptions today see no change in their .text
and no performance impact, as all the extra code generated to throw,
propagate, and unwind the stack applies only to programs that use this
new facility.

- The exception state is represented using four booleans in the
  task_struct of the current task. Each boolean corresponds to the
  exception state for one kernel context level. This allows a BPF
  program to be interrupted without clobbering the exception state of
  the program it interrupted.

- The other vexing case is recursion. If a program calls into another
  program (e.g. calls into a helper which eventually invokes a tracing
  program), that program may throw and clobber the current exception
  state. To avoid this, an invariant is maintained across the
  implementation: exception state is always cleared on entry and exit
  of the main BPF program. This implies that if recursion occurs, the
  recursing BPF program will clear the current exception state on entry
  and exit. Callbacks, however, do not do the same, because they are
  subprograms. The case of propagating exceptions of callbacks invoked
  by the kernel back to the BPF program is handled in the next commit.
  This is also the main reason to clear exception state on entry:
  asynchronous callbacks can clobber exception state even though we
  make sure it is always set to 0 within the kernel. The only other
  thing to keep in mind is to never allow a BPF program to execute
  while the program is being unwound. This implies that every function
  involved in this path must be notrace, which is the case for
  bpf_throw, bpf_get_exception and bpf_reset_exception.

- Rewrites happen for bpf_throw and call instructions to subprogs.
  Instructions executed in the main frame of the main program (thus
  excluding global functions and extension programs, which end up
  executing in frame > 0) need to be rewritten differently. This is
  tracked using BPF_THROW_OUTER vs BPF_THROW_INNER. If this were not
  done, a recursing tracing program could set exception state which the
  main program is instrumented to handle, causing it to unwind when it
  shouldn't.

- Callsite-specific marking is done to reduce the amount of
  instrumentation. Only calls to global subprogs always need to be
  rewritten to handle thrown exceptions; for each callsite to a static
  subprog, the verifier's path awareness allows us to skip the handling
  if all possible paths taken through that callsite never throw. This
  propagates into all callers, and the prog may end up with
  throws_exception as false. Typically this reduces the amount of
  instrumentation when throwing subprogs are deeply nested and only
  throw under specific conditions.

- BPF_PROG_TYPE_EXT is special in that it replaces global functions in
  other BPF programs. A check is added after we know the exception
  specification of a prog (throws_exception) to ensure we don't attach
  a throwing extension to a program not instrumented to handle it, or
  to the main subprog, which has BPF_THROW_OUTER handling compared to
  an extension prog's BPF_THROW_INNER handling.
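
[Editorial sketch, not part of the patch.] For orientation, the shape
of the two rewrites described above, assembled from the
rewrite_bpf_throw_call() hunk later in this patch (BPF
pseudo-instructions, assumptions hedged):

/* BPF_THROW_INNER (frame > 0, or a BPF_PROG_TYPE_EXT replacing frame 0):
 *
 *	call bpf_throw		; record exception at this context level
 *	r0 = 0
 *	exit			; return to the instrumented caller
 *
 * BPF_THROW_OUTER (main frame of the main program):
 *
 *	r0 = 0			; the call is elided: we are already in
 *	exit			; the outermost frame, so just exit with 0
 */
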
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h                           |   9 +-
 include/linux/bpf_verifier.h                  |  13 +
 include/linux/sched.h                         |   1 +
 kernel/bpf/arraymap.c                         |   9 +-
 kernel/bpf/helpers.c                          |  22 ++
 kernel/bpf/syscall.c                          |  10 +
 kernel/bpf/trampoline.c                       |   4 +-
 kernel/bpf/verifier.c                         | 241 ++++++++++++++++--
 .../testing/selftests/bpf/bpf_experimental.h  |   9 +
 9 files changed, 299 insertions(+), 19 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 002a811b6b90..04b81f5fe809 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1287,6 +1287,7 @@ static inline bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
 struct bpf_func_info_aux {
 	u16 linkage;
 	bool unreliable;
+	bool throws_exception;
 };

 enum bpf_jit_poke_reason {
@@ -1430,7 +1431,8 @@ struct bpf_prog {
 				enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
 				call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
 				call_get_func_ip:1, /* Do we call get_func_ip() */
-				tstamp_type_access:1; /* Accessed __sk_buff->tstamp_type */
+				tstamp_type_access:1, /* Accessed __sk_buff->tstamp_type */
+				throws_exception:1; /* Does this program throw exceptions? */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
@@ -3035,4 +3037,9 @@ static inline gfp_t bpf_memcg_flags(gfp_t flags)
 	return flags;
 }

+/* BPF Exception helpers */
+void bpf_reset_exception(void);
+u64 bpf_get_exception(void);
+void bpf_throw(void);
+
 #endif /* _LINUX_BPF_H */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 81d525d057c7..bc067223d3ee 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -430,6 +430,17 @@ struct bpf_loop_inline_state {
 	u32 callback_subprogno; /* valid when fit_for_inline is true */
 };

+enum {
+	BPF_THROW_NONE,
+	BPF_THROW_OUTER,
+	BPF_THROW_INNER,
+};
+
+struct bpf_throw_state {
+	int type;
+	bool check_helper_ret_code;
+};
+
 /* Possible states for alu_state member. */
 #define BPF_ALU_SANITIZE_SRC		(1U << 0)
 #define BPF_ALU_SANITIZE_DST		(1U << 1)
@@ -464,6 +475,7 @@ struct bpf_insn_aux_data {
 		 */
 		struct bpf_loop_inline_state loop_inline_state;
 	};
+	struct bpf_throw_state throw_state;
 	u64 obj_new_size; /* remember the size of type passed to bpf_obj_new to rewrite R1 */
 	struct btf_struct_meta *kptr_struct_meta;
 	u64 map_key_state; /* constant (32 bit) key tracking for maps */
@@ -537,6 +549,7 @@ struct bpf_subprog_info {
 	bool tail_call_reachable;
 	bool has_ld_abs;
 	bool is_async_cb;
+	bool can_throw;
 };

 /* single container for all structs
diff --git a/include/linux/sched.h b/include/linux/sched.h
index b11b4517760f..a568245b59a2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1480,6 +1480,7 @@ struct task_struct {
 	struct bpf_local_storage __rcu	*bpf_storage;
 	/* Used for BPF run context */
 	struct bpf_run_ctx		*bpf_ctx;
+	bool				bpf_exception_thrown[4];
 #endif

 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 2058e89b5ddd..de0eadf8706f 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -905,7 +905,14 @@ static void *prog_fd_array_get_ptr(struct bpf_map *map,
 	if (IS_ERR(prog))
 		return prog;

-	if (!bpf_prog_map_compatible(map, prog)) {
+	/* Programs which throw exceptions are not allowed to be tail call
+	 * targets. This is because it forces us to be conservative for each
+	 * bpf_tail_call invocation and assume it may throw, since we do not
+	 * know what the target program may do, thus causing us to propagate the
+	 * exception and mark calling prog as potentially throwing. Just be
+	 * restrictive for now and disallow this.
+	 */
+	if (prog->throws_exception || !bpf_prog_map_compatible(map, prog)) {
 		bpf_prog_put(prog);
 		return ERR_PTR(-EINVAL);
 	}
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 6be16db9f188..89e70907257c 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1879,6 +1879,20 @@ void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
 	}
 }

+notrace void bpf_reset_exception(void)
+{
+	int i = interrupt_context_level();
+
+	current->bpf_exception_thrown[i] = false;
+}
+
+notrace u64 bpf_get_exception(void)
+{
+	int i = interrupt_context_level();
+
+	return current->bpf_exception_thrown[i];
+}
+
 __diag_push();
 __diag_ignore_all("-Wmissing-prototypes",
 		  "Global functions as their definitions will be in vmlinux BTF");
@@ -2295,6 +2309,13 @@ __bpf_kfunc void bpf_rcu_read_unlock(void)
 	rcu_read_unlock();
 }

+__bpf_kfunc notrace void bpf_throw(void)
+{
+	int i = interrupt_context_level();
+
+	current->bpf_exception_thrown[i] = true;
+}
+
 __diag_pop();

 BTF_SET8_START(generic_btf_ids)
@@ -2321,6 +2342,7 @@ BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL)
 #endif
 BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_throw)
 BTF_SET8_END(generic_btf_ids)

 static const struct btf_kfunc_id_set generic_kfunc_set = {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e18ac7fdc210..f82e7a174d6a 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3144,6 +3144,16 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 		tgt_prog = prog->aux->dst_prog;
 	}

+	/* Don't allow tracing programs to attach to fexit and clear exception
+	 * state when we are unwinding the program.
+	 */
+	if (prog->type == BPF_PROG_TYPE_TRACING &&
+	    (prog->expected_attach_type == BPF_TRACE_FEXIT) &&
+	    tgt_prog && tgt_prog->throws_exception && prog->throws_exception) {
+		err = -EINVAL;
+		goto out_unlock;
+	}
+
 	err = bpf_link_prime(&link->link.link, &link_primer);
 	if (err)
 		goto out_unlock;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index f61d5138b12b..e9f9dd52f16c 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -514,7 +514,9 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
 	kind = bpf_attach_type_to_tramp(link->link.prog);
 	if (tr->extension_prog)
 		/* cannot attach fentry/fexit if extension prog is attached.
-		 * cannot overwrite extension prog either.
+		 * cannot overwrite extension prog either. We rely on this to
+		 * not check extension prog's exception specification (since
+		 * throwing extension may not replace non-throwing).
		 */
		return -EBUSY;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8ecd5df73b07..6981d8817c71 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2787,6 +2787,8 @@ static int add_subprog_and_kfunc(struct bpf_verifier_env *env)
 	return 0;
 }

+static bool is_bpf_throw_call(struct bpf_insn *insn);
+
 static int check_subprogs(struct bpf_verifier_env *env)
 {
 	int i, subprog_start, subprog_end, off, cur_subprog = 0;
@@ -2820,11 +2822,12 @@ static int check_subprogs(struct bpf_verifier_env *env)
 		if (i == subprog_end - 1) {
 			/* to avoid fall-through from one subprog into another
 			 * the last insn of the subprog should be either exit
-			 * or unconditional jump back
+			 * or unconditional jump back or bpf_throw call
 			 */
 			if (code != (BPF_JMP | BPF_EXIT) &&
-			    code != (BPF_JMP | BPF_JA)) {
-				verbose(env, "last insn is not an exit or jmp\n");
+			    code != (BPF_JMP | BPF_JA) &&
+			    !is_bpf_throw_call(insn + i)) {
+				verbose(env, "last insn is not an exit or jmp or bpf_throw call\n");
 				return -EINVAL;
 			}
 			subprog_start = subprog_end;
@@ -8200,6 +8203,7 @@ static int set_callee_state(struct bpf_verifier_env *env,
 			    struct bpf_func_state *callee, int insn_idx);

 static bool is_callback_calling_kfunc(u32 btf_id);
+static int mark_chain_throw(struct bpf_verifier_env *env, int insn_idx);

 static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			     int *insn_idx, int subprog,
@@ -8247,6 +8251,12 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;

 			/* continue with next insn after call */
+
+			/* We don't explore the global function, but if it
+			 * throws, mark the callchain as throwing.
+			 */
+			if (env->subprog_info[subprog].can_throw)
+				return mark_chain_throw(env, *insn_idx);
 			return 0;
 		}
 	}
@@ -8382,6 +8392,53 @@ static int set_callee_state(struct bpf_verifier_env *env,
 	return 0;
 }

+static int set_throw_state_type(struct bpf_verifier_env *env, int insn_idx,
+				int frame, int subprog)
+{
+	struct bpf_throw_state *ts = &env->insn_aux_data[insn_idx].throw_state;
+	int type;
+
+	if (!frame && !subprog && env->prog->type != BPF_PROG_TYPE_EXT)
+		type = BPF_THROW_OUTER;
+	else
+		type = BPF_THROW_INNER;
+	if (ts->type != BPF_THROW_NONE) {
+		if (ts->type != type) {
+			verbose(env,
+				"conflicting rewrite type for throwing call insn %d: %d and %d\n",
+				insn_idx, ts->type, type);
+			return -EINVAL;
+		}
+	}
+	ts->type = type;
+	return 0;
+}
+
+static int mark_chain_throw(struct bpf_verifier_env *env, int insn_idx)
+{
+	struct bpf_func_info_aux *func_info_aux = env->prog->aux->func_info_aux;
+	struct bpf_subprog_info *subprog = env->subprog_info;
+	struct bpf_verifier_state *state = env->cur_state;
+	struct bpf_func_state **frame = state->frame;
+	u32 cur_subprogno;
+	int ret;
+
+	/* Mark all callsites leading up to this throw and their corresponding
+	 * subprogs and update their func_info_aux table.
+	 */
+	for (int i = 1; i <= state->curframe; i++) {
+		u32 subprogno = frame[i - 1]->subprogno;
+
+		func_info_aux[subprogno].throws_exception = subprog[subprogno].can_throw = true;
+		ret = set_throw_state_type(env, frame[i]->callsite, i - 1, subprogno);
+		if (ret < 0)
+			return ret;
+	}
+	/* Now mark actual instruction which caused the throw */
+	cur_subprogno = frame[state->curframe]->subprogno;
+	func_info_aux[cur_subprogno].throws_exception = subprog[cur_subprogno].can_throw = true;
+	return set_throw_state_type(env, insn_idx, state->curframe, cur_subprogno);
+}
+
 static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			   int *insn_idx)
 {
@@ -8394,7 +8451,6 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			target_insn);
 		return -EFAULT;
 	}
-
 	return __check_func_call(env, insn, insn_idx, subprog,
 				 set_callee_state);
 }
@@ -8755,17 +8811,17 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 	return 0;
 }

-static int check_reference_leak(struct bpf_verifier_env *env)
+static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit)
 {
 	struct bpf_func_state *state = cur_func(env);
 	bool refs_lingering = false;
 	int i;

-	if (state->frameno && !state->in_callback_fn)
+	if (!exception_exit && state->frameno && !state->in_callback_fn)
 		return 0;

 	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
+		if (!exception_exit && state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
 			continue;
 		verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
 			state->refs[i].id, state->refs[i].insn_idx);
@@ -8999,7 +9055,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn

 	switch (func_id) {
 	case BPF_FUNC_tail_call:
-		err = check_reference_leak(env);
+		err = check_reference_leak(env, false);
 		if (err) {
 			verbose(env, "tail_call would lead to reference leak\n");
 			return err;
@@ -9615,6 +9671,7 @@ enum special_kfunc_type {
 	KF_bpf_dynptr_from_xdp,
 	KF_bpf_dynptr_slice,
 	KF_bpf_dynptr_slice_rdwr,
+	KF_bpf_throw,
 };

 BTF_SET_START(special_kfunc_set)
@@ -9633,6 +9690,7 @@ BTF_ID(func, bpf_dynptr_from_skb)
 BTF_ID(func, bpf_dynptr_from_xdp)
 BTF_ID(func, bpf_dynptr_slice)
 BTF_ID(func, bpf_dynptr_slice_rdwr)
+BTF_ID(func, bpf_throw)
 BTF_SET_END(special_kfunc_set)

 BTF_ID_LIST(special_kfunc_list)
@@ -9653,6 +9711,7 @@ BTF_ID(func, bpf_dynptr_from_skb)
 BTF_ID(func, bpf_dynptr_from_xdp)
 BTF_ID(func, bpf_dynptr_slice)
 BTF_ID(func, bpf_dynptr_slice_rdwr)
+BTF_ID(func, bpf_throw)

 static bool is_kfunc_bpf_rcu_read_lock(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -10736,6 +10795,13 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}

+	if (meta.btf == btf_vmlinux && meta.func_id == special_kfunc_list[KF_bpf_throw]) {
+		err = mark_chain_throw(env, insn_idx);
+		if (err < 0)
+			return err;
+		return 1;
+	}
+
 	for (i = 0; i < CALLER_SAVED_REGS; i++)
 		mark_reg_not_init(env, regs, caller_saved[i]);

@@ -13670,7 +13736,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	 * gen_ld_abs() may terminate the program at runtime, leading to
 	 * reference leak.
	 */
-	err = check_reference_leak(env);
+	err = check_reference_leak(env, false);
 	if (err) {
 		verbose(env, "BPF_LD_[ABS|IND] cannot be mixed with socket references\n");
 		return err;
@@ -14075,6 +14141,10 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 			struct bpf_kfunc_call_arg_meta meta;

+			/* 'call bpf_throw' has no fallthrough edge, same as BPF_EXIT */
+			if (is_bpf_throw_call(insn))
+				return DONE_EXPLORING;
+
 			ret = fetch_kfunc_meta(env, insn, &meta, NULL);
 			if (ret == 0 && is_iter_next_kfunc(&meta)) {
 				mark_prune_point(env, t);
@@ -14738,7 +14808,7 @@ static bool regs_exact(const struct bpf_reg_state *rold,
 		       const struct bpf_reg_state *rcur,
 		       struct bpf_id_pair *idmap)
 {
-	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 && 
+	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 	       check_ids(rold->id, rcur->id, idmap) &&
 	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
 }
@@ -15617,6 +15687,7 @@ static int do_check(struct bpf_verifier_env *env)
 	int prev_insn_idx = -1;

 	for (;;) {
+		bool exception_exit = false;
 		struct bpf_insn *insn;
 		u8 class;
 		int err;
@@ -15830,12 +15901,18 @@ static int do_check(struct bpf_verifier_env *env)
 					return -EINVAL;
 				}
 			}
-			if (insn->src_reg == BPF_PSEUDO_CALL)
+			if (insn->src_reg == BPF_PSEUDO_CALL) {
 				err = check_func_call(env, insn, &env->insn_idx);
-			else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL)
+			} else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 				err = check_kfunc_call(env, insn, &env->insn_idx);
-			else
+				if (err == 1) {
+					err = 0;
+					exception_exit = true;
+					goto process_bpf_exit_full;
+				}
+			} else {
 				err = check_helper_call(env, insn, &env->insn_idx);
+			}
 			if (err)
 				return err;

@@ -15863,6 +15940,7 @@ static int do_check(struct bpf_verifier_env *env)
 				return -EINVAL;
 			}

+process_bpf_exit_full:
 			if (env->cur_state->active_lock.ptr &&
 			    !in_rbtree_lock_required_cb(env)) {
 				verbose(env, "bpf_spin_unlock is missing\n");
@@ -15880,10 +15958,23 @@ static int do_check(struct bpf_verifier_env *env)
 			 * function, for which reference_state must
 			 * match caller reference state when it exits.
 			 */
-			err = check_reference_leak(env);
+			err = check_reference_leak(env, exception_exit);
 			if (err)
 				return err;

+			/* The side effect of the prepare_func_exit
+			 * which is being skipped is that it frees
+			 * bpf_func_state. Typically, process_bpf_exit
+			 * will only be hit with outermost exit.
+			 * copy_verifier_state in pop_stack will handle
+			 * freeing of any extra bpf_func_state left over
+			 * from not processing all nested function
+			 * exits. We also skip return code checks as
+			 * they are not needed for exceptional exits.
+			 */
+			if (exception_exit)
+				goto process_bpf_exit;
+
 			if (state->curframe) {
 				/* exit from nested function */
 				err = prepare_func_exit(env, &env->insn_idx);
@@ -17438,6 +17529,33 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 	int i, ret, cnt, delta = 0;

 	for (i = 0; i < insn_cnt; i++, insn++) {
+		/* Typically, exception state is always cleared on entry and we
+		 * ensure to clear it before exiting, but in some cases, our
+		 * invocation can occur after a BPF callback has been executed
+		 * asynchronously in the context of the current task, which may
+		 * clobber the state (think of BPF timer callbacks). Callbacks
+		 * never reset exception state (as they may be called from
+		 * within a program). Thus, if we rely on seeing the exception
+		 * state, always clear it on entry.
+		 */
+		if (i == 0 && prog->throws_exception) {
+			struct bpf_insn entry_insns[] = {
+				BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+				BPF_EMIT_CALL(bpf_reset_exception),
+				BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+				insn[i],
+			};
+
+			cnt = ARRAY_SIZE(entry_insns);
+			new_prog = bpf_patch_insn_data(env, i + delta, entry_insns, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+		}
+
 		/* Make divide-by-zero exceptions impossible. */
 		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
 		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
@@ -18030,7 +18148,33 @@ static bool is_inlineable_bpf_loop_call(struct bpf_insn *insn,
 	return insn->code == (BPF_JMP | BPF_CALL) &&
 		insn->src_reg == 0 &&
-		insn->imm == BPF_FUNC_loop &&
-		aux->loop_inline_state.fit_for_inline;
+		insn->imm == BPF_FUNC_loop &&
+		aux->loop_inline_state.fit_for_inline &&
+		aux->throw_state.type == BPF_THROW_NONE;
+}
+
+static struct bpf_prog *rewrite_bpf_throw_call(struct bpf_verifier_env *env,
+					       int position,
+					       struct bpf_throw_state *tstate,
+					       u32 *cnt)
+{
+	struct bpf_insn insn_buf[] = {
+		env->prog->insnsi[position],
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	};
+
+	*cnt = ARRAY_SIZE(insn_buf);
+	/* We don't need the call instruction for throws in frame 0 */
+	if (tstate->type == BPF_THROW_OUTER)
+		return bpf_patch_insn_data(env, position, insn_buf + 1, *cnt - 1);
+	return bpf_patch_insn_data(env, position, insn_buf, *cnt);
+}
+
+static bool is_bpf_throw_call(struct bpf_insn *insn)
+{
+	return insn->code == (BPF_JMP | BPF_CALL) &&
+	       insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
+	       insn->off == 0 && insn->imm == special_kfunc_list[KF_bpf_throw];
 }

 /* For all sub-programs in the program (including main) check
@@ -18069,8 +18213,24 @@ static int do_misc_rewrites(struct bpf_verifier_env *env)
 							   &cnt);
 				if (!new_prog)
 					return -ENOMEM;
+			} else if (is_bpf_throw_call(insn)) {
+				struct bpf_throw_state *throw_state = &insn_aux->throw_state;
+
+				/* The verifier was able to prove that the bpf_throw
+				 * call was unreachable, hence it must have not been
+				 * seen and will be removed by opt_remove_dead_code.
+				 */
+				if (throw_state->type == BPF_THROW_NONE) {
+					WARN_ON_ONCE(insn_aux->seen);
+					goto skip;
+				}
+
+				new_prog = rewrite_bpf_throw_call(env, i + delta, throw_state, &cnt);
+				if (!new_prog)
+					return -ENOMEM;
 			}
+skip:
 			if (new_prog) {
 				delta += cnt - 1;
 				env->prog = new_prog;
@@ -18240,6 +18400,12 @@ static int do_check_subprogs(struct bpf_verifier_env *env)
 				"Func#%d is safe for any args that match its prototype\n",
 				i);
 		}
+
+		/* Only reliable functions from BTF PoV can be extended, hence
+		 * remember their exception specification to check that we don't
+		 * replace non-throwing subprog with throwing subprog. The
+		 * opposite is fine though.
+		 */
+		aux->func_info_aux[i].throws_exception = env->subprog_info[i].can_throw;
 	}
 	return 0;
 }
@@ -18250,8 +18416,12 @@ static int do_check_main(struct bpf_verifier_env *env)

 	env->insn_idx = 0;
 	ret = do_check_common(env, 0);
-	if (!ret)
+	if (!ret) {
 		env->prog->aux->stack_depth = env->subprog_info[0].stack_depth;
+		env->prog->throws_exception = env->subprog_info[0].can_throw;
+		if (env->prog->aux->func_info)
+			env->prog->aux->func_info_aux[0].throws_exception = env->prog->throws_exception;
+	}
 	return ret;
 }

@@ -18753,6 +18923,42 @@ struct btf *bpf_get_btf_vmlinux(void)
 	return btf_vmlinux;
 }

+static int check_ext_prog(struct bpf_verifier_env *env)
+{
+	struct bpf_prog *tgt_prog = env->prog->aux->dst_prog;
+	u32 btf_id = env->prog->aux->attach_btf_id;
+	struct bpf_prog *prog = env->prog;
+	int subprog = -1;
+
+	if (prog->type != BPF_PROG_TYPE_EXT)
+		return 0;
+	for (int i = 0; i < tgt_prog->aux->func_info_cnt; i++) {
+		if (tgt_prog->aux->func_info[i].type_id == btf_id) {
+			subprog = i;
+			break;
+		}
+	}
+	if (subprog == -1) {
+		verbose(env, "verifier internal error: extension prog's subprog not found\n");
+		return -EFAULT;
+	}
+	/* BPF_THROW_OUTER rewrites won't match BPF_PROG_TYPE_EXT's
+	 * BPF_THROW_INNER rewrites.
+	 */
+	if (!subprog && prog->throws_exception) {
+		verbose(env, "Cannot attach throwing extension to main subprog\n");
+		return -EINVAL;
+	}
+	/* Overwriting extensions is not allowed, so we can simply check
+	 * the specification of the subprog we are replacing.
+	 */
+	if (!tgt_prog->aux->func_info_aux[subprog].throws_exception && prog->throws_exception) {
+		verbose(env, "Cannot attach throwing extension to non-throwing subprog\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
 int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 {
 	u64 start_time = ktime_get_ns();
@@ -18871,6 +19077,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 	ret = do_check_subprogs(env);
 	ret = ret ?: do_check_main(env);
+	ret = ret ?: check_ext_prog(env);
+
 	if (ret == 0 && bpf_prog_is_offloaded(env->prog->aux))
 		ret = bpf_prog_offload_finalize(env);
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index dbd2c729781a..d5de9251e775 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -89,4 +89,13 @@ extern void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
  */
 extern struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root) __ksym;

+/* Description
+ *	Throw an exception, terminating the execution of the program immediately.
+ *	The eBPF runtime unwinds the stack automatically and exits the program with
+ *	the default return value of 0.
+ * Returns
+ *	This function never returns.
+ */
+extern void bpf_throw(void) __attribute__((noreturn)) __ksym;
+
 #endif
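
[Editorial sketch, not part of the patch.] For reference, the
entry-side instrumentation added by the do_misc_fixups() hunk above for
programs with throws_exception set, in BPF pseudo-instructions:

/* Prologue emitted before the first instruction of a throwing program:
 *
 *	r6 = r1				; save ctx in a callee-saved reg
 *	call bpf_reset_exception	; clear stale state, e.g. left by
 *	r1 = r6				; an async BPF timer callback
 *	<first original insn>
 */
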
From patchwork Wed Apr 5 00:42:34 2023

X-Patchwork-Submitter: Kumar Kartikeya Dwivedi <memxor@gmail.com>
X-Patchwork-Id: 13201073
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet
Subject: [PATCH RFC bpf-next v1 4/9] bpf: Handle throwing BPF callbacks in helpers and kfuncs
Date: Wed, 5 Apr 2023 02:42:34 +0200
Message-Id: <20230405004239.1375399-5-memxor@gmail.com>
In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com>
References: <20230405004239.1375399-1-memxor@gmail.com>
X-Patchwork-State: RFC
Enable handling of callbacks that throw within helpers and kfuncs that
take them. The problem with helpers is that exception state can be
manipulated by a tracing program which is invoked between the check for
an exception after calling the callback and the return back to the
program. Hence, helpers always return -EJUKEBOX whenever they detect an
exception. Only one kfunc takes a callback (bpf_rbtree_add), so it is
made notrace to avoid this pitfall. This allows us to use
bpf_get_exception to detect the thrown case, and in case we miss the
exception event we use the return code to unwind.

TODO: It might be possible to simply check the return code for helpers
and kfuncs taking callbacks with check_helper_ret_code = true, and not
bother with exception state. This should lead to less code being
generated per callsite. For all other cases, we can rely on
bpf_get_exception, and ensure that the helper/kfunc uses notrace to
avoid invocation of tracing programs that clobber exception state on
the return path. But make this change in v2 after ensuring the
current->bpf_exception_thrown approach is acceptable.
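
[Editorial sketch, not part of the patch.] The recurring pattern this
patch applies in callback-running helpers (cf. the
bpf_for_each_array_elem() and bpf_loop() hunks below), shown in
isolation:

	ret = callback_fn((u64)(long)map, (u64)(long)&key,
			  (u64)(long)val, (u64)(long)callback_ctx, 0);
	/* If the callback threw, fold the throw into the return code as
	 * well, since the exception state alone can be clobbered by a
	 * tracing program that runs before we return to the caller.
	 */
	if (bpf_get_exception())
		ret = -EJUKEBOX;
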
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf_verifier.h |   1 +
 kernel/bpf/arraymap.c        |   4 +-
 kernel/bpf/bpf_iter.c        |   2 +
 kernel/bpf/hashtab.c         |   4 +-
 kernel/bpf/helpers.c         |  18 +++--
 kernel/bpf/ringbuf.c         |   4 ++
 kernel/bpf/task_iter.c       |   2 +
 kernel/bpf/verifier.c        | 129 +++++++++++++++++++++++++++++++++++
 8 files changed, 156 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index bc067223d3ee..a5346a2b7e68 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -485,6 +485,7 @@ struct bpf_insn_aux_data {
 	bool zext_dst; /* this insn zero extends dst reg */
 	bool storage_get_func_atomic; /* bpf_*_storage_get() with atomic memory alloc */
 	bool is_iter_next; /* bpf_iter_<type>_next() kfunc call */
+	bool skip_patch_call_imm; /* Skip patch_call_imm phase in do_misc_fixups */
 	u8 alu_state; /* used in combination with alu_limit */

 	/* below fields are initialized once */
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index de0eadf8706f..6c0c5e726ebf 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -711,6 +711,8 @@ static long bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback
 		key = i;
 		ret = callback_fn((u64)(long)map, (u64)(long)&key,
 				  (u64)(long)val, (u64)(long)callback_ctx, 0);
+		if (bpf_get_exception())
+			ret = -EJUKEBOX;
 		/* return value: 0 - continue, 1 - stop and return */
 		if (ret)
 			break;
@@ -718,7 +720,7 @@ static long bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback

 	if (is_percpu)
 		migrate_enable();
-	return num_elems;
+	return ret == -EJUKEBOX ? ret : num_elems;
 }

 static u64 array_map_mem_usage(const struct bpf_map *map)
diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
index 96856f130cbf..6e4e4b6213f8 100644
--- a/kernel/bpf/bpf_iter.c
+++ b/kernel/bpf/bpf_iter.c
@@ -759,6 +759,8 @@ BPF_CALL_4(bpf_loop, u32, nr_loops, void *, callback_fn, void *, callback_ctx,

 	for (i = 0; i < nr_loops; i++) {
 		ret = callback((u64)i, (u64)(long)callback_ctx, 0, 0, 0);
+		if (bpf_get_exception())
+			return -EJUKEBOX;
 		/* return value: 0 - continue, 1 - stop and return */
 		if (ret)
 			return i + 1;
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 00c253b84bf5..5e70151e0414 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2178,6 +2178,8 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
 			num_elems++;
 			ret = callback_fn((u64)(long)map, (u64)(long)key,
 					  (u64)(long)val, (u64)(long)callback_ctx, 0);
+			if (bpf_get_exception())
+				ret = -EJUKEBOX;
 			/* return value: 0 - continue, 1 - stop and return */
 			if (ret) {
 				rcu_read_unlock();
@@ -2189,7 +2191,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
 out:
 	if (is_percpu)
 		migrate_enable();
-	return num_elems;
+	return ret == -EJUKEBOX ? ret : num_elems;
 }

 static u64 htab_map_mem_usage(const struct bpf_map *map)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 89e70907257c..82db3a64fa3f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1982,10 +1982,11 @@ __bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
 }

 /* Need to copy rbtree_add_cached's logic here because our 'less' is a BPF
- * program
+ * program.
+ * Marked notrace to avoid clobbering of exception state in current by BPF
+ * programs.
 */
-static void __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
-			     void *less)
+static notrace void __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node, void *less)
 {
 	struct rb_node **link = &((struct rb_root_cached *)root)->rb_root.rb_node;
 	bpf_callback_t cb = (bpf_callback_t)less;
@@ -1993,8 +1994,13 @@ static void __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
 	bool leftmost = true;

 	while (*link) {
+		u64 cb_res;
+
 		parent = *link;
-		if (cb((uintptr_t)node, (uintptr_t)parent, 0, 0, 0)) {
+		cb_res = cb((uintptr_t)node, (uintptr_t)parent, 0, 0, 0);
+		if (bpf_get_exception())
+			return;
+		if (cb_res) {
 			link = &parent->rb_left;
 		} else {
 			link = &parent->rb_right;
@@ -2007,8 +2013,8 @@ static void __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
 			   (struct rb_root_cached *)root, leftmost);
 }

-__bpf_kfunc void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
-				bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b))
+__bpf_kfunc notrace void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
+					bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b))
 {
 	__bpf_rbtree_add(root, node, (void *)less);
 }
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 875ac9b698d9..7f6764ae4fff 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -766,6 +766,10 @@ BPF_CALL_4(bpf_user_ringbuf_drain, struct bpf_map *, map,
 		bpf_dynptr_init(&dynptr, sample, BPF_DYNPTR_TYPE_LOCAL, 0, size);
 		ret = callback((uintptr_t)&dynptr, (uintptr_t)callback_ctx, 0, 0, 0);
+		if (bpf_get_exception()) {
+			ret = -EJUKEBOX;
+			goto schedule_work_return;
+		}
 		__bpf_user_ringbuf_sample_release(rb, size, flags);
 	}
 	ret = samples - discarded_samples;
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index c4ab9d6cdbe9..6e8667f03784 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -807,6 +807,8 @@ BPF_CALL_5(bpf_find_vma, struct task_struct *, task, u64, start,
 		callback_fn((u64)(long)task, (u64)(long)vma,
 			    (u64)(long)callback_ctx, 0, 0);
 		ret = 0;
+		if (bpf_get_exception())
+			ret = -EJUKEBOX;
 	}
 	bpf_mmap_unlock_mm(work, mm);
 	return ret;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6981d8817c71..07d808b05044 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9053,6 +9053,24 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		}
 	}

+	/* For each helper call which invokes a callback which may throw, it
+	 * will propagate the thrown exception to us. For helpers, we check the
+	 * return code in addition to exception state, as it may be reset
+	 * between detection and return within the kernel. Note that we don't
+	 * include async callbacks (passed to bpf_timer_set_callback) because
+	 * exceptions won't be propagated.
+	 */
+	if (is_callback_calling_function(meta.func_id) &&
+	    meta.func_id != BPF_FUNC_timer_set_callback) {
+		struct bpf_throw_state *ts = &env->insn_aux_data[insn_idx].throw_state;
+
+		/* Check for -EJUKEBOX in case exception state is clobbered by
+		 * some other program executing between bpf_get_exception and
+		 * return from helper.
+ */ + if (base_type(fn->ret_type) == RET_INTEGER) + ts->check_helper_ret_code = true; + } + switch (func_id) { case BPF_FUNC_tail_call: err = check_reference_leak(env, false); @@ -17691,6 +17709,9 @@ static int do_misc_fixups(struct bpf_verifier_env *env) continue; } + if (env->insn_aux_data[i + delta].skip_patch_call_imm) + continue; + if (insn->imm == BPF_FUNC_get_route_realm) prog->dst_needed = 1; if (insn->imm == BPF_FUNC_get_prandom_u32) @@ -18177,6 +18198,94 @@ static bool is_bpf_throw_call(struct bpf_insn *insn) insn->off == 0 && insn->imm == special_kfunc_list[KF_bpf_throw]; } +static struct bpf_prog *rewrite_bpf_call(struct bpf_verifier_env *env, + int position, + s32 stack_base, + struct bpf_throw_state *tstate, + u32 *cnt) +{ + s32 r0_offset = stack_base + 0 * BPF_REG_SIZE; + struct bpf_insn_aux_data *aux_data; + struct bpf_insn insn_buf[] = { + env->prog->insnsi[position], + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, r0_offset), + BPF_EMIT_CALL(bpf_get_exception), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, r0_offset), + BPF_JMP32_IMM(BPF_JNE, BPF_REG_0, -EJUKEBOX, 3), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + struct bpf_prog *new_prog; + int type, tsubprog = -1; + u32 callback_start; + u32 call_insn_offset; + s32 callback_offset; + bool ret_code; + + type = tstate->type; + ret_code = tstate->check_helper_ret_code; + if (type == BPF_THROW_OUTER) + insn_buf[4] = insn_buf[9] = BPF_EMIT_CALL(bpf_reset_exception); + if (type == BPF_THROW_INNER) + insn_buf[9] = BPF_EMIT_CALL(bpf_throw); + + /* We need to fix offset of the pseudo call after patching. + * Note: The actual call instruction is at insn_buf[0] + */ + if (bpf_pseudo_call(&insn_buf[0])) { + tsubprog = find_subprog(env, position + insn_buf[0].imm + 1); + if (WARN_ON_ONCE(tsubprog < 0)) + return NULL; + } + /* For helpers, the code path between checking bpf_get_exception and + * returning may involve invocation of tracing progs which reset + * exception state, so also use the return value to invoke exception + * path. Otherwise, exception event from callback is lost. + */ + if (ret_code) + *cnt = ARRAY_SIZE(insn_buf); + else + *cnt = ARRAY_SIZE(insn_buf) - 4; + new_prog = bpf_patch_insn_data(env, position, insn_buf, *cnt); + if (!new_prog) + return new_prog; + + /* Note: The actual call instruction is at insn_buf[0] */ + if (bpf_pseudo_call(&insn_buf[0])) { + callback_start = env->subprog_info[tsubprog].start; + call_insn_offset = position + 0; + callback_offset = callback_start - call_insn_offset - 1; + new_prog->insnsi[call_insn_offset].imm = callback_offset; + } + + aux_data = env->insn_aux_data; + /* Note: We already patched in call at insn_buf[2], insn_buf[9]. 
*/ + aux_data[position + 2].skip_patch_call_imm = true; + if (ret_code) + aux_data[position + 9].skip_patch_call_imm = true; + /* Note: For BPF_THROW_OUTER, we already patched in call at insn_buf[4] */ + if (type == BPF_THROW_OUTER) + aux_data[position + 4].skip_patch_call_imm = true; + return new_prog; +} + +static bool is_throwing_bpf_call(struct bpf_verifier_env *env, struct bpf_insn *insn, + struct bpf_insn_aux_data *insn_aux) +{ + if (insn->code != (BPF_JMP | BPF_CALL)) + return false; + if (insn->src_reg == BPF_PSEUDO_CALL || + insn->src_reg == BPF_PSEUDO_KFUNC_CALL || + insn->src_reg == 0) + return insn_aux->throw_state.type != BPF_THROW_NONE; + return false; +} + /* For all sub-programs in the program (including main) check * insn_aux_data to see if there are any instructions that need to be * transformed into an instruction sequence. E.g. bpf_loop calls that @@ -18228,6 +18337,26 @@ static int do_misc_rewrites(struct bpf_verifier_env *env) new_prog = rewrite_bpf_throw_call(env, i + delta, throw_state, &cnt); if (!new_prog) return -ENOMEM; + } else if (is_throwing_bpf_call(env, insn, insn_aux)) { + struct bpf_throw_state *throw_state = &insn_aux->throw_state; + + stack_depth_extra = max_t(u16, stack_depth_extra, + BPF_REG_SIZE * 1 + stack_depth_roundup); + + /* The verifier was able to prove that the throwing call + * was unreachable, hence it must have not been seen and + * will be removed by opt_remove_dead_code. + */ + if (throw_state->type == BPF_THROW_NONE) { + WARN_ON_ONCE(insn_aux->seen); + goto skip; + } + + new_prog = rewrite_bpf_call(env, i + delta, + -(stack_depth + stack_depth_extra), + throw_state, &cnt); + if (!new_prog) + return -ENOMEM; } skip: From patchwork Wed Apr 5 00:42:35 2023
From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, David Vernet Subject: [PATCH RFC bpf-next v1 5/9] bpf: Add pass to fixup global function throw information Date: Wed, 5 Apr 2023 02:42:35 +0200 Message-Id: <20230405004239.1375399-6-memxor@gmail.com> In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com> References: <20230405004239.1375399-1-memxor@gmail.com> X-Patchwork-State: RFC Within the do_check pass, we made a core assumption that we have correct can_throw info about all global subprogs and simply used mark_chain_throw without entering them to mark callsites leading up to their call. However, the do_check_subprogs pass of the verifier is iterative and does not propagate can_throw information across global subprogs which call into each other.
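For illustration, a minimal BPF C sketch of the problem case (the program and function names are made up; only the bpf_throw declaration from bpf_experimental.h is assumed), where gfunc1 is verified before gfunc2:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern void bpf_throw(void) __attribute__((noreturn)) __ksym;

__noinline int gfunc2(int x);

/* Verified first: gfunc2 is not yet known to throw when this call site
 * is explored, so gfunc1 is left with stale can_throw = false.
 */
__noinline int gfunc1(int x)
{
	return gfunc2(x);
}

/* Verified second: the bpf_throw call marks gfunc2 as can_throw. */
__noinline int gfunc2(int x)
{
	if (x < 0)
		bpf_throw();
	return x;
}

SEC("tc")
int prog(struct __sk_buff *ctx)
{
	return gfunc1(ctx->len);
}

char _license[] SEC("license") = "GPL";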
We need an extra pass through all global subprogs to propagate can_throw information from visibly throwing global subprogs into the global subprogs that call into them. After doing this pass, do_check_main will directly use mark_chain_throw again and have the correct information about all global subprogs which are called by it. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 118 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 117 insertions(+), 1 deletion(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 07d808b05044..acfcaadca3b6 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -13664,6 +13664,12 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn) verbose(env, "missing btf func_info\n"); return -EINVAL; } + /* NOTE: Do not change this directly, as we rely on only + * BPF_FUNC_STATIC allowed as BPF_PSEUDO_FUNC targets in + * do_check_subprogs, see comment about propagating exception + * information across global functions. When changing this, add + * bpf_pseudo_func handling to the propagating loop as well. + */ if (aux->func_info_aux[subprogno].linkage != BTF_FUNC_STATIC) { verbose(env, "callback function not static\n"); return -EINVAL; @@ -18491,6 +18497,110 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog) return ret; } +/* We have gone through all global subprogs, and we know which ones were seen as + * throwing exceptions. Since calls to other global functions are not explored + * and we simply continue exploration at the next instruction, we may have not + * fully propagated can_throw information. E.g. consider the case below, where 1 + * and 2 are verified in order. + * + * gfunc 1: + * call gfunc2 + * exit + * gfunc 2: + * call bpf_throw + * + * At this point, gfunc1 is not marked as throwing, but it calls gfunc2 which + * actually throws. The only thing we need to do is go through every global + * function, and propagate the information back to their callers. We only care + * about BPF_PSEUDO_CALL, as BPF_PSEUDO_FUNC loads cannot have global functions + * as targets + * + * Logic mostly mimics check_max_stack_depth, but adjusted and simplified for + * our use case. + */ +static int fixup_global_subprog_throw_info(struct bpf_verifier_env *env) +{ + struct bpf_func_info_aux *func_info_aux = env->prog->aux->func_info_aux; + struct bpf_subprog_info *subprog = env->subprog_info; + int frame = 0, idx = 0, i = 0, subprog_end; + struct bpf_insn *insn = env->prog->insnsi; + int ret_insn[MAX_CALL_FRAMES]; + int ret_prog[MAX_CALL_FRAMES]; + bool can_throw; + int j, ret; + + /* Start at first global subprog */ + for (int s = 1; s < env->subprog_cnt; s++) { + if (func_info_aux[s].linkage != BTF_FUNC_GLOBAL) + continue; + idx = s; + break; + } + if (!idx) + return -EFAULT; + i = subprog[idx].start; +continue_func: + can_throw = false; + subprog_end = subprog[idx + 1].start; + for (; i < subprog_end; i++) { + int next_insn; + + if (!bpf_pseudo_call(insn + i)) + continue; + /* remember insn and function to return to */ + ret_insn[frame] = i + 1; + ret_prog[frame] = idx; + + /* find the callee */ + next_insn = i + insn[i].imm + 1; + idx = find_subprog(env, next_insn); + if (idx < 0) { + WARN_ONCE(1, "verifier bug. No program starts at insn %d\n", next_insn); + return -EFAULT; + } + + /* Only follow global subprog calls */ + if (func_info_aux[idx].linkage != BTF_FUNC_GLOBAL) + continue; + /* If this subprog already throws, mark all callers and continue + * with next instruction in current subprog.
+ */ + if (subprog[idx].can_throw) { + /* Include current frame info when marking */ + for (j = frame; j >= 0; j--) { + func_info_aux[ret_prog[j]].throws_exception = subprog[ret_prog[j]].can_throw = true; + /* Exception subprog cannot be set in global + * function context, so set_throw_state_type + * will always mark type as BPF_THROW_INNER + * and subprog as -1. + */ + ret = set_throw_state_type(env, ret_insn[j] - 1, j, ret_prog[j]); + if (ret < 0) + return ret; + } + continue; + } + + i = next_insn; + frame++; + if (frame >= MAX_CALL_FRAMES) { + verbose(env, "the call stack of %d frames is too deep !\n", + frame); + return -E2BIG; + } + goto continue_func; + } + /* end of for() loop means the last insn of the 'subprog' + * was reached. Doesn't matter whether it was JA or EXIT + */ + if (frame == 0) + return 0; + frame--; + i = ret_insn[frame]; + idx = ret_prog[frame]; + goto continue_func; +} + /* Verify all global functions in a BPF program one by one based on their BTF. * All global functions must pass verification. Otherwise the whole program is rejected. * Consider: @@ -18511,6 +18621,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog) static int do_check_subprogs(struct bpf_verifier_env *env) { struct bpf_prog_aux *aux = env->prog->aux; + bool does_anyone_throw = false; int i, ret; if (!aux->func_info) @@ -18535,8 +18646,13 @@ static int do_check_subprogs(struct bpf_verifier_env *env) * opposite is fine though. */ aux->func_info_aux[i].throws_exception = env->subprog_info[i].can_throw; + if (!does_anyone_throw && env->subprog_info[i].can_throw) + does_anyone_throw = true; } - return 0; + + if (!does_anyone_throw) + return 0; + return fixup_global_subprog_throw_info(env); } static int do_check_main(struct bpf_verifier_env *env) From patchwork Wed Apr 5 00:42:36 2023
From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, David Vernet Subject: [PATCH RFC bpf-next v1 6/9] bpf: Add KF_THROW annotation for kfuncs Date: Wed, 5 Apr 2023 02:42:36 +0200 Message-Id: <20230405004239.1375399-7-memxor@gmail.com> In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com> References: <20230405004239.1375399-1-memxor@gmail.com> X-Patchwork-State: RFC Add KF_THROW annotation to kfuncs to indicate that they may throw.
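As a sketch, a kfunc author would mark a throwing kfunc in its BTF ID set like so (the kfunc is hypothetical and the set registration is omitted; only the KF_THROW flag itself is added by this patch):

__bpf_kfunc notrace void bpf_example_validate(int v)
{
	/* abort the program instead of returning an error code */
	if (v < 0)
		bpf_throw();
}

BTF_SET8_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_example_validate, KF_THROW)
BTF_SET8_END(example_kfunc_ids)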
For now this is mostly useful for testing, but in the future kfuncs could use it to throw on invalid arguments or conditions, aborting the program. This would simplify the overall user experience of kfuncs for the happy case, since programs would not have to handle corner cases that never occur at runtime. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/btf.h | 1 + kernel/bpf/verifier.c | 12 ++++++++++-- net/bpf/test_run.c | 12 ++++++++++++ 3 files changed, 23 insertions(+), 2 deletions(-) diff --git a/include/linux/btf.h b/include/linux/btf.h index d53b10cc55f2..8dfa4113822b 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -75,6 +75,7 @@ #define KF_ITER_NEW (1 << 8) /* kfunc implements BPF iter constructor */ #define KF_ITER_NEXT (1 << 9) /* kfunc implements BPF iter next method */ #define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */ +#define KF_THROW (1 << 11) /* kfunc may throw a BPF exception */ /* * Tag marking a kernel function as a kfunc. This is meant to minimize the diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index acfcaadca3b6..b9f4b1849647 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -9454,6 +9454,11 @@ static bool is_kfunc_arg_kptr_get(struct bpf_kfunc_call_arg_meta *meta, int arg) return arg == 0 && (meta->kfunc_flags & KF_KPTR_GET); } +static bool is_kfunc_throwing(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_THROW; +} + static bool __kfunc_param_match_suffix(const struct btf *btf, const struct btf_param *arg, const char *suffix) @@ -10813,11 +10818,14 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, } } - if (meta.btf == btf_vmlinux && meta.func_id == special_kfunc_list[KF_bpf_throw]) { + if (is_kfunc_throwing(&meta) || + (meta.btf == btf_vmlinux && meta.func_id == special_kfunc_list[KF_bpf_throw])) { err = mark_chain_throw(env, insn_idx); if (err < 0) return err; - return 1; + /* Halt exploration only for bpf_throw */ + if (!is_kfunc_throwing(&meta)) + return 1; } for (i = 0; i < CALLER_SAVED_REGS; i++) diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index f1652f5fbd2e..31f76ee4218b 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -766,6 +766,16 @@ __bpf_kfunc static u32 bpf_kfunc_call_test_static_unused_arg(u32 arg, u32 unused return arg; } +__bpf_kfunc notrace void bpf_kfunc_call_test_always_throws(void) +{ + bpf_throw(); +} + +__bpf_kfunc notrace void bpf_kfunc_call_test_never_throws(void) +{ + return; +} + __diag_pop(); BTF_SET8_START(bpf_test_modify_return_ids) @@ -806,6 +816,8 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS | KF_RCU) BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE) BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg) BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_always_throws, KF_THROW) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_never_throws, KF_THROW) BTF_SET8_END(test_sk_check_kfunc_ids) static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, From patchwork Wed Apr 5 00:42:37 2023
From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, David Vernet Subject: [PATCH RFC bpf-next v1 7/9] bpf: Introduce bpf_set_exception_callback kfunc Date: Wed, 5 Apr 2023 02:42:37 +0200 Message-Id: <20230405004239.1375399-8-memxor@gmail.com> In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com> References: <20230405004239.1375399-1-memxor@gmail.com> X-Patchwork-State: RFC
This patch allows the BPF program to queue the invocation of a callback after an exception has been thrown and the BPF runtime has completed the unwinding of the program stack. This is the last point before the program returns control back to the kernel. In the earlier patches, by default, whenever an exception is thrown, we return 0 to the kernel context. However, this is not always desirable: the program may want to customize the default return code or perform some program-specific action when an exception is thrown. This commit sets up the infrastructure to allow this. The semantics and implementation notes are as follows.
- The program may use bpf_set_exception_callback to set up the exception callback only once.
- Since this callback is invoked during unwinding, it cannot throw itself.
- The exception state has been reset before it is invoked, hence any program attachments (fentry/fexit) are allowed as usual. The check that disallows attaching a throwing fexit program is relaxed at the level of the subprog being attached to.
- Exception callbacks are disallowed from calling bpf_set_exception_callback.
- bpf_set_exception_callback may not be called in global functions or BPF_PROG_TYPE_EXT programs (because they propagate the exception outwards and do not perform the final exit back to the kernel, which is where the callback call is inserted). This can be supported in the future (by verifying that all paths which throw have the same callback set, and remembering this in the global subprog/ext prog summary in func_info_aux), but is skipped for simplicity.
- For a given outermost throwing instruction, it cannot see different exception callbacks from different paths, since it has to hardcode one during the rewrite phase.
- From the stack depth check point of view, async and exception callbacks are similar in the sense that they don't contribute to the current callchain's stack_depth, but the main subprog's stack depth plus the exception callback's stack depth must stay within MAX_BPF_STACK. The variable in subprog_info is named to reflect this combined meaning, and appropriate handling is introduced.
- frame->in_exception_callback_fn is used to subject the exception callback to the same return code checks as the BPF program's main exit, so that it can return a meaningful return code based on the program type and is not limited to some fixed callback return range.
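For illustration, a minimal BPF C usage sketch (the program is hypothetical; only the declarations added to bpf_experimental.h by this series are assumed):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern void bpf_throw(void) __attribute__((noreturn)) __ksym;
extern void bpf_set_exception_callback(int (*)(void)) __ksym;

/* Must be static: only static subprogs are allowed as callback targets. */
static __noinline int ex_cb(void)
{
	/* becomes the program's return value after unwinding */
	return 16;
}

SEC("tc")
int prog(struct __sk_buff *ctx)
{
	bpf_set_exception_callback(ex_cb);
	if (ctx->len < 14)
		bpf_throw();	/* program exits with ex_cb()'s 16 */
	return 0;
}

char _license[] SEC("license") = "GPL";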
Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf_verifier.h | 6 +- kernel/bpf/helpers.c | 6 + kernel/bpf/verifier.c | 184 ++++++++++++++++-- .../testing/selftests/bpf/bpf_experimental.h | 11 ++ 4 files changed, 188 insertions(+), 19 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index a5346a2b7e68..301318ed04f5 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -293,6 +293,7 @@ struct bpf_func_state { bool in_callback_fn; struct tnum callback_ret_range; bool in_async_callback_fn; + bool in_exception_callback_fn; /* The following fields should be last. See copy_func_state() */ int acquired_refs; @@ -370,6 +371,7 @@ struct bpf_verifier_state { struct bpf_active_lock active_lock; bool speculative; bool active_rcu_lock; + s32 exception_callback_subprog; /* first and last insn idx of this verifier state */ u32 first_insn_idx; @@ -439,6 +441,7 @@ enum { struct bpf_throw_state { int type; bool check_helper_ret_code; + s32 subprog; }; /* Possible states for alu_state member. */ @@ -549,7 +552,8 @@ struct bpf_subprog_info { bool has_tail_call; bool tail_call_reachable; bool has_ld_abs; - bool is_async_cb; + bool is_async_or_exception_cb; + bool is_exception_cb; bool can_throw; }; diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 82db3a64fa3f..e6f15da8f154 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -2322,6 +2322,11 @@ __bpf_kfunc notrace void bpf_throw(void) current->bpf_exception_thrown[i] = true; } +__bpf_kfunc notrace void bpf_set_exception_callback(int (*cb)(void)) +{ + WARN_ON_ONCE(1); +} + __diag_pop(); BTF_SET8_START(generic_btf_ids) @@ -2349,6 +2354,7 @@ BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL) #endif BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_throw) +BTF_ID_FLAGS(func, bpf_set_exception_callback) BTF_SET8_END(generic_btf_ids) static const struct btf_kfunc_id_set generic_kfunc_set = { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index b9f4b1849647..5015abf246b1 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1736,6 +1736,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state, } dst_state->speculative = src->speculative; dst_state->active_rcu_lock = src->active_rcu_lock; + dst_state->exception_callback_subprog = src->exception_callback_subprog; dst_state->curframe = src->curframe; dst_state->active_lock.ptr = src->active_lock.ptr; dst_state->active_lock.id = src->active_lock.id; @@ -5178,10 +5179,16 @@ static int check_max_stack_depth(struct bpf_verifier_env *env) next_insn); return -EFAULT; } - if (subprog[idx].is_async_cb) { + if (subprog[idx].is_async_or_exception_cb) { if (subprog[idx].has_tail_call) { - verbose(env, "verifier bug. subprog has tail_call and async cb\n"); + verbose(env, "verifier bug. subprog has tail_call and async or exception cb\n"); return -EFAULT; + } + if (subprog[idx].is_exception_cb) { + if (subprog[0].stack_depth + subprog[idx].stack_depth > MAX_BPF_STACK) { + verbose(env, "combined stack size of main and exception calls is %d. 
Too large\n", depth); + return -EACCES; + } } /* async callbacks don't increase bpf prog stack size */ continue; @@ -8203,6 +8210,7 @@ static int set_callee_state(struct bpf_verifier_env *env, struct bpf_func_state *callee, int insn_idx); static bool is_callback_calling_kfunc(u32 btf_id); +static bool is_set_exception_cb_kfunc(struct bpf_insn *insn); static int mark_chain_throw(struct bpf_verifier_env *env, int insn_idx); static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, @@ -8279,13 +8287,16 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn } } - if (insn->code == (BPF_JMP | BPF_CALL) && - insn->src_reg == 0 && - insn->imm == BPF_FUNC_timer_set_callback) { + if ((insn->code == (BPF_JMP | BPF_CALL) && + insn->src_reg == 0 && + insn->imm == BPF_FUNC_timer_set_callback) || + is_set_exception_cb_kfunc(insn)) { struct bpf_verifier_state *async_cb; /* there is no real recursion here. timer callbacks are async */ - env->subprog_info[subprog].is_async_cb = true; + env->subprog_info[subprog].is_async_or_exception_cb = true; + if (is_set_exception_cb_kfunc(insn)) + env->subprog_info[subprog].is_exception_cb = true; async_cb = push_async_cb(env, env->subprog_info[subprog].start, *insn_idx, subprog); if (!async_cb) @@ -8396,12 +8407,15 @@ static int set_throw_state_type(struct bpf_verifier_env *env, int insn_idx, int frame, int subprog) { struct bpf_throw_state *ts = &env->insn_aux_data[insn_idx].throw_state; - int type; + int exception_subprog, type; - if (!frame && !subprog && env->prog->type != BPF_PROG_TYPE_EXT) + if (!frame && !subprog && env->prog->type != BPF_PROG_TYPE_EXT) { type = BPF_THROW_OUTER; - else + exception_subprog = env->cur_state->exception_callback_subprog; + } else { type = BPF_THROW_INNER; + exception_subprog = -1; + } if (ts->type != BPF_THROW_NONE) { if (ts->type != type) { verbose(env, @@ -8409,8 +8423,14 @@ static int set_throw_state_type(struct bpf_verifier_env *env, int insn_idx, insn_idx, ts->type, type); return -EINVAL; } + if (ts->subprog != exception_subprog) { + verbose(env, "different exception callback subprogs for same insn %d: %d and %d\n", + insn_idx, ts->subprog, exception_subprog); + return -EINVAL; + } } ts->type = type; + ts->subprog = exception_subprog; return 0; } @@ -8432,9 +8452,23 @@ static int mark_chain_throw(struct bpf_verifier_env *env, int insn_idx) { ret = set_throw_state_type(env, frame[i]->callsite, i - 1, subprogno); if (ret < 0) return ret; + /* Have we seen this being used as exception cb? Reject! */ + if (subprog[subprogno].is_exception_cb) { + verbose(env, + "subprog %d (at insn %d) is used as exception callback, cannot throw\n", + subprogno, subprog[subprogno].start); + return -EACCES; + } } /* Now mark actual instruction which caused the throw */ cur_subprogno = frame[state->curframe]->subprogno; + /* Have we seen this being used as exception cb? Reject! 
*/ + if (subprog[cur_subprogno].is_exception_cb) { + verbose(env, + "subprog %d (at insn %d) is used as exception callback, cannot throw\n", + cur_subprogno, subprog[cur_subprogno].start); + return -EACCES; + } func_info_aux[cur_subprogno].throws_exception = subprog[cur_subprogno].can_throw = true; return set_throw_state_type(env, insn_idx, state->curframe, cur_subprogno); } @@ -8619,6 +8653,23 @@ static int set_rbtree_add_callback_state(struct bpf_verifier_env *env, return 0; } +static int set_exception_callback_state(struct bpf_verifier_env *env, + struct bpf_func_state *caller, + struct bpf_func_state *callee, + int insn_idx) +{ + /* void bpf_exception_callback(int (*cb)(void)); */ + + __mark_reg_not_init(env, &callee->regs[BPF_REG_1]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_2]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_3]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_4]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_5]); + callee->in_exception_callback_fn = true; + callee->callback_ret_range = tnum_range(0, 0); + return 0; +} + static bool is_rbtree_lock_required_kfunc(u32 btf_id); /* Are we currently verifying the callback for a rbtree helper that must @@ -9695,6 +9746,7 @@ enum special_kfunc_type { KF_bpf_dynptr_slice, KF_bpf_dynptr_slice_rdwr, KF_bpf_throw, + KF_bpf_set_exception_callback, }; BTF_SET_START(special_kfunc_set) @@ -9714,6 +9766,7 @@ BTF_ID(func, bpf_dynptr_from_xdp) BTF_ID(func, bpf_dynptr_slice) BTF_ID(func, bpf_dynptr_slice_rdwr) BTF_ID(func, bpf_throw) +BTF_ID(func, bpf_set_exception_callback) BTF_SET_END(special_kfunc_set) BTF_ID_LIST(special_kfunc_list) @@ -9735,6 +9788,7 @@ BTF_ID(func, bpf_dynptr_from_xdp) BTF_ID(func, bpf_dynptr_slice) BTF_ID(func, bpf_dynptr_slice_rdwr) BTF_ID(func, bpf_throw) +BTF_ID(func, bpf_set_exception_callback) static bool is_kfunc_bpf_rcu_read_lock(struct bpf_kfunc_call_arg_meta *meta) { @@ -10080,7 +10134,14 @@ static bool is_bpf_graph_api_kfunc(u32 btf_id) static bool is_callback_calling_kfunc(u32 btf_id) { - return btf_id == special_kfunc_list[KF_bpf_rbtree_add]; + return btf_id == special_kfunc_list[KF_bpf_rbtree_add] || + btf_id == special_kfunc_list[KF_bpf_set_exception_callback]; +} + +static bool is_set_exception_cb_kfunc(struct bpf_insn *insn) +{ + return bpf_pseudo_kfunc_call(insn) && insn->off == 0 && + insn->imm == special_kfunc_list[KF_bpf_set_exception_callback]; } static bool is_rbtree_lock_required_kfunc(u32 btf_id) @@ -10704,6 +10765,9 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env, return 0; } +#define BPF_EXCEPTION_CB_CAN_SET (-1) +#define BPF_EXCEPTION_CB_CANNOT_SET (-2) + static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx_p) { @@ -10818,6 +10882,33 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, } } + if (meta.btf == btf_vmlinux && meta.func_id == special_kfunc_list[KF_bpf_set_exception_callback]) { + if (env->cur_state->exception_callback_subprog == BPF_EXCEPTION_CB_CANNOT_SET) { + verbose(env, "exception callback cannot be set within global function or extension program\n"); + return -EINVAL; + } + if (env->cur_state->frame[env->cur_state->curframe]->in_exception_callback_fn) { + verbose(env, "exception callback cannot be set from within exception callback\n"); + return -EINVAL; + } + /* If we didn't explore and mark can_throw yet, we will see it + * when we pop_stack for the pushed async cb which gets the + * is_exception_cb marking and is caught in mark_chain_throw. 
+ */ + if (env->subprog_info[meta.subprogno].can_throw) { + verbose(env, "exception callback can throw, which is not allowed\n"); + return -EINVAL; + } + err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, + set_exception_callback_state); + if (err) { + verbose(env, "kfunc %s#%d failed callback verification\n", + func_name, meta.func_id); + return err; + } + env->cur_state->exception_callback_subprog = meta.subprogno; + } + if (is_kfunc_throwing(&meta) || (meta.btf == btf_vmlinux && meta.func_id == special_kfunc_list[KF_bpf_throw])) { err = mark_chain_throw(env, insn_idx); @@ -13829,7 +13920,7 @@ static int check_return_code(struct bpf_verifier_env *env) const bool is_subprog = frame->subprogno; /* LSM and struct_ops func-ptr's return type could be "void" */ - if (!is_subprog) { + if (!is_subprog || frame->in_exception_callback_fn) { switch (prog_type) { case BPF_PROG_TYPE_LSM: if (prog->expected_attach_type == BPF_LSM_CGROUP) @@ -13877,7 +13968,7 @@ static int check_return_code(struct bpf_verifier_env *env) return 0; } - if (is_subprog) { + if (is_subprog && !frame->in_exception_callback_fn) { if (reg->type != SCALAR_VALUE) { verbose(env, "At subprogram exit the register R0 is not a scalar value (%s)\n", reg_type_str(env, reg->type)); @@ -15134,6 +15225,9 @@ static bool states_equal(struct bpf_verifier_env *env, if (old->active_rcu_lock != cur->active_rcu_lock) return false; + if (old->exception_callback_subprog != cur->exception_callback_subprog) + return false; + /* for states to be equal callsites have to be the same * and all frame states need to be equivalent */ @@ -17538,6 +17632,9 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, * may_access_direct_pkt_data mutates it */ env->seen_direct_write = seen_direct_write; + } else if (desc->func_id == special_kfunc_list[KF_bpf_set_exception_callback]) { + insn_buf[0] = BPF_JMP_IMM(BPF_JA, 0, 0, 0); + *cnt = 1; } return 0; } @@ -18194,15 +18291,35 @@ static struct bpf_prog *rewrite_bpf_throw_call(struct bpf_verifier_env *env, { struct bpf_insn insn_buf[] = { env->prog->insnsi[position], - BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }; + struct bpf_prog *new_prog; + u32 callback_start; + u32 call_insn_offset; + s32 callback_offset; + int type, esubprog; + type = tstate->type; + esubprog = tstate->subprog; *cnt = ARRAY_SIZE(insn_buf); /* We don't need the call instruction for throws in frame 0 */ - if (tstate->type == BPF_THROW_OUTER) - return bpf_patch_insn_data(env, position, insn_buf + 1, *cnt - 1); - return bpf_patch_insn_data(env, position, insn_buf, *cnt); + if (type == BPF_THROW_OUTER) { + /* We need to return r0 of exception callback from outermost frame */ + if (esubprog != -1) + insn_buf[0] = BPF_CALL_REL(0); + else + insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, 0); + } + new_prog = bpf_patch_insn_data(env, position, insn_buf, *cnt); + if (!new_prog || esubprog == -1) + return new_prog; + + callback_start = env->subprog_info[esubprog].start; + /* Note: insn_buf[0] is an offset of BPF_CALL_REL instruction */ + call_insn_offset = position + 0; + callback_offset = callback_start - call_insn_offset - 1; + new_prog->insnsi[call_insn_offset].imm = callback_offset; + return new_prog; } static bool is_bpf_throw_call(struct bpf_insn *insn) @@ -18234,17 +18351,25 @@ static struct bpf_prog *rewrite_bpf_call(struct bpf_verifier_env *env, BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }; + int type, tsubprog = -1, esubprog; struct bpf_prog *new_prog; - int type, tsubprog = -1; u32 callback_start; u32 
call_insn_offset; s32 callback_offset; bool ret_code; type = tstate->type; + esubprog = tstate->subprog; ret_code = tstate->check_helper_ret_code; - if (type == BPF_THROW_OUTER) + if (type == BPF_THROW_OUTER) { insn_buf[4] = insn_buf[9] = BPF_EMIT_CALL(bpf_reset_exception); + /* Note that we allow progs to attach to exception callbacks, + * even if they do, they won't clobber any exception state that + * we care about at this point. + */ + if (esubprog != -1) + insn_buf[5] = insn_buf[10] = BPF_CALL_REL(0); + } if (type == BPF_THROW_INNER) insn_buf[9] = BPF_EMIT_CALL(bpf_throw); @@ -18285,6 +18410,25 @@ static struct bpf_prog *rewrite_bpf_call(struct bpf_verifier_env *env, /* Note: For BPF_THROW_OUTER, we already patched in call at insn_buf[4] */ if (type == BPF_THROW_OUTER) aux_data[position + 4].skip_patch_call_imm = true; + + /* Fixups for exception callback begin here */ + if (esubprog == -1) + return new_prog; + callback_start = env->subprog_info[esubprog].start; + + /* Note: insn_buf[5] is an offset of BPF_CALL_REL instruction */ + call_insn_offset = position + 5; + callback_offset = callback_start - call_insn_offset - 1; + new_prog->insnsi[call_insn_offset].imm = callback_offset; + + if (!ret_code) + return new_prog; + + /* Note: insn_buf[10] is an offset of BPF_CALL_REL instruction */ + call_insn_offset = position + 10; + callback_offset = callback_start - call_insn_offset - 1; + new_prog->insnsi[call_insn_offset].imm = callback_offset; + return new_prog; } @@ -18439,6 +18583,10 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog) return -ENOMEM; state->curframe = 0; state->speculative = false; + if (subprog || env->prog->type == BPF_PROG_TYPE_EXT) + state->exception_callback_subprog = BPF_EXCEPTION_CB_CANNOT_SET; + else + state->exception_callback_subprog = BPF_EXCEPTION_CB_CAN_SET; state->branches = 1; state->frame[0] = kzalloc(sizeof(struct bpf_func_state), GFP_KERNEL); if (!state->frame[0]) { diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index d5de9251e775..a9c75270e49b 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -98,4 +98,15 @@ extern struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root) __ksym; */ extern void bpf_throw(void) __attribute__((noreturn)) __ksym; +/* + * Description + * Set the callback which will be invoked after an exception is thrown and the + * eBPF runtime has completely unwound the program stack. The return value of + * this callback is treated as the return value of the program when the + * exception is thrown.
+ * Returns + * Void + */ +extern void bpf_set_exception_callback(int (*)(void)) __ksym; + #endif From patchwork Wed Apr 5 00:42:38 2023 From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, David Vernet Subject: [PATCH RFC bpf-next v1 8/9] bpf: Introduce BPF assertion macros Date: Wed, 5 Apr 2023 02:42:38 +0200
Message-Id: <20230405004239.1375399-9-memxor@gmail.com> In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com> References: <20230405004239.1375399-1-memxor@gmail.com> X-Patchwork-State: RFC Implement macros that assert a condition on a given register and constant value, proving the condition to the verifier and safely aborting the program in case the condition is not true. The verifier can still perform dead code elimination of the bpf_throw call if it can actually prove the condition based on the data flow seen during path exploration, in which case the function may not be marked as throwing.
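A short usage sketch of the assertion macros added in the diff below (the program, map-less array, and sizes are made up for illustration):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

int vals[16];

SEC("tc")
int prog(struct __sk_buff *ctx)
{
	__u32 idx = ctx->len;

	/* Throws (aborting the program) unless idx < 16; on the fallthrough
	 * path the verifier knows idx is in [0, 15], so the access below is
	 * provably in bounds.
	 */
	bpf_assert_lt(idx, 16);
	return vals[idx];
}

char _license[] SEC("license") = "GPL";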
Signed-off-by: Kumar Kartikeya Dwivedi --- tools/testing/selftests/bpf/bpf_experimental.h | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index a9c75270e49b..aae358a8db4b 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -109,4 +109,21 @@ extern void bpf_throw(void) __attribute__((noreturn)) __ksym; */ extern void bpf_set_exception_callback(int (*)(void)) __ksym; +#define __bpf_assert_op(LHS, op, RHS) \ + _Static_assert(sizeof(&(LHS)), "1st argument must be an lvalue expression"); \ + _Static_assert(__builtin_constant_p((RHS)), "2nd argument must be a constant expression"); \ + asm volatile ("if %[lhs] " op " %[rhs] goto +1; call bpf_throw" \ + : : [lhs] "r"(LHS) , [rhs] "i"(RHS) :) + +#define bpf_assert_eq(LHS, RHS) __bpf_assert_op(LHS, "==", RHS) +#define bpf_assert_ne(LHS, RHS) __bpf_assert_op(LHS, "!=", RHS) +#define bpf_assert_lt(LHS, RHS) __bpf_assert_op(LHS, "<", RHS) +#define bpf_assert_gt(LHS, RHS) __bpf_assert_op(LHS, ">", RHS) +#define bpf_assert_le(LHS, RHS) __bpf_assert_op(LHS, "<=", RHS) +#define bpf_assert_ge(LHS, RHS) __bpf_assert_op(LHS, ">=", RHS) +#define bpf_assert_slt(LHS, RHS) __bpf_assert_op(LHS, "s<", RHS) +#define bpf_assert_sgt(LHS, RHS) __bpf_assert_op(LHS, "s>", RHS) +#define bpf_assert_sle(LHS, RHS) __bpf_assert_op(LHS, "s<=", RHS) +#define bpf_assert_sge(LHS, RHS) __bpf_assert_op(LHS, "s>=", RHS) + #endif From patchwork Wed Apr 5 00:42:39 2023
From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, David Vernet Subject: [PATCH RFC bpf-next v1 9/9] selftests/bpf: Add tests for BPF exceptions Date: Wed, 5 Apr 2023 02:42:39 +0200 Message-Id: <20230405004239.1375399-10-memxor@gmail.com> In-Reply-To: <20230405004239.1375399-1-memxor@gmail.com> References: <20230405004239.1375399-1-memxor@gmail.com> X-Patchwork-State: RFC Add selftests to cover success and failure cases of API usage, runtime behavior and invariants that need to be maintained for implementation correctness.
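As an example of what the success tests exercise, a sketch of the kind of program they load (the shape is guessed from the RUN_SUCCESS list below, not copied from the actual selftest sources):

SEC("tc")
int exception_throw(struct __sk_buff *ctx)
{
	bpf_throw();	/* no exception callback set: retval defaults to 0 */
	return 64;	/* never reached */
}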
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 .../selftests/bpf/prog_tests/exceptions.c     | 240 ++++++++++++++++
 .../testing/selftests/bpf/progs/exceptions.c  | 218 ++++++++++++++
 .../selftests/bpf/progs/exceptions_ext.c      |  42 +++
 .../selftests/bpf/progs/exceptions_fail.c     | 267 ++++++++++++++++++
 4 files changed, 767 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/exceptions.c
 create mode 100644 tools/testing/selftests/bpf/progs/exceptions.c
 create mode 100644 tools/testing/selftests/bpf/progs/exceptions_ext.c
 create mode 100644 tools/testing/selftests/bpf/progs/exceptions_fail.c

diff --git a/tools/testing/selftests/bpf/prog_tests/exceptions.c b/tools/testing/selftests/bpf/prog_tests/exceptions.c
new file mode 100644
index 000000000000..342f44a12c65
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/exceptions.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "exceptions.skel.h"
+#include "exceptions_ext.skel.h"
+#include "exceptions_fail.skel.h"
+
+static char log_buf[1024 * 1024];
+
+static void test_exceptions_failure(void)
+{
+	RUN_TESTS(exceptions_fail);
+}
+
+static void test_exceptions_success(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, ropts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+		    .repeat = 1,
+	);
+	struct exceptions_ext *eskel = NULL;
+	struct exceptions *skel;
+	int ret;
+
+	skel = exceptions__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "exceptions__open_and_load"))
+		return;
+
+#define RUN_SUCCESS(_prog, return_val) \
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs._prog), &ropts); \
+	ASSERT_OK(ret, #_prog " prog run ret"); \
+	ASSERT_EQ(ropts.retval, return_val, #_prog " prog run retval");
+
+	RUN_SUCCESS(exception_throw_subprog, 16);
+	RUN_SUCCESS(exception_throw, 0);
+	RUN_SUCCESS(exception_throw_gfunc1, 1);
+	RUN_SUCCESS(exception_throw_gfunc2, 0);
+	RUN_SUCCESS(exception_throw_gfunc3, 1);
+	RUN_SUCCESS(exception_throw_gfunc4, 0);
+	RUN_SUCCESS(exception_throw_gfunc5, 1);
+	RUN_SUCCESS(exception_throw_gfunc6, 16);
+	RUN_SUCCESS(exception_throw_func1, 1);
+	RUN_SUCCESS(exception_throw_func2, 0);
+	RUN_SUCCESS(exception_throw_func3, 1);
+	RUN_SUCCESS(exception_throw_func4, 0);
+	RUN_SUCCESS(exception_throw_func5, 1);
+	RUN_SUCCESS(exception_throw_func6, 16);
+	RUN_SUCCESS(exception_throw_cb1, 0);
+	RUN_SUCCESS(exception_throw_cb2, 16);
+	RUN_SUCCESS(exception_throw_cb_diff, 16);
+	RUN_SUCCESS(exception_throw_kfunc1, 0);
+	RUN_SUCCESS(exception_throw_kfunc2, 1);
+
+#define RUN_EXT(load_ret, attach_err, expr, msg) \
+	{ \
+		LIBBPF_OPTS(bpf_object_open_opts, o, .kernel_log_buf = log_buf, \
+			    .kernel_log_size = sizeof(log_buf), \
+			    .kernel_log_level = 2); \
+		exceptions_ext__destroy(eskel); \
+		eskel = exceptions_ext__open_opts(&o); \
+		struct bpf_program *prog = NULL; \
+		struct bpf_link *link = NULL; \
+		if (!ASSERT_OK_PTR(eskel, "exceptions_ext__open")) \
+			goto done; \
+		(expr); \
+		ASSERT_OK_PTR(bpf_program__name(prog), bpf_program__name(prog)); \
+		if (!ASSERT_EQ(exceptions_ext__load(eskel), load_ret, \
+			       "exceptions_ext__load")) { \
+			printf("%s\n", log_buf); \
+			goto done; \
+		} \
+		if (load_ret != 0) { \
+			printf("%s\n", log_buf); \
+			if (!ASSERT_OK_PTR(strstr(log_buf, msg), "strstr")) \
+				goto done; \
+		} \
+		if (!load_ret && attach_err) { \
+			if (!ASSERT_ERR_PTR(link = bpf_program__attach(prog), "attach err")) \
+				goto done; \
+		} else if (!load_ret) { \
+			if (!ASSERT_OK_PTR(link = bpf_program__attach(prog), "attach ok")) \
+				goto done; \
+			bpf_link__destroy(link); \
+		} \
+	}
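+
+	/* RUN_EXT(load_ret, attach_err, expr, msg): re-open the exceptions_ext
+	 * skeleton with a verifier log buffer, evaluate `expr` to pick `prog`
+	 * and set its attach target, then expect exceptions_ext__load() to
+	 * return `load_ret` (with `msg` appearing in the verifier log on
+	 * failure); on successful load, attachment must fail or succeed
+	 * according to `attach_err`.
+	 */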
+
+	/* non-throwing fexit -> non-throwing subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.pfexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing fexit -> non-throwing subprog : BAD */
+	RUN_EXT(0, true, ({
+		prog = eskel->progs.throwing_fexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* non-throwing fexit -> throwing subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.pfexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing fexit -> throwing subprog : BAD */
+	RUN_EXT(0, true, ({
+		prog = eskel->progs.throwing_fexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* fmod_ret is not allowed for subprogs - test it so we remember to
+	 * handle the compatibility of its throwing specification with the
+	 * target when it becomes supported.
+	 */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.pfmod_ret;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "can't modify return codes of BPF program");
+
+	/* fmod_ret is not allowed for global subprogs either - test it so we
+	 * remember to handle the compatibility of its throwing specification
+	 * with the target when it becomes supported.
+	 */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.pfmod_ret;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "global_subprog"), "set_attach_target"))
+			goto done;
+	}), "can't modify return codes of BPF program");
+
+	/* non-throwing extension -> non-throwing subprog : BAD (!global)
+	 * Once static subprogs are supported as extension targets, a throwing
+	 * extension must still be rejected for them, since not all callsites
+	 * are marked to handle exceptions.
+	 */
+	RUN_EXT(-EINVAL, true, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "subprog() is not a global function");
+
+	/* non-throwing extension -> throwing subprog : BAD (!global)
+	 * Once static subprogs are supported as extension targets, a throwing
+	 * extension must still be rejected for them, since not all callsites
+	 * are marked to handle exceptions.
+	 */
+	RUN_EXT(-EINVAL, true, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_subprog"), "set_attach_target"))
+			goto done;
+	}), "throwing_subprog() is not a global function");
+
+	/* non-throwing extension -> non-throwing global subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "global_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* non-throwing extension -> throwing global subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_global_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing extension -> throwing global subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.throwing_extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_global_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing extension -> main subprog : BAD (OUTER vs INNER mismatch) */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.throwing_extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "exception_throw_subprog"), "set_attach_target"))
+			goto done;
+	}), "Cannot attach throwing extension to main subprog");
+
+	/* throwing extension -> non-throwing global subprog : BAD */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.throwing_extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "global_subprog"), "set_attach_target"))
+			goto done;
+	}), "Cannot attach throwing extension to non-throwing subprog");
+done:
+	exceptions_ext__destroy(eskel);
+	exceptions__destroy(skel);
+}
+
+void test_exceptions(void)
+{
+	test_exceptions_failure();
+	test_exceptions_success();
+}
diff --git a/tools/testing/selftests/bpf/progs/exceptions.c b/tools/testing/selftests/bpf/progs/exceptions.c
new file mode 100644
index 000000000000..9a33f88e7e2c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions.c
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+SEC("tc")
+int exception_throw(struct __sk_buff *ctx)
+{
+	if (ctx->data)
+		bpf_throw();
+	return 1;
+}
+
+static __noinline int subprog(struct __sk_buff *ctx)
+{
+	return ctx->len;
+}
+
+static __noinline int throwing_subprog(struct __sk_buff *ctx)
+{
+	if (ctx)
+		bpf_throw();
+	return 0;
+}
+
+__noinline int global_subprog(struct __sk_buff *ctx)
+{
+	return subprog(ctx) + 1;
+}
+
+__noinline int throwing_global_subprog(struct __sk_buff *ctx)
+{
+	if (ctx)
+		bpf_throw();
+	return 0;
+}
+
+static __noinline int exception_cb(void)
+{
+	return 16;
+}
+
+SEC("tc")
+int exception_throw_subprog(struct __sk_buff *ctx)
+{
+	volatile int i;
+
+	exception_cb();
+	bpf_set_exception_callback(exception_cb);
+	i = subprog(ctx);
+	i += global_subprog(ctx) - 1;
+	if (!i)
+		return throwing_global_subprog(ctx);
+	else
+		return throwing_subprog(ctx);
+	bpf_throw();
+	return 0;
+}
+
+__noinline int throwing_gfunc(volatile int i)
+{
+	bpf_assert_eq(i, 0);
+	return 1;
+}
+
+__noinline static int throwing_func(volatile int i)
+{
+	bpf_assert_lt(i, 1);
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_gfunc1(void *ctx)
+{
+	return throwing_gfunc(0);
+}
+
+SEC("tc")
+__noinline int exception_throw_gfunc2()
+{
+	return throwing_gfunc(1);
+}
+
+__noinline int throwing_gfunc_2(volatile int i)
+{
+	return throwing_gfunc(i);
+}
+
+SEC("tc")
+int exception_throw_gfunc3(void *ctx)
+{
+	return throwing_gfunc_2(0);
+}
+
+SEC("tc")
+int exception_throw_gfunc4(void *ctx)
+{
+	return throwing_gfunc_2(1);
+}
+
+SEC("tc")
+int exception_throw_gfunc5(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_gfunc_2(0);
+}
+
+SEC("tc")
+int exception_throw_gfunc6(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_gfunc_2(1);
+}
+
+SEC("tc")
+int exception_throw_func1(void *ctx)
+{
+	return throwing_func(0);
+}
+
+SEC("tc")
+int exception_throw_func2(void *ctx)
+{
+	return throwing_func(1);
+}
+
+__noinline static int throwing_func_2(volatile int i)
+{
+	return throwing_func(i);
+}
+
+SEC("tc")
+int exception_throw_func3(void *ctx)
+{
+	return throwing_func_2(0);
+}
+
+SEC("tc")
+int exception_throw_func4(void *ctx)
+{
+	return throwing_func_2(1);
+}
+
+SEC("tc")
+int exception_throw_func5(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_func_2(0);
+}
+
+SEC("tc")
+int exception_throw_func6(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_func_2(1);
+}
+
+__noinline static int loop_cb1(u32 index, int *ctx)
+{
+	bpf_throw();
+	return 0;
+}
+
+__noinline static int loop_cb2(u32 index, int *ctx)
+{
+	bpf_throw();
+	return 0;
+}
+
+SEC("tc")
+int exception_throw_cb1(struct __sk_buff *ctx)
+{
+	bpf_loop(5, loop_cb1, NULL, 0);
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_cb2(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_loop(5, loop_cb1, NULL, 0);
+	return 0;
+}
+
+SEC("tc")
+int exception_throw_cb_diff(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	if (ctx->protocol)
+		bpf_loop(5, loop_cb1, NULL, 0);
+	else
+		bpf_loop(5, loop_cb2, NULL, 0);
+	return 1;
+}
+
+extern void bpf_kfunc_call_test_always_throws(void) __ksym;
+extern void bpf_kfunc_call_test_never_throws(void) __ksym;
+
+SEC("tc")
+int exception_throw_kfunc1(struct __sk_buff *ctx)
+{
+	bpf_kfunc_call_test_always_throws();
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_kfunc2(struct __sk_buff *ctx)
+{
+	bpf_kfunc_call_test_never_throws();
+	return 1;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/exceptions_ext.c b/tools/testing/selftests/bpf/progs/exceptions_ext.c
new file mode 100644
index 000000000000..d3b9e32681ec
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_ext.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_experimental.h"
+
+SEC("?freplace")
+int extension(struct __sk_buff *ctx)
+{
+	return 0;
+}
+
+SEC("?freplace")
+int throwing_extension(struct __sk_buff *ctx)
+{
+	bpf_throw();
+}
+
+SEC("?fexit")
+int pfexit(void *ctx)
+{
+	return 0;
+}
+
+SEC("?fexit")
+int throwing_fexit(void *ctx)
+{
+	bpf_throw();
+}
+
+SEC("?fmod_ret")
+int pfmod_ret(void *ctx)
+{
+	return 1;
+}
+
+SEC("?fmod_ret")
+int throwing_fmod_ret(void *ctx)
+{
+	bpf_throw();
+}
SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c new file mode 100644 index 000000000000..d8459c3840e2 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c @@ -0,0 +1,267 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include + +#include "bpf_misc.h" +#include "bpf_experimental.h" + +extern void bpf_rcu_read_lock(void) __ksym; + +#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8))) + +struct foo { + struct bpf_rb_node node; +}; + +private(A) struct bpf_spin_lock lock; +private(A) struct bpf_rb_root rbtree __contains(foo, node); + +__noinline static int subprog_lock(struct __sk_buff *ctx) +{ + bpf_spin_lock(&lock); + if (ctx->len) + bpf_throw(); + return 0; +} + +SEC("?tc") +__failure __msg("function calls are not allowed while holding a lock") +int reject_with_lock(void *ctx) +{ + bpf_spin_lock(&lock); + bpf_throw(); +} + +SEC("?tc") +__failure __msg("function calls are not allowed while holding a lock") +int reject_subprog_with_lock(void *ctx) +{ + return subprog_lock(ctx); +} + +SEC("?tc") +__failure __msg("bpf_rcu_read_unlock is missing") +int reject_with_rcu_read_lock(void *ctx) +{ + bpf_rcu_read_lock(); + bpf_throw(); +} + +__noinline static int throwing_subprog(struct __sk_buff *ctx) +{ + if (ctx->len) + bpf_throw(); + return 0; +} + +SEC("?tc") +__failure __msg("bpf_rcu_read_unlock is missing") +int reject_subprog_with_rcu_read_lock(void *ctx) +{ + bpf_rcu_read_lock(); + return throwing_subprog(ctx); +} + +static bool rbless(struct bpf_rb_node *n1, const struct bpf_rb_node *n2) +{ + bpf_throw(); +} + +SEC("?tc") +__failure __msg("function calls are not allowed while holding a lock") +int reject_with_rbtree_add_throw(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&lock); + bpf_rbtree_add(&rbtree, &f->node, rbless); + return 0; +} + +SEC("?tc") +__failure __msg("Unreleased reference") +int reject_with_reference(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_throw(); +} + +__noinline static int subprog_ref(struct __sk_buff *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_throw(); +} + +__noinline static int subprog_cb_ref(u32 i, void *ctx) +{ + bpf_throw(); +} + +SEC("?tc") +__failure __msg("Unreleased reference") +int reject_with_cb_reference(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_loop(5, subprog_cb_ref, NULL, 0); + return 0; +} + +SEC("?tc") +__failure __msg("Unreleased reference") +int reject_with_subprog_reference(void *ctx) +{ + return subprog_ref(ctx) + 1; +} + +static __noinline int throwing_exception_cb(void) +{ + int i = 0; + + bpf_assert_ne(i, 0); + return i; +} + +static __noinline int exception_cb1(void) +{ + int i = 0; + + bpf_assert_eq(i, 0); + return i; +} + +static __noinline int exception_cb2(void) +{ + int i = 0; + + bpf_assert_eq(i, 0); + return i; +} + +__noinline int throwing_exception_gfunc(void) +{ + return throwing_exception_cb(); +} + +SEC("?tc") +__failure __msg("is used as exception callback, cannot throw") +int reject_throwing_exception_cb_1(struct __sk_buff *ctx) +{ + bpf_set_exception_callback(throwing_exception_cb); + return 0; +} + +SEC("?tc") +__failure __msg("exception callback can throw, which is not allowed") +int reject_throwing_exception_cb_2(struct __sk_buff *ctx) +{ + throwing_exception_gfunc(); + 
+	bpf_set_exception_callback(throwing_exception_cb);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("different exception callback subprogs for same insn 7: 2 and 1")
+int reject_throwing_exception_cb_3(struct __sk_buff *ctx)
+{
+	if (ctx->protocol)
+		bpf_set_exception_callback(exception_cb1);
+	else
+		bpf_set_exception_callback(exception_cb2);
+	bpf_throw();
+}
+
+__noinline int gfunc_set_exception_cb(void)
+{
+	bpf_set_exception_callback(exception_cb1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("exception callback cannot be set within global function or extension program")
+int reject_set_exception_cb_gfunc(struct __sk_buff *ctx)
+{
+	gfunc_set_exception_cb();
+	return 0;
+}
+
+static __noinline int exception_cb_rec(void)
+{
+	bpf_set_exception_callback(exception_cb_rec);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("exception callback cannot be set from within exception callback")
+int reject_set_exception_cb_rec1(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_rec);
+	return 0;
+}
+
+static __noinline int exception_cb_rec2(void);
+
+static __noinline int exception_cb_rec1(void)
+{
+	bpf_set_exception_callback(exception_cb_rec2);
+	return 0;
+}
+
+static __noinline int exception_cb_rec2(void)
+{
+	bpf_set_exception_callback(exception_cb_rec2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("exception callback cannot be set from within exception callback")
+int reject_set_exception_cb_rec2(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_rec1);
+	return 0;
+}
+
+static __noinline int exception_cb_rec3(void)
+{
+	bpf_set_exception_callback(exception_cb1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("exception callback cannot be set from within exception callback")
+int reject_set_exception_cb_rec3(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_rec3);
+	return 0;
+}
+
+static __noinline int exception_cb_bad_ret(void)
+{
+	return 4242;
+}
+
+SEC("?fentry/bpf_check")
+__failure __msg("At program exit the register R0 has value")
+int reject_set_exception_cb_bad_ret(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb_bad_ret);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
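
For completeness, the RUN_SUCCESS flow exercised by the runner above reduces
to the following standalone libbpf sequence. This is an illustrative sketch,
not part of the patch: it assumes the exceptions.skel.h skeleton generated
from progs/exceptions.c, and the zeroed 64-byte buffer stands in for the
selftests' pkt_v4 packet.

// Sketch of the userspace side of one RUN_SUCCESS case: load the skeleton,
// test-run a throwing program, and check the post-throw return value.
#include <bpf/libbpf.h>
#include "exceptions.skel.h"

static int run_exception_throw_subprog(void)
{
	char pkt[64] = {};		/* tc programs need packet data to run */
	LIBBPF_OPTS(bpf_test_run_opts, ropts,
		.data_in = pkt,
		.data_size_in = sizeof(pkt),
		.repeat = 1,
	);
	struct exceptions *skel;
	int err;

	skel = exceptions__open_and_load();
	if (!skel)
		return -1;
	/* exception_throw_subprog registers exception_cb (which returns 16)
	 * and then throws, so the observed retval should be 16 rather than
	 * the normal return value of the program body.
	 */
	err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.exception_throw_subprog),
				     &ropts);
	if (!err && ropts.retval != 16)
		err = -1;
	exceptions__destroy(skel);
	return err;
}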