From patchwork Thu Jul 13 02:32:23 2023
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Dave Marchevsky, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 01/10] bpf: Fix kfunc callback register type handling
Date: Thu, 13 Jul 2023 08:02:23 +0530
Message-Id: <20230713023232.1411523-2-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
The kfunc code to handle KF_ARG_PTR_TO_CALLBACK does not check the reg
type before using reg->subprogno. This can accidentally permit invalid
pointers to be passed into callback helpers (e.g. silently from
different paths). Likewise, reg->subprogno from the per-register type
union may not be meaningful either. We need to reject any type other
than PTR_TO_FUNC.

Cc: Dave Marchevsky
Fixes: 5d92ddc3de1b ("bpf: Add callback validation to kfunc verifier logic")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 81a93eeac7a0..7a00bf69bff8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11023,6 +11023,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			break;
 		}
 		case KF_ARG_PTR_TO_CALLBACK:
+			if (reg->type != PTR_TO_FUNC) {
+				verbose(env, "arg%d expected pointer to func\n", i);
+				return -EINVAL;
+			}
 			meta->subprogno = reg->subprogno;
 			break;
 		case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
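To make the effect concrete, a contrived sketch of a program shape the
new check rejects (abbreviating the rbtree setup and includes from the
selftests in progs/rbtree.c; groot, glock and node_data follow that
file and are illustrative): smuggling a non-function pointer through a
cast leaves something other than PTR_TO_FUNC in the callback argument
register, which now fails verification with "expected pointer to func"
instead of silently consuming a meaningless reg->subprogno:

struct node_data {
	long key;
	struct bpf_rb_node node;
};

private(A) struct bpf_spin_lock glock;
private(A) struct bpf_rb_root groot __contains(node_data, node);

SEC("tc")
int bad_callback(struct __sk_buff *ctx)
{
	bool (*cb)(struct bpf_rb_node *a, const struct bpf_rb_node *b);
	struct node_data *n;

	n = bpf_obj_new(typeof(*n));
	if (!n)
		return 0;
	cb = (void *)ctx; /* anything that is not an actual subprog */
	bpf_spin_lock(&glock);
	bpf_rbtree_add(&groot, &n->node, cb);
	bpf_spin_unlock(&glock);
	return 0;
}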
From patchwork Thu Jul 13 02:32:24 2023
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 02/10] bpf: Fix subprog idx logic in check_max_stack_depth
Date: Thu, 13 Jul 2023 08:02:24 +0530
Message-Id: <20230713023232.1411523-3-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
The assignment to idx in check_max_stack_depth happens once we see a
bpf_pseudo_call or bpf_pseudo_func. This is not an issue as the rest of
the code performs a few checks and then pushes the frame to the frame
stack, except in the case of async callbacks. If the async callback
case causes the loop iteration to be skipped, the idx assignment will
be incorrect on the next iteration of the loop: the value stored in the
frame stack (as the subprogno of the current subprog) will be
incorrect. This leads to incorrect checks and incorrect
tail_call_reachable marking.

Save the target subprog in a new variable and only assign to idx once
we are done with the is_async_cb check, which may skip pushing the
frame to the frame stack and the subsequent stack depth checks and tail
call markings.

Fixes: 7ddc80a476c2 ("bpf: Teach stack depth check about async callbacks.")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7a00bf69bff8..66fee45d313d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5622,7 +5622,7 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
 continue_func:
 	subprog_end = subprog[idx + 1].start;
 	for (; i < subprog_end; i++) {
-		int next_insn;
+		int next_insn, sidx;
 
 		if (!bpf_pseudo_call(insn + i) && !bpf_pseudo_func(insn + i))
 			continue;
@@ -5632,14 +5632,14 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
 
 		/* find the callee */
 		next_insn = i + insn[i].imm + 1;
-		idx = find_subprog(env, next_insn);
-		if (idx < 0) {
+		sidx = find_subprog(env, next_insn);
+		if (sidx < 0) {
 			WARN_ONCE(1, "verifier bug. No program starts at insn %d\n",
 				  next_insn);
 			return -EFAULT;
 		}
-		if (subprog[idx].is_async_cb) {
-			if (subprog[idx].has_tail_call) {
+		if (subprog[sidx].is_async_cb) {
+			if (subprog[sidx].has_tail_call) {
 				verbose(env, "verifier bug. subprog has tail_call and async cb\n");
 				return -EFAULT;
 			}
@@ -5647,6 +5647,7 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
 			continue;
 		}
 		i = next_insn;
+		idx = sidx;
 		if (subprog[idx].has_tail_call)
 			tail_call_reachable = true;
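For reference, a hedged sketch of a program shape that exercises the
buggy path (the timer map layout is illustrative, timer init/start and
the usual includes are elided, and __noinline is assumed to come from
bpf_helpers.h): within one subprog, a reference to an async callback is
followed by another bpf2bpf call, so the skipped iteration used to
leave a clobbered idx behind for the frame pushed next:

struct elem { struct bpf_timer t; };

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} timer_map SEC(".maps");

static int timer_cb(void *map, int *key, struct bpf_timer *t)
{
	return 0;
}

static __noinline int other(void)
{
	return 1;
}

SEC("tc")
int prog(struct __sk_buff *ctx)
{
	struct elem *e;
	int key = 0;

	e = bpf_map_lookup_elem(&timer_map, &key);
	if (!e)
		return 0;
	/* is_async_cb: this iteration of the scan is skipped, but idx
	 * used to be overwritten with timer_cb's subprog index here.
	 */
	bpf_timer_set_callback(&e->t, timer_cb);
	/* The frame pushed for this call then recorded the wrong
	 * "current" subprog.
	 */
	return other();
}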
From patchwork Thu Jul 13 02:32:25 2023
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 03/10] bpf: Repeat check_max_stack_depth for async callbacks
Date: Thu, 13 Jul 2023 08:02:25 +0530
Message-Id: <20230713023232.1411523-4-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
While the check_max_stack_depth function explores call chains emanating
from the main prog, which is typically enough to cover all possible
call chains, it doesn't explore those rooted at async callbacks unless
the async callback is also called directly somewhere, since, unlike
non-async callbacks, their instructions are skipped during exploration
as they don't contribute to the caller's stack depth. It could be the
case that an async callback leads to a call chain which exceeds the
stack depth, but this is never reachable while only exploring the entry
point from the main subprog. Hence, repeat the check for the main
subprog *and* all async callbacks marked by the symbolic execution pass
of the verifier, as execution of the program may begin at any of them.

Consider functions with the following stack depths:

  main:  256
  async: 256
  foo:   256

main:
    rX = async
    bpf_timer_set_callback(...)

async:
    foo()

Here, async is never seen to contribute to the stack depth of main, so
it is not descended into, but when async is invoked asynchronously as
the timer fires, it will end up breaching the MAX_BPF_STACK limit
imposed by the verifier.
Fixes: 7ddc80a476c2 ("bpf: Teach stack depth check about async callbacks.")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 66fee45d313d..5d432df5df06 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5574,16 +5574,17 @@ static int update_stack_depth(struct bpf_verifier_env *env,
  * Since recursion is prevented by check_cfg() this algorithm
  * only needs a local stack of MAX_CALL_FRAMES to remember callsites
  */
-static int check_max_stack_depth(struct bpf_verifier_env *env)
+static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx)
 {
-	int depth = 0, frame = 0, idx = 0, i = 0, subprog_end;
 	struct bpf_subprog_info *subprog = env->subprog_info;
 	struct bpf_insn *insn = env->prog->insnsi;
+	int depth = 0, frame = 0, i, subprog_end;
 	bool tail_call_reachable = false;
 	int ret_insn[MAX_CALL_FRAMES];
 	int ret_prog[MAX_CALL_FRAMES];
 	int j;
 
+	i = subprog[idx].start;
 process_func:
 	/* protect against potential stack overflow that might happen when
 	 * bpf2bpf calls get combined with tailcalls. Limit the caller's stack
@@ -5683,6 +5684,22 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
 	goto continue_func;
 }
 
+static int check_max_stack_depth(struct bpf_verifier_env *env)
+{
+	struct bpf_subprog_info *si = env->subprog_info;
+	int ret;
+
+	for (int i = 0; i < env->subprog_cnt; i++) {
+		if (!i || si[i].is_async_cb) {
+			ret = check_max_stack_depth_subprog(env, i);
+			if (ret < 0)
+				return ret;
+		}
+		continue;
+	}
+	return 0;
+}
+
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 static int get_callee_stack_depth(struct bpf_verifier_env *env,
 				  const struct bpf_insn *insn, int idx)
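The commit message's scenario, written out as a hedged, compilable
sketch (map layout and names are illustrative; includes and timer
init/start are elided): each frame below uses roughly 300 bytes, so the
async -> foo chain needs about 600 bytes, beyond the 512-byte
MAX_BPF_STACK, yet no chain rooted at main_prog ever sees it until this
patch:

struct elem { struct bpf_timer t; };

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} timer_map SEC(".maps");

static __noinline int foo(void)
{
	volatile char buf[300] = {};

	return buf[0];
}

static int async(void *map, int *key, struct bpf_timer *t)
{
	volatile char buf[300] = {};

	return foo() + buf[0];
}

SEC("tc")
int main_prog(struct __sk_buff *ctx)
{
	volatile char buf[300] = {};
	struct elem *e;
	int key = 0;

	e = bpf_map_lookup_elem(&timer_map, &key);
	if (!e)
		return 0;
	/* async never contributes to main_prog's own stack depth */
	bpf_timer_set_callback(&e->t, async);
	return buf[0];
}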
From patchwork Thu Jul 13 02:32:26 2023
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 04/10] bpf: Add support for inserting new subprogs
Date: Thu, 13 Jul 2023 08:02:26 +0530
Message-Id: <20230713023232.1411523-5-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>

Introduce support in the verifier for generating a subprogram and
including it as part of a BPF program dynamically after the do_check
phase is complete. The appropriate place of invocation is
do_misc_fixups.
Since they are always appended to the end of the instruction sequence
of the program, it becomes relatively inexpensive to do the related
adjustments to the subprog_info of the program. Only the fake exit
subprogram is shifted forward by 1, making room for our invented
subprog.

This is useful to insert a new subprogram and obtain its function
pointer. The next patch will use this functionality to insert a default
exception callback which will be invoked after unwinding the stack.

Note that these invented subprograms are invisible to userspace, and
never reported in BPF_OBJ_GET_INFO_BY_ID etc. For now, only a single
invented program is supported, but more can be easily supported in the
future.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h          |  1 +
 include/linux/bpf_verifier.h |  4 +++-
 kernel/bpf/core.c            |  4 ++--
 kernel/bpf/syscall.c         | 19 ++++++++++++++++++-
 kernel/bpf/verifier.c        | 29 ++++++++++++++++++++++++++++-
 5 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 360433f14496..70f212dddfbf 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1385,6 +1385,7 @@ struct bpf_prog_aux {
 	bool sleepable;
 	bool tail_call_reachable;
 	bool xdp_has_frags;
+	bool invented_prog;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index f70f9ac884d2..360aa304ec09 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -540,6 +540,7 @@ struct bpf_subprog_info {
 	bool has_tail_call;
 	bool tail_call_reachable;
 	bool has_ld_abs;
+	bool invented_prog;
 	bool is_async_cb;
 };
 
@@ -594,10 +595,11 @@ struct bpf_verifier_env {
 	bool bypass_spec_v1;
 	bool bypass_spec_v4;
 	bool seen_direct_write;
+	bool invented_prog;
 	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 	const struct bpf_line_info *prev_linfo;
 	struct bpf_verifier_log log;
-	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
+	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 2]; /* max + 2 for the fake and exception subprogs */
 	union {
 		struct bpf_idmap idmap_scratch;
 		struct bpf_idset idset_scratch;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index dc85240a0134..5c484b2bc3d6 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -211,7 +211,7 @@ void bpf_prog_fill_jited_linfo(struct bpf_prog *prog,
 	const struct bpf_line_info *linfo;
 	void **jited_linfo;
 
-	if (!prog->aux->jited_linfo)
+	if (!prog->aux->jited_linfo || prog->aux->invented_prog)
 		/* Userspace did not provide linfo */
 		return;
 
@@ -579,7 +579,7 @@ bpf_prog_ksym_set_name(struct bpf_prog *prog)
 	sym = bin2hex(sym, prog->tag, sizeof(prog->tag));
 
 	/* prog->aux->name will be ignored if full btf name is available */
-	if (prog->aux->func_info_cnt) {
+	if (prog->aux->func_info_cnt && !prog->aux->invented_prog) {
 		type = btf_type_by_id(prog->aux->btf,
 				      prog->aux->func_info[prog->aux->func_idx].type_id);
 		func_name = btf_name_by_offset(prog->aux->btf, type->name_off);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ee8cb1a174aa..769550287bed 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4291,8 +4291,11 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 		u32 i;
 
 		info.jited_prog_len = 0;
-		for (i = 0; i < prog->aux->func_cnt; i++)
+		for (i = 0; i < prog->aux->func_cnt; i++) {
+			if (prog->aux->func[i]->aux->invented_prog)
+				break;
 			info.jited_prog_len += prog->aux->func[i]->jited_len;
+		}
 	} else {
 		info.jited_prog_len = prog->jited_len;
 	}
@@ -4311,6 +4314,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 			free = ulen;
 
 			for (i = 0; i < prog->aux->func_cnt; i++) {
+				if (prog->aux->func[i]->aux->invented_prog)
+					break;
 				len = prog->aux->func[i]->jited_len;
 				len = min_t(u32, len, free);
 				img = (u8 *) prog->aux->func[i]->bpf_func;
@@ -4332,6 +4337,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 
 	ulen = info.nr_jited_ksyms;
 	info.nr_jited_ksyms = prog->aux->func_cnt ? : 1;
+	if (prog->aux->func_cnt && prog->aux->func[prog->aux->func_cnt - 1]->aux->invented_prog)
+		info.nr_jited_ksyms--;
 	if (ulen) {
 		if (bpf_dump_raw_ok(file->f_cred)) {
 			unsigned long ksym_addr;
@@ -4345,6 +4352,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 			user_ksyms = u64_to_user_ptr(info.jited_ksyms);
 			if (prog->aux->func_cnt) {
 				for (i = 0; i < ulen; i++) {
+					if (prog->aux->func[i]->aux->invented_prog)
+						break;
 					ksym_addr = (unsigned long)
 						prog->aux->func[i]->bpf_func;
 					if (put_user((u64) ksym_addr,
@@ -4363,6 +4372,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 
 	ulen = info.nr_jited_func_lens;
 	info.nr_jited_func_lens = prog->aux->func_cnt ? : 1;
+	if (prog->aux->func_cnt && prog->aux->func[prog->aux->func_cnt - 1]->aux->invented_prog)
+		info.nr_jited_func_lens--;
 	if (ulen) {
 		if (bpf_dump_raw_ok(file->f_cred)) {
 			u32 __user *user_lens;
@@ -4373,6 +4384,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 			user_lens = u64_to_user_ptr(info.jited_func_lens);
 			if (prog->aux->func_cnt) {
 				for (i = 0; i < ulen; i++) {
+					if (prog->aux->func[i]->aux->invented_prog)
+						break;
 					func_len = prog->aux->func[i]->jited_len;
 					if (put_user(func_len,
 						     &user_lens[i]))
@@ -4443,6 +4456,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 
 	ulen = info.nr_prog_tags;
 	info.nr_prog_tags = prog->aux->func_cnt ? : 1;
+	if (prog->aux->func_cnt && prog->aux->func[prog->aux->func_cnt - 1]->aux->invented_prog)
+		info.nr_prog_tags--;
 	if (ulen) {
 		__u8 __user (*user_prog_tags)[BPF_TAG_SIZE];
 		u32 i;
 
 		ulen = min_t(u32, info.nr_prog_tags, ulen);
 		if (prog->aux->func_cnt) {
 			for (i = 0; i < ulen; i++) {
+				if (prog->aux->func[i]->aux->invented_prog)
+					break;
 				if (copy_to_user(user_prog_tags[i],
 						 prog->aux->func[i]->tag,
 						 BPF_TAG_SIZE))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5d432df5df06..8ce72a7b4758 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14891,8 +14891,11 @@ static void adjust_btf_func(struct bpf_verifier_env *env)
 	if (!aux->func_info)
 		return;
 
-	for (i = 0; i < env->subprog_cnt; i++)
+	for (i = 0; i < env->subprog_cnt; i++) {
+		if (env->subprog_info[i].invented_prog)
+			break;
 		aux->func_info[i].insn_off = env->subprog_info[i].start;
+	}
 }
 
 #define MIN_BPF_LINEINFO_SIZE	offsetofend(struct bpf_line_info, line_col)
@@ -17778,6 +17781,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		}
 		func[i]->aux->num_exentries = num_exentries;
 		func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
+		func[i]->aux->invented_prog = env->subprog_info[i].invented_prog;
 		func[i] = bpf_int_jit_compile(func[i]);
 		if (!func[i]->jited) {
 			err = -ENOTSUPP;
@@ -18071,6 +18075,29 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
+/* The function requires that first instruction in 'patch' is insnsi[prog->len - 1] */
+static int invent_subprog(struct bpf_verifier_env *env, struct bpf_insn *patch, int len)
+{
+	struct bpf_subprog_info *info = env->subprog_info;
+	int cnt = env->subprog_cnt;
+	struct bpf_prog *prog;
+
+	if (env->invented_prog) {
+		verbose(env, "verifier internal error: only one invented prog supported\n");
+		return -EFAULT;
+	}
+	prog = bpf_patch_insn_data(env, env->prog->len - 1, patch, len);
+	if (!prog)
+		return -ENOMEM;
+	env->prog = prog;
+	info[cnt + 1].start = info[cnt].start;
+	info[cnt].start = prog->len - len + 1;
+	info[cnt].invented_prog = true;
+	env->subprog_cnt++;
+	env->invented_prog = true;
+	return 0;
+}
+
 /* Do various post-verification rewrites in a single program pass.
  * These rewrites simplify JIT and interpreter implementations.
  */
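For orientation, a hedged sketch of how a caller is expected to use
this, mirroring the next patch, which appends a subprog that zeroes R0
and exits (the wrapper name here is invented for illustration). Per the
comment above invent_subprog, the first patch instruction must
duplicate the program's current last instruction:

static int add_default_exception_cb(struct bpf_verifier_env *env)
{
	struct bpf_insn patch[] = {
		env->prog->insnsi[env->prog->len - 1], /* existing last insn */
		BPF_MOV64_IMM(BPF_REG_0, 0),           /* r0 = 0 */
		BPF_EXIT_INSN(),                       /* return r0 */
	};

	return invent_subprog(env, patch, ARRAY_SIZE(patch));
}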
From patchwork Thu Jul 13 02:32:27 2023
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 05/10] arch/x86: Implement arch_bpf_stack_walk
Date: Thu, 13 Jul 2023 08:02:27 +0530
Message-Id: <20230713023232.1411523-6-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
The plumbing for offline unwinding when we throw an exception in
programs requires walking the stack, hence introduce a new
arch_bpf_stack_walk function. This is provided when the JIT supports
exceptions, i.e. bpf_jit_supports_exceptions is true. The arch-specific
code is really minimal, hence it should be straightforward to extend
this support to other architectures as well, as it reuses the logic of
arch_stack_walk, but allows access to unwind_state data.

Once the stack pointer and frame pointer are known for the main subprog
during the unwinding, we know the stack layout and location of any
callee-saved registers which must be restored before we return back to
the kernel. This handling will be added in the next patch.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 21 +++++++++++++++++++++
 include/linux/filter.h      |  2 ++
 kernel/bpf/core.c           |  9 +++++++++
 3 files changed, 32 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 438adb695daa..d326503ce242 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
 {
@@ -2660,3 +2661,23 @@ void bpf_jit_free(struct bpf_prog *prog)
 
 	bpf_prog_unlock_free(prog);
 }
+
+bool bpf_jit_supports_exceptions(void)
+{
+	return IS_ENABLED(CONFIG_UNWINDER_ORC) || IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER);
+}
+
+void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie)
+{
+#if defined(CONFIG_UNWINDER_ORC) || defined(CONFIG_UNWINDER_FRAME_POINTER)
+	struct unwind_state state;
+	unsigned long addr;
+
+	for (unwind_start(&state, current, NULL, NULL); !unwind_done(&state);
+	     unwind_next_frame(&state)) {
+		addr = unwind_get_return_address(&state);
+		if (!addr || !consume_fn(cookie, (u64)addr, (u64)state.sp, (u64)state.bp))
+			break;
+	}
+#endif
+}
diff --git a/include/linux/filter.h b/include/linux/filter.h
index f69114083ec7..21ac801330bb 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -920,6 +920,8 @@ bool bpf_jit_needs_zext(void);
 bool bpf_jit_supports_subprog_tailcalls(void);
 bool bpf_jit_supports_kfunc_call(void);
 bool bpf_jit_supports_far_kfunc_call(void);
+bool bpf_jit_supports_exceptions(void);
+void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie);
 bool bpf_helper_changes_pkt_data(void *func);
 
 static inline bool bpf_dump_raw_ok(const struct cred *cred)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5c484b2bc3d6..5e224cf0ec27 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2770,6 +2770,15 @@ int __weak bpf_arch_text_invalidate(void *dst, size_t len)
 	return -ENOTSUPP;
 }
 
+bool __weak bpf_jit_supports_exceptions(void)
+{
+	return false;
+}
+
+void __weak arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie)
+{
+}
+
 #ifdef CONFIG_BPF_SYSCALL
 static int __init bpf_global_ma_init(void)
 {
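A minimal consumer might look like the following sketch (modeled on the
bpf_throw unwinder added in the next patch; the struct, function names,
and pr_info output are illustrative, not part of this series):

struct dump_ctx {
	int frames;
};

static bool dump_frame(void *cookie, u64 ip, u64 sp, u64 bp)
{
	struct dump_ctx *ctx = cookie;

	pr_info("frame %d: ip=%llx sp=%llx bp=%llx\n", ctx->frames++, ip, sp, bp);
	return true; /* returning false stops the walk */
}

static void dump_stack_frames(void)
{
	struct dump_ctx ctx = {};

	arch_bpf_stack_walk(dump_frame, &ctx);
}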
From patchwork Thu Jul 13 02:32:28 2023
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 06/10] bpf: Implement bpf_throw kfunc
Date: Thu, 13 Jul 2023 08:02:28 +0530
Message-Id: <20230713023232.1411523-7-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
This patch implements BPF exceptions, and introduces a bpf_throw kfunc
to allow programs to throw exceptions during their execution at
runtime. A bpf_throw invocation is treated as an immediate termination
of the program, returning back to its caller within the kernel,
unwinding all stack frames.

This allows the program to simplify its implementation, by testing for
runtime conditions which the verifier has no visibility into, and
asserting that they are true. In case they are not, the program can
simply throw an exception from the other branch.

BPF exceptions are explicitly *NOT* an unlikely slowpath error handling
primitive, and this objective has guided design choices of their
implementation within the kernel (with the bulk of the cost for
unwinding the stack offloaded to the bpf_throw kfunc).

The implementation of this mechanism requires use of the invent_subprog
mechanism introduced in the previous patch, which generates a couple of
instructions to zero R0 and exit. The JIT then rewrites the prologue of
this subprog to take the stack pointer and frame pointer as inputs and
reset the stack frame, popping all callee-saved registers saved by the
main subprog. The bpf_throw function then walks the stack at runtime,
and invokes this exception subprog with the stack and frame pointers as
parameters.

Reviewers must take note that currently the main program is made to
save all callee-saved registers on x86_64 during entry into the
program. This is because we must do an equivalent of a lightweight
context switch when unwinding the stack, therefore we need the
callee-saved registers of the caller of the BPF program to be able to
return with a sane state.

Note that we have to additionally handle r12, even though it is not
used by the program, because when throwing the exception the program
makes an entry into the kernel which could clobber r12 after saving it
on the stack. To be able to preserve the value we received on program
entry, we push r12 and restore it from the generated subprogram when
unwinding the stack.

All of this can however be addressed by recording which callee-saved
registers are saved for each program, and then restoring them from the
corresponding stack frames (mapped to each program) when unwinding.
This would not require pushing all callee-saved registers on entry into
a BPF program. However, this optimization is deferred to a future patch
to manage the number of moving pieces within this set.

For now, bpf_throw invocation fails when lingering resources or locks
exist in that path of the program. In a future followup, bpf_throw will
be extended to perform frame-by-frame unwinding to release lingering
resources for each stack frame, removing this limitation.
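As a usage sketch (assuming the bpf_throw declaration this patch adds
to tools/testing/selftests/bpf/bpf_experimental.h; program and section
names are illustrative):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

char _license[] SEC("license") = "GPL";

SEC("tc")
int assert_len(struct __sk_buff *ctx)
{
	/* A runtime condition the verifier has no visibility into;
	 * instead of plumbing an error path through every caller,
	 * terminate the program and unwind all frames.
	 */
	if (ctx->len > 1024)
		bpf_throw(0);
	return 1;
}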
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c                   |  73 +++++++----
 include/linux/bpf.h                           |   3 +
 include/linux/bpf_verifier.h                  |   4 +
 include/linux/filter.h                        |   6 +
 kernel/bpf/core.c                             |   2 +-
 kernel/bpf/helpers.c                          |  38 ++++++
 kernel/bpf/verifier.c                         | 124 ++++++++++++++++--
 .../testing/selftests/bpf/bpf_experimental.h  |   6 +
 8 files changed, 219 insertions(+), 37 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index d326503ce242..8d97c6a60f9a 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -256,32 +256,36 @@ struct jit_context {
 /* Number of bytes that will be skipped on tailcall */
 #define X86_TAIL_CALL_OFFSET	(11 + ENDBR_INSN_SIZE)
 
-static void push_callee_regs(u8 **pprog, bool *callee_regs_used)
+static void push_callee_regs(u8 **pprog, bool *callee_regs_used, bool force)
 {
 	u8 *prog = *pprog;
 
-	if (callee_regs_used[0])
+	if (callee_regs_used[0] || force)
 		EMIT1(0x53);         /* push rbx */
-	if (callee_regs_used[1])
+	if (force)
+		EMIT2(0x41, 0x54);   /* push r12 */
+	if (callee_regs_used[1] || force)
 		EMIT2(0x41, 0x55);   /* push r13 */
-	if (callee_regs_used[2])
+	if (callee_regs_used[2] || force)
 		EMIT2(0x41, 0x56);   /* push r14 */
-	if (callee_regs_used[3])
+	if (callee_regs_used[3] || force)
 		EMIT2(0x41, 0x57);   /* push r15 */
 	*pprog = prog;
 }
 
-static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
+static void pop_callee_regs(u8 **pprog, bool *callee_regs_used, bool force)
 {
 	u8 *prog = *pprog;
 
-	if (callee_regs_used[3])
+	if (callee_regs_used[3] || force)
 		EMIT2(0x41, 0x5F);   /* pop r15 */
-	if (callee_regs_used[2])
+	if (callee_regs_used[2] || force)
 		EMIT2(0x41, 0x5E);   /* pop r14 */
-	if (callee_regs_used[1])
+	if (callee_regs_used[1] || force)
 		EMIT2(0x41, 0x5D);   /* pop r13 */
-	if (callee_regs_used[0])
+	if (force)
+		EMIT2(0x41, 0x5C);   /* pop r12 */
+	if (callee_regs_used[0] || force)
 		EMIT1(0x5B);         /* pop rbx */
 	*pprog = prog;
 }
@@ -292,7 +296,8 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
  * while jumping to another program
  */
 static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
-			  bool tail_call_reachable, bool is_subprog)
+			  bool tail_call_reachable, bool is_subprog,
+			  bool is_exception_cb)
 {
 	u8 *prog = *pprog;
 
@@ -308,8 +313,23 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 		else
 			EMIT2(0x66, 0x90); /* nop2 */
 	}
-	EMIT1(0x55);             /* push rbp */
-	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+	/* Exception callback receives FP as second parameter */
+	if (is_exception_cb) {
+		bool regs_used[4] = {};
+
+		EMIT3(0x48, 0x89, 0xF4); /* mov rsp, rsi */
+		EMIT3(0x48, 0x89, 0xD5); /* mov rbp, rdx */
+		/* The main frame must have seen_exception as true, so we first
+		 * restore those callee-saved regs from stack, before reusing
+		 * the stack frame.
+		 */
+		pop_callee_regs(&prog, regs_used, true);
+		/* Reset the stack frame. */
+		EMIT3(0x48, 0x89, 0xEC); /* mov rsp, rbp */
+	} else {
+		EMIT1(0x55);             /* push rbp */
+		EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+	}
 
 	/* X86_TAIL_CALL_OFFSET is here */
 	EMIT_ENDBR();
@@ -468,10 +488,12 @@ static void emit_return(u8 **pprog, u8 *ip)
  *   goto *(prog->bpf_func + prologue_size);
  * out:
  */
-static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+					u8 **pprog, bool *callee_regs_used,
 					u32 stack_depth, u8 *ip,
 					struct jit_context *ctx)
 {
+	bool force_pop_all = bpf_prog->aux->seen_exception;
 	int tcc_off = -4 - round_up(stack_depth, 8);
 	u8 *prog = *pprog, *start = *pprog;
 	int offset;
@@ -518,7 +540,7 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
 	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
 	EMIT2(X86_JE, offset);                    /* je out */
 
-	pop_callee_regs(&prog, callee_regs_used);
+	pop_callee_regs(&prog, callee_regs_used, force_pop_all);
 	EMIT1(0x58);                              /* pop rax */
 	if (stack_depth)
@@ -542,11 +564,13 @@
-static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
+static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+				      struct bpf_jit_poke_descriptor *poke,
 				      u8 **pprog, u8 *ip,
 				      bool *callee_regs_used, u32 stack_depth,
 				      struct jit_context *ctx)
 {
+	bool force_pop_all = bpf_prog->aux->seen_exception;
 	int tcc_off = -4 - round_up(stack_depth, 8);
 	u8 *prog = *pprog, *start = *pprog;
 	int offset;
@@ -571,7 +595,7 @@ static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
 	emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE,
 		  poke->tailcall_bypass);
 
-	pop_callee_regs(&prog, callee_regs_used);
+	pop_callee_regs(&prog, callee_regs_used, force_pop_all);
 	EMIT1(0x58);                              /* pop rax */
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
@@ -987,8 +1011,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
 		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
-		      bpf_prog->aux->func_idx != 0);
-	push_callee_regs(&prog, callee_regs_used);
+		      bpf_prog->aux->func_idx != 0, bpf_prog->aux->exception_cb);
+	/* Exception callback will clobber callee regs for its own use, and
+	 * restore the original callee regs from main prog's stack frame.
+	 */
+	push_callee_regs(&prog, callee_regs_used, bpf_prog->aux->seen_exception);
 
 	ilen = prog - temp;
 	if (rw_image)
@@ -1557,13 +1584,15 @@ st:			if (is_imm8(insn->off))
 
 		case BPF_JMP | BPF_TAIL_CALL:
 			if (imm32)
-				emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1],
+				emit_bpf_tail_call_direct(bpf_prog,
+							  &bpf_prog->aux->poke_tab[imm32 - 1],
 							  &prog, image + addrs[i - 1],
 							  callee_regs_used,
 							  bpf_prog->aux->stack_depth,
 							  ctx);
 			else
-				emit_bpf_tail_call_indirect(&prog,
+				emit_bpf_tail_call_indirect(bpf_prog,
+							    &prog,
 							    callee_regs_used,
 							    bpf_prog->aux->stack_depth,
 							    image + addrs[i - 1],
@@ -1808,7 +1837,7 @@ st:			if (is_imm8(insn->off))
 			seen_exit = true;
 
 			/* Update cleanup_addr */
 			ctx->cleanup_addr = proglen;
-			pop_callee_regs(&prog, callee_regs_used);
+			pop_callee_regs(&prog, callee_regs_used, bpf_prog->aux->seen_exception);
 			EMIT1(0xC9);         /* leave */
 			emit_return(&prog, image + addrs[i - 1] + (prog - temp));
 			break;
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 70f212dddfbf..61cdb291311f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1386,6 +1386,8 @@ struct bpf_prog_aux {
 	bool tail_call_reachable;
 	bool xdp_has_frags;
 	bool invented_prog;
+	bool exception_cb;
+	bool seen_exception;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
@@ -1408,6 +1410,7 @@ struct bpf_prog_aux {
 	int cgroup_atype; /* enum cgroup_bpf_attach_type */
 	struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE];
 	char name[BPF_OBJ_NAME_LEN];
+	unsigned int (*bpf_exception_cb)(u64 cookie, u64 sp, u64 bp);
 #ifdef CONFIG_SECURITY
 	void *security;
 #endif
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 360aa304ec09..e28386fa462f 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -541,7 +541,9 @@ struct bpf_subprog_info {
 	bool tail_call_reachable;
 	bool has_ld_abs;
 	bool invented_prog;
+	bool is_cb;
 	bool is_async_cb;
+	bool is_exception_cb;
 };
 
 struct bpf_verifier_env;
@@ -588,6 +590,7 @@ struct bpf_verifier_env {
 	u32 used_map_cnt;		/* number of used maps */
 	u32 used_btf_cnt;		/* number of used BTF objects */
 	u32 id_gen;			/* used to generate unique reg IDs */
+	int exception_callback_subprog;
 	bool explore_alu_limits;
 	bool allow_ptr_leaks;
 	bool allow_uninit_stack;
@@ -596,6 +599,7 @@ struct bpf_verifier_env {
 	bool bypass_spec_v4;
 	bool seen_direct_write;
 	bool invented_prog;
+	bool seen_exception;
 	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 	const struct bpf_line_info *prev_linfo;
 	struct bpf_verifier_log log;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 21ac801330bb..f45a54f8dd7d 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1137,6 +1137,7 @@ const char *__bpf_address_lookup(unsigned long addr, unsigned long *size,
 bool is_bpf_text_address(unsigned long addr);
 int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
 		    char *sym);
+struct bpf_prog *bpf_prog_ksym_find(unsigned long addr);
 
 static inline const char *
 bpf_address_lookup(unsigned long addr, unsigned long *size,
@@ -1204,6 +1205,11 @@ static inline int bpf_get_kallsym(unsigned int symnum, unsigned long *value,
 	return -ERANGE;
 }
 
+static inline struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
+{
+	return NULL;
+}
+
 static inline const char *
 bpf_address_lookup(unsigned long addr, unsigned long *size,
 		   unsigned long *off, char **modname, char *sym)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5e224cf0ec27..efbc2f965226 100644
5e224cf0ec27..efbc2f965226 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -723,7 +723,7 @@ bool is_bpf_text_address(unsigned long addr) return ret; } -static struct bpf_prog *bpf_prog_ksym_find(unsigned long addr) +struct bpf_prog *bpf_prog_ksym_find(unsigned long addr) { struct bpf_ksym *ksym = bpf_ksym_find(addr); diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 9e80efa59a5d..da1493a1da25 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -2402,6 +2402,43 @@ __bpf_kfunc void bpf_rcu_read_unlock(void) rcu_read_unlock(); } +struct bpf_throw_ctx { + struct bpf_prog_aux *aux; + u64 sp; + u64 bp; + int cnt; +}; + +static bool bpf_stack_walker(void *cookie, u64 ip, u64 sp, u64 bp) +{ + struct bpf_throw_ctx *ctx = cookie; + struct bpf_prog *prog; + + if (!is_bpf_text_address(ip)) + return !ctx->cnt; + prog = bpf_prog_ksym_find(ip); + ctx->cnt++; + if (!prog->aux->id) + return true; + ctx->aux = prog->aux; + ctx->sp = sp; + ctx->bp = bp; + return false; +} + +__bpf_kfunc void bpf_throw(u64 cookie) +{ + struct bpf_throw_ctx ctx = {}; + + arch_bpf_stack_walk(bpf_stack_walker, &ctx); + WARN_ON_ONCE(!ctx.aux); + if (ctx.aux) + WARN_ON_ONCE(!ctx.aux->seen_exception); + WARN_ON_ONCE(!ctx.bp); + WARN_ON_ONCE(!ctx.cnt); + ctx.aux->bpf_exception_cb(cookie, ctx.sp, ctx.bp); +} + __diag_pop(); BTF_SET8_START(generic_btf_ids) @@ -2429,6 +2466,7 @@ BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_task_under_cgroup, KF_RCU) #endif BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_throw) BTF_SET8_END(generic_btf_ids) static const struct btf_kfunc_id_set generic_kfunc_set = { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 8ce72a7b4758..61101a87d96b 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -542,6 +542,8 @@ static bool is_dynptr_ref_function(enum bpf_func_id func_id) } static bool is_callback_calling_kfunc(u32 btf_id); +static bool is_forbidden_exception_kfunc_in_cb(u32 btf_id); +static bool is_bpf_throw_kfunc(struct bpf_insn *insn); static bool is_callback_calling_function(enum bpf_func_id func_id) { @@ -2864,11 +2866,12 @@ static int check_subprogs(struct bpf_verifier_env *env) if (i == subprog_end - 1) { /* to avoid fall-through from one subprog into another * the last insn of the subprog should be either exit - * or unconditional jump back + * or unconditional jump back or bpf_throw call */ if (code != (BPF_JMP | BPF_EXIT) && - code != (BPF_JMP | BPF_JA)) { - verbose(env, "last insn is not an exit or jmp\n"); + code != (BPF_JMP | BPF_JA) && + !is_bpf_throw_kfunc(insn + i)) { + verbose(env, "last insn is not an exit or jmp or bpf_throw call\n"); return -EINVAL; } subprog_start = subprog_end; @@ -5625,6 +5628,25 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx) for (; i < subprog_end; i++) { int next_insn, sidx; + if (bpf_pseudo_kfunc_call(insn + i) && !insn[i].off) { + bool err = false; + + if (!is_forbidden_exception_kfunc_in_cb(insn[i].imm)) + continue; + if (subprog[idx].is_cb) + err = true; + for (int c = 0; c < frame && !err; c++) { + if (subprog[ret_prog[c]].is_cb) { + err = true; + break; + } + } + if (!err) + continue; + verbose(env, "exception kfunc insn %d cannot be called from callback\n", i); + return -EINVAL; + } + if (!bpf_pseudo_call(insn + i) && !bpf_pseudo_func(insn + i)) continue; /* remember insn and function to return to */ @@ -8734,6 +8756,7 @@ static int __check_func_call(struct bpf_verifier_env *env, 
struct bpf_insn *insn * callbacks */ if (set_callee_state_cb != set_callee_state) { + env->subprog_info[subprog].is_cb = true; if (bpf_pseudo_kfunc_call(insn) && !is_callback_calling_kfunc(insn->imm)) { verbose(env, "verifier bug: kfunc %s#%d not marked as callback-calling\n", @@ -9247,17 +9270,17 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta, return 0; } -static int check_reference_leak(struct bpf_verifier_env *env) +static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit) { struct bpf_func_state *state = cur_func(env); bool refs_lingering = false; int i; - if (state->frameno && !state->in_callback_fn) + if (!exception_exit && state->frameno && !state->in_callback_fn) return 0; for (i = 0; i < state->acquired_refs; i++) { - if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno) + if (!exception_exit && state->in_callback_fn && state->refs[i].callback_ref != state->frameno) continue; verbose(env, "Unreleased reference id=%d alloc_insn=%d\n", state->refs[i].id, state->refs[i].insn_idx); @@ -9491,7 +9514,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn switch (func_id) { case BPF_FUNC_tail_call: - err = check_reference_leak(env); + err = check_reference_leak(env, false); if (err) { verbose(env, "tail_call would lead to reference leak\n"); return err; @@ -10109,6 +10132,7 @@ enum special_kfunc_type { KF_bpf_dynptr_slice, KF_bpf_dynptr_slice_rdwr, KF_bpf_dynptr_clone, + KF_bpf_throw, }; BTF_SET_START(special_kfunc_set) @@ -10129,6 +10153,7 @@ BTF_ID(func, bpf_dynptr_from_xdp) BTF_ID(func, bpf_dynptr_slice) BTF_ID(func, bpf_dynptr_slice_rdwr) BTF_ID(func, bpf_dynptr_clone) +BTF_ID(func, bpf_throw) BTF_SET_END(special_kfunc_set) BTF_ID_LIST(special_kfunc_list) @@ -10151,6 +10176,7 @@ BTF_ID(func, bpf_dynptr_from_xdp) BTF_ID(func, bpf_dynptr_slice) BTF_ID(func, bpf_dynptr_slice_rdwr) BTF_ID(func, bpf_dynptr_clone) +BTF_ID(func, bpf_throw) static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta) { @@ -10464,6 +10490,17 @@ static bool is_callback_calling_kfunc(u32 btf_id) return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl]; } +static bool is_bpf_throw_kfunc(struct bpf_insn *insn) +{ + return bpf_pseudo_kfunc_call(insn) && insn->off == 0 && + insn->imm == special_kfunc_list[KF_bpf_throw]; +} + +static bool is_forbidden_exception_kfunc_in_cb(u32 btf_id) +{ + return btf_id == special_kfunc_list[KF_bpf_throw]; +} + static bool is_rbtree_lock_required_kfunc(u32 btf_id) { return is_bpf_rbtree_api_kfunc(btf_id); @@ -11140,6 +11177,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, const struct btf_param *args; const struct btf_type *ret_t; struct btf *desc_btf; + bool throw = false; /* skip for now, but return error when we find this in fixup_kfunc_call */ if (!insn->imm) @@ -11242,6 +11280,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, } } + if (meta.func_id == special_kfunc_list[KF_bpf_throw]) { + if (!bpf_jit_supports_exceptions()) { + verbose(env, "JIT does not support calling kfunc %s#%d\n", + func_name, meta.func_id); + return -EINVAL; + } + env->seen_exception = true; + throw = true; + } + for (i = 0; i < CALLER_SAVED_REGS; i++) mark_reg_not_init(env, regs, caller_saved[i]); @@ -11463,7 +11511,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, return err; } - return 0; + return throw ? 
1 : 0; } static bool signed_add_overflows(s64 a, s64 b) @@ -14211,7 +14259,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn) * gen_ld_abs() may terminate the program at runtime, leading to * reference leak. */ - err = check_reference_leak(env); + err = check_reference_leak(env, false); if (err) { verbose(env, "BPF_LD_[ABS|IND] cannot be mixed with socket references\n"); return err; @@ -14619,6 +14667,9 @@ static int visit_insn(int t, struct bpf_verifier_env *env) if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { struct bpf_kfunc_call_arg_meta meta; + /* No fallthrough edge to walk, same as BPF_EXIT */ + if (is_bpf_throw_kfunc(insn)) + return DONE_EXPLORING; ret = fetch_kfunc_meta(env, insn, &meta, NULL); if (ret == 0 && is_iter_next_kfunc(&meta)) { mark_prune_point(env, t); @@ -16222,6 +16273,7 @@ static int do_check(struct bpf_verifier_env *env) int prev_insn_idx = -1; for (;;) { + bool exception_exit = false; struct bpf_insn *insn; u8 class; int err; @@ -16435,12 +16487,18 @@ static int do_check(struct bpf_verifier_env *env) return -EINVAL; } } - if (insn->src_reg == BPF_PSEUDO_CALL) + if (insn->src_reg == BPF_PSEUDO_CALL) { err = check_func_call(env, insn, &env->insn_idx); - else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) + } else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { err = check_kfunc_call(env, insn, &env->insn_idx); - else + if (err == 1) { + err = 0; + exception_exit = true; + goto process_bpf_exit_full; + } + } else { err = check_helper_call(env, insn, &env->insn_idx); + } if (err) return err; @@ -16467,7 +16525,7 @@ static int do_check(struct bpf_verifier_env *env) verbose(env, "BPF_EXIT uses reserved fields\n"); return -EINVAL; } - +process_bpf_exit_full: if (env->cur_state->active_lock.ptr && !in_rbtree_lock_required_cb(env)) { verbose(env, "bpf_spin_unlock is missing\n"); @@ -16485,10 +16543,23 @@ static int do_check(struct bpf_verifier_env *env) * function, for which reference_state must * match caller reference state when it exits. */ - err = check_reference_leak(env); + err = check_reference_leak(env, exception_exit); if (err) return err; + /* The side effect of the prepare_func_exit + * which is being skipped is that it frees + * bpf_func_state. Typically, process_bpf_exit + * will only be hit with outermost exit. + * copy_verifier_state in pop_stack will handle + * freeing of any extra bpf_func_state left over + * from not processing all nested function + * exits. We also skip return code checks as + * they are not needed for exceptional exits. 
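+ * In effect, a throw from any frame depth is handled like an exit
+ * from the outermost frame as far as state cleanup is concerned.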
+ */ + if (exception_exit) + goto process_bpf_exit; + if (state->curframe) { /* exit from nested function */ err = prepare_func_exit(env, &env->insn_idx); @@ -17782,6 +17853,9 @@ static int jit_subprogs(struct bpf_verifier_env *env) func[i]->aux->num_exentries = num_exentries; func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable; func[i]->aux->invented_prog = env->subprog_info[i].invented_prog; + func[i]->aux->exception_cb = env->subprog_info[i].is_exception_cb; + if (!i) + func[i]->aux->seen_exception = env->seen_exception; func[i] = bpf_int_jit_compile(func[i]); if (!func[i]->jited) { err = -ENOTSUPP; @@ -17868,6 +17942,8 @@ static int jit_subprogs(struct bpf_verifier_env *env) prog->aux->num_exentries = func[0]->aux->num_exentries; prog->aux->func = func; prog->aux->func_cnt = env->subprog_cnt; + prog->aux->bpf_exception_cb = (void *)func[env->exception_callback_subprog]->bpf_func; + prog->aux->seen_exception = func[0]->aux->seen_exception; bpf_prog_jit_attempt_done(prog); return 0; out_free: @@ -18116,6 +18192,26 @@ static int do_misc_fixups(struct bpf_verifier_env *env) struct bpf_map *map_ptr; int i, ret, cnt, delta = 0; + if (env->seen_exception && !env->exception_callback_subprog) { + struct bpf_insn patch[] = { + env->prog->insnsi[insn_cnt - 1], + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + + ret = invent_subprog(env, patch, ARRAY_SIZE(patch)); + if (ret < 0) + return ret; + prog = env->prog; + insn = prog->insnsi; + + env->exception_callback_subprog = env->subprog_cnt - 1; + /* Don't update insn_cnt, as invent_subprog always appends insns */ + env->subprog_info[env->exception_callback_subprog].is_cb = true; + env->subprog_info[env->exception_callback_subprog].is_async_cb = true; + env->subprog_info[env->exception_callback_subprog].is_exception_cb = true; + } + for (i = 0; i < insn_cnt; i++, insn++) { /* Make divide-by-zero exceptions impossible. 
*/ if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) || diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index 209811b1993a..f1d7de1349bc 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -131,4 +131,10 @@ extern int bpf_rbtree_add_impl(struct bpf_rb_root *root, struct bpf_rb_node *nod */ extern struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root) __ksym; +__attribute__((noreturn)) +extern void bpf_throw(u64 cookie) __ksym; + +#define throw bpf_throw(0) +#define throw_value(cookie) bpf_throw(cookie) + #endif
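As a usage sketch (not part of the patch itself): with the declarations above, a minimal throwing program could look as follows, assuming vmlinux.h and libbpf's bpf_helpers.h for SEC():

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

SEC("tc")
int drop_short_packets(struct __sk_buff *ctx)
{
	/* throw expands to bpf_throw(0); the program aborts here and the
	 * exception callback (default or user-installed) supplies the
	 * return value, control never returns to this frame.
	 */
	if (ctx->len < 14)
		throw;
	return 1;
}

char _license[] SEC("license") = "GPL";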
From patchwork Thu Jul 13 02:32:29 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13311199
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 07/10] bpf: Ensure IP is within prog->jited_length for bpf_throw calls
Date: Thu, 13 Jul 2023 08:02:29 +0530
Message-Id: <20230713023232.1411523-8-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
References: <20230713023232.1411523-1-memxor@gmail.com>

Now that we allow exception throwing using the bpf_throw kfunc, it can appear as the final instruction in a prog. When this happens, and we begin to unwind the stack using arch_bpf_stack_walk, the instruction pointer (IP) may appear to lie outside the JITed instructions. This happens because the return address is the instruction following the call, but bpf_throw never returns to the program, so the JIT considers the instruction ending at the bpf_throw call to be the final JITed instruction and the end of jited_length for the program. This becomes a problem when we search for the IP using is_bpf_text_address and bpf_prog_ksym_find, both of which use bpf_ksym_find under the hood, and it rightfully considers addr == ksym.end to be outside the program's boundaries. Insert a dummy 'int3' instruction which will never be hit to bump the jited_length and allow us to handle programs whose final instruction is a call to bpf_throw.

Signed-off-by: Kumar Kartikeya Dwivedi
---
arch/x86/net/bpf_jit_comp.c | 11 +++++++++++ include/linux/bpf.h | 2 ++ 2 files changed, 13 insertions(+)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 8d97c6a60f9a..052230cc7f50 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -1579,6 +1579,17 @@ st: if (is_imm8(insn->off)) } if (emit_call(&prog, func, image + addrs[i - 1] + offs)) return -EINVAL; + /* Similar to BPF_EXIT_INSN, call for bpf_throw may be + * the final instruction in the program. Insert an int3
Insert an int3 + * following the call instruction so that we can still + * detect pc to be part of the bpf_prog in + * bpf_ksym_find, otherwise when this is the last + * instruction (as allowed by verifier, similar to exit + * and jump instructions), pc will be == ksym.end, + * leading to bpf_throw failing to unwind the stack. + */ + if (func == (u8 *)&bpf_throw) + EMIT1(0xCC); /* int3 */ break; } diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 61cdb291311f..1652d184ee7f 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -3111,4 +3111,6 @@ static inline gfp_t bpf_memcg_flags(gfp_t flags) return flags; } +extern void bpf_throw(u64); + #endif /* _LINUX_BPF_H */ From patchwork Thu Jul 13 02:32:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13311200 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 11FFE7C for ; Thu, 13 Jul 2023 02:33:12 +0000 (UTC) Received: from mail-oi1-x244.google.com (mail-oi1-x244.google.com [IPv6:2607:f8b0:4864:20::244]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 311E6E7E for ; Wed, 12 Jul 2023 19:33:09 -0700 (PDT) Received: by mail-oi1-x244.google.com with SMTP id 5614622812f47-3a04e5baffcso219267b6e.3 for ; Wed, 12 Jul 2023 19:33:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1689215588; x=1691807588; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=XQe0J58NWoZhAitZmCsyDVCzP91pOflb24L84mzbhP8=; b=aJsfUAMdUH03tv7akJBo1G/BWXkdOvl+m9fdOTX7sVHVqym5QMw/FkNdAYqRPoo0OP HJzrCv9sHOhSfzzvNr//uBtBuP2Knghz2IqX1+YQj1/gMaPNR16peH1kCYPXwgqYyABB evQqcAHaxjZexzxZhmwiIlU9Jo+AAEooufKLxMQEZ5X7dtqARBFD9gQ6IivnPQ4QUJ5j rhGhrKOprwa0Y0NmL8ILEkeP+fEC1HC2MiRX2uGAsBMqAMck4p4Qr5X25GlZIwqkCiuF Zd5k8sIUEUsI9l2jmimgq8M1mRpDovMCOyuXzMKcNgg7C9HltfnGbrPk1TlkbwVZEfsb frdQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1689215588; x=1691807588; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=XQe0J58NWoZhAitZmCsyDVCzP91pOflb24L84mzbhP8=; b=ilR7r1H7/pycIu1L3Cambl+VlsbhNZrPc0KWbqhTM3hVr/dO9K0WUk4VBhl9LCY1zo UeivzkCMPJrTpgy58iH8SPLTyStjuTBPJ0dpIij95of0BAMfVznrQq4GzrGuqHpzVYfj MsCe1mbNOqLRQM5CO2cjKLRxFn8Yvl91VEGNxocxaqnQqAoVCUpwHpwW7pMIbGfx/MAz WQnZGqAJMDq8nT1Z+AKpwxgxxjzjic8T1FdhhT3Gme47RU1C9/imfIZX8K9dLJRXJ3pb nrxAjiAXpH6KF8FxIm8i8481ysOICyajPCzesJbnTjxwI6eWqUHBqIu9tPnHO+CQyOuI L3WQ== X-Gm-Message-State: ABy/qLYkEstuVo72P4Arbz+3H9SQOOEAnw1kwLMRI6H8d7xXJucRODe2 ML7s6ol66R1VIMd08yaas975JPBd0JCR9g== X-Google-Smtp-Source: APBJJlFXoQU2czbktkZDzoxPmZE9kEIJQ6PDuAhFvueATKHwcG/nolSaz047Xe6TuC8PDkUqqDuztQ== X-Received: by 2002:aca:f104:0:b0:3a1:eee6:774f with SMTP id p4-20020acaf104000000b003a1eee6774fmr261106oih.50.1689215587856; Wed, 12 Jul 2023 19:33:07 -0700 (PDT) Received: from localhost ([49.36.211.37]) by smtp.gmail.com with ESMTPSA id v11-20020aa7808b000000b00682a27905b9sm4409762pff.13.2023.07.12.19.33.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jul 2023 19:33:07 -0700 (PDT) From: Kumar Kartikeya 
From patchwork Thu Jul 13 02:32:30 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13311200
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 08/10] bpf: Introduce bpf_set_exception_callback
Date: Thu, 13 Jul 2023 08:02:30 +0530
Message-Id: <20230713023232.1411523-9-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
References: <20230713023232.1411523-1-memxor@gmail.com>

By default, the subprog generated by the verifier to handle a thrown exception hardcodes a return value of 0. To allow user-defined logic and modification of the return value when an exception is thrown, introduce the bpf_set_exception_callback kfunc, which installs a callback as the default exception handler for the program.

Compared to runtime kfuncs, this kfunc acts as a built-in, i.e. it only takes semantic effect during verification, and is erased from the program at runtime.

This kfunc can only be called once within a program, and always sets the global exception handler, regardless of whether it was invoked in all paths of the program or not. The kfunc is idempotent, and the default exception callback cannot be modified at runtime.

Allowing modification of the callback for the current program execution at runtime leads to issues when the programs begin to nest, as any per-CPU state maintaining this information will have to be saved and restored. We don't want it to stay in bpf_prog_aux, as that would take global effect for all programs. An alternative solution is spilling the callback pointer at a known location on the program stack on entry, and then passing this location to bpf_throw as a parameter. However, since exceptions are geared more towards a use case where they are ideally never invoked, optimizing for this use case and adding to the complexity has diminishing returns.
In the future, a variant of bpf_throw which allows supplying a callback can also be introduced, to modify the callback for a certain throw statement. For now, bpf_set_exception_callback is meant to serve as a way to statically set a subprog as the exception handler of a BPF program.

TODO: Should we change the default behavior to just return whatever cookie value was passed to bpf_throw? That might allow people to avoid installing a callback in case they just want to manipulate the return value.

Signed-off-by: Kumar Kartikeya Dwivedi
---
include/linux/bpf_verifier.h | 1 + kernel/bpf/helpers.c | 6 ++ kernel/bpf/verifier.c | 97 +++++++++++++++++-- .../testing/selftests/bpf/bpf_experimental.h | 2 + 4 files changed, 98 insertions(+), 8 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index e28386fa462f..bd9d73b0647e 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -300,6 +300,7 @@ struct bpf_func_state { bool in_callback_fn; struct tnum callback_ret_range; bool in_async_callback_fn; + bool in_exception_callback_fn; /* The following fields should be last. See copy_func_state() */ int acquired_refs;
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index da1493a1da25..a2cb7ebf3d99 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -2439,6 +2439,11 @@ __bpf_kfunc void bpf_throw(u64 cookie) ctx.aux->bpf_exception_cb(cookie, ctx.sp, ctx.bp); } +__bpf_kfunc void bpf_set_exception_callback(int (*cb)(u64)) +{ + WARN_ON_ONCE(1); +} + __diag_pop(); BTF_SET8_START(generic_btf_ids) @@ -2467,6 +2472,7 @@ BTF_ID_FLAGS(func, bpf_task_under_cgroup, KF_RCU) #endif BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_throw) +BTF_ID_FLAGS(func, bpf_set_exception_callback) BTF_SET8_END(generic_btf_ids) static const struct btf_kfunc_id_set generic_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 61101a87d96b..9bdb3c7d3926 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -544,6 +544,8 @@ static bool is_dynptr_ref_function(enum bpf_func_id func_id) static bool is_callback_calling_kfunc(u32 btf_id); static bool is_forbidden_exception_kfunc_in_cb(u32 btf_id); static bool is_bpf_throw_kfunc(struct bpf_insn *insn); +static bool is_async_callback_calling_kfunc(u32 btf_id); +static bool is_exception_cb_kfunc(struct bpf_insn *insn); static bool is_callback_calling_function(enum bpf_func_id func_id) { @@ -3555,7 +3557,8 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx, } else if ((bpf_helper_call(insn) && is_callback_calling_function(insn->imm) && !is_async_callback_calling_function(insn->imm)) || - (bpf_pseudo_kfunc_call(insn) && is_callback_calling_kfunc(insn->imm))) { + (bpf_pseudo_kfunc_call(insn) && is_callback_calling_kfunc(insn->imm) && + !is_async_callback_calling_kfunc(insn->imm))) { /* callback-calling helper or kfunc call, which means * we are exiting from subprog, but unlike the subprog * call handling above, we shouldn't propagate @@ -5665,6 +5668,14 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx) if (subprog[sidx].has_tail_call) { verbose(env, "verifier bug.
subprog has tail_call and async cb\n"); return -EFAULT; + } + if (subprog[sidx].is_exception_cb && bpf_pseudo_call(insn + i)) { + verbose(env, "insn %d cannot call exception cb directly\n", i); + return -EINVAL; + } + if (subprog[sidx].is_exception_cb && subprog[sidx].has_tail_call) { + verbose(env, "insn %d cannot tail call within exception cb\n", i); + return -EINVAL; } /* async callbacks don't increase bpf prog stack size */ continue; @@ -5689,8 +5700,13 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx) * tail call counter throughout bpf2bpf calls combined with tailcalls */ if (tail_call_reachable) - for (j = 0; j < frame; j++) + for (j = 0; j < frame; j++) { + if (subprog[ret_prog[j]].is_exception_cb) { + verbose(env, "cannot tail call within exception cb\n"); + return -EINVAL; + } subprog[ret_prog[j]].tail_call_reachable = true; + } if (subprog[0].tail_call_reachable) env->prog->aux->tail_call_reachable = true; @@ -8770,13 +8786,16 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn } } - if (insn->code == (BPF_JMP | BPF_CALL) && + if ((insn->code == (BPF_JMP | BPF_CALL) && insn->src_reg == 0 && - insn->imm == BPF_FUNC_timer_set_callback) { + insn->imm == BPF_FUNC_timer_set_callback) || + is_exception_cb_kfunc(insn)) { struct bpf_verifier_state *async_cb; /* there is no real recursion here. timer callbacks are async */ env->subprog_info[subprog].is_async_cb = true; + if (is_exception_cb_kfunc(insn)) + env->subprog_info[subprog].is_exception_cb = true; async_cb = push_async_cb(env, env->subprog_info[subprog].start, *insn_idx, subprog); if (!async_cb) @@ -9032,6 +9051,22 @@ static int set_user_ringbuf_callback_state(struct bpf_verifier_env *env, return 0; } +static int set_exception_callback_state(struct bpf_verifier_env *env, + struct bpf_func_state *caller, + struct bpf_func_state *callee, + int insn_idx) +{ + __mark_reg_unknown(env, &callee->regs[BPF_REG_1]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_2]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_3]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_4]); + __mark_reg_not_init(env, &callee->regs[BPF_REG_5]); + callee->in_exception_callback_fn = true; + callee->callback_ret_range = tnum_range(0, 0); + return 0; + +} + static int set_rbtree_add_callback_state(struct bpf_verifier_env *env, struct bpf_func_state *caller, struct bpf_func_state *callee, @@ -10133,6 +10168,7 @@ enum special_kfunc_type { KF_bpf_dynptr_slice_rdwr, KF_bpf_dynptr_clone, KF_bpf_throw, + KF_bpf_set_exception_callback, }; BTF_SET_START(special_kfunc_set) @@ -10154,6 +10190,7 @@ BTF_ID(func, bpf_dynptr_slice) BTF_ID(func, bpf_dynptr_slice_rdwr) BTF_ID(func, bpf_dynptr_clone) BTF_ID(func, bpf_throw) +BTF_ID(func, bpf_set_exception_callback) BTF_SET_END(special_kfunc_set) BTF_ID_LIST(special_kfunc_list) @@ -10177,6 +10214,7 @@ BTF_ID(func, bpf_dynptr_slice) BTF_ID(func, bpf_dynptr_slice_rdwr) BTF_ID(func, bpf_dynptr_clone) BTF_ID(func, bpf_throw) +BTF_ID(func, bpf_set_exception_callback) static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta) { @@ -10487,7 +10525,19 @@ static bool is_bpf_graph_api_kfunc(u32 btf_id) static bool is_callback_calling_kfunc(u32 btf_id) { - return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl]; + return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] || + btf_id == special_kfunc_list[KF_bpf_set_exception_callback]; +} + +static bool is_async_callback_calling_kfunc(u32 btf_id) +{ + return btf_id == 
special_kfunc_list[KF_bpf_set_exception_callback]; +} + +static bool is_exception_cb_kfunc(struct bpf_insn *insn) +{ + return bpf_pseudo_kfunc_call(insn) && insn->off == 0 && + insn->imm == special_kfunc_list[KF_bpf_set_exception_callback]; } static bool is_bpf_throw_kfunc(struct bpf_insn *insn) @@ -10498,7 +10548,8 @@ static bool is_bpf_throw_kfunc(struct bpf_insn *insn) static bool is_forbidden_exception_kfunc_in_cb(u32 btf_id) { - return btf_id == special_kfunc_list[KF_bpf_throw]; + return btf_id == special_kfunc_list[KF_bpf_throw] || + btf_id == special_kfunc_list[KF_bpf_set_exception_callback]; } static bool is_rbtree_lock_required_kfunc(u32 btf_id) @@ -11290,6 +11341,33 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, throw = true; } + if (meta.func_id == special_kfunc_list[KF_bpf_set_exception_callback]) { + if (!bpf_jit_supports_exceptions()) { + verbose(env, "JIT does not support calling kfunc %s#%d\n", + func_name, meta.func_id); + return -EINVAL; + } + + err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, + set_exception_callback_state); + if (err) { + verbose(env, "kfunc %s#%d failed callback verification\n", + func_name, meta.func_id); + return err; + } + if (env->cur_state->frame[0]->subprogno) { + verbose(env, "kfunc %s#%d can only be called from main prog\n", + func_name, meta.func_id); + return -EINVAL; + } + if (env->exception_callback_subprog) { + verbose(env, "kfunc %s#%d can only be called once to set exception callback\n", + func_name, meta.func_id); + return -EINVAL; + } + env->exception_callback_subprog = meta.subprogno; + } + for (i = 0; i < CALLER_SAVED_REGS; i++) mark_reg_not_init(env, regs, caller_saved[i]); @@ -14320,7 +14398,7 @@ static int check_return_code(struct bpf_verifier_env *env) const bool is_subprog = frame->subprogno; /* LSM and struct_ops func-ptr's return type could be "void" */ - if (!is_subprog) { + if (!is_subprog || frame->in_exception_callback_fn) { switch (prog_type) { case BPF_PROG_TYPE_LSM: if (prog->expected_attach_type == BPF_LSM_CGROUP) @@ -14368,7 +14446,7 @@ static int check_return_code(struct bpf_verifier_env *env) return 0; } - if (is_subprog) { + if (is_subprog && !frame->in_exception_callback_fn) { if (reg->type != SCALAR_VALUE) { verbose(env, "At subprogram exit the register R0 is not a scalar value (%s)\n", reg_type_str(env, reg->type)); @@ -18147,6 +18225,9 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) { insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1); *cnt = 1; + } else if (desc->func_id == special_kfunc_list[KF_bpf_set_exception_callback]) { + insn_buf[0] = BPF_JMP_IMM(BPF_JA, 0, 0, 0); + *cnt = 1; } return 0; }
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index f1d7de1349bc..d27e694392a7 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -137,4 +137,6 @@ extern void bpf_throw(u64 cookie) __ksym; #define throw bpf_throw(0) #define throw_value(cookie) bpf_throw(cookie) +extern void bpf_set_exception_callback(int (*cb)(u64)) __ksym; + #endif
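A short usage sketch of the kfunc introduced above, mirroring the selftests added later in this series (hypothetical program names; assumes vmlinux.h and bpf_helpers.h):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

static __noinline int my_exception_cb(u64 cookie)
{
	/* Runs instead of the normal return path after a throw; its
	 * return value becomes the program's return value.
	 */
	return cookie ? (int)cookie : 1;
}

SEC("tc")
int prog_with_handler(struct __sk_buff *ctx)
{
	bpf_set_exception_callback(my_exception_cb);
	if (!ctx->len)
		bpf_throw(42); /* unwinds directly to my_exception_cb(42) */
	return 0;
}

char _license[] SEC("license") = "GPL";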
From patchwork Thu Jul 13 02:32:31 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13311201
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 09/10] selftests/bpf: Add BPF assertion macros
Date: Thu, 13 Jul 2023 08:02:31 +0530
Message-Id: <20230713023232.1411523-10-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
References: <20230713023232.1411523-1-memxor@gmail.com>
Add macros implementing an 'assert' statement primitive, built on top of the BPF exceptions support introduced in previous patches. The bpf_assert_*_value variants allow supplying a value which can then be inspected within the exception handler to identify the assert statement that led to the program being terminated abruptly.

Signed-off-by: Kumar Kartikeya Dwivedi
---
.../testing/selftests/bpf/bpf_experimental.h | 20 +++++++++++++++++++ 1 file changed, 20 insertions(+)
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index d27e694392a7..30774fef455c 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -139,4 +139,24 @@ extern void bpf_throw(u64 cookie) __ksym; extern void bpf_set_exception_callback(int (*cb)(u64)) __ksym; +#define __bpf_assert_op(LHS, op, RHS, VAL) \ + _Static_assert(sizeof(&(LHS)), "1st argument must be an lvalue expression"); \ + _Static_assert(__builtin_constant_p((RHS)), "2nd argument must be a constant expression"); \ + asm volatile ("if %[lhs] " op " %[rhs] goto +2; r1 = %[value]; call bpf_throw" \ + : : [lhs] "r"(LHS), [rhs] "i"(RHS), [value] "ri"(VAL) : ) + +#define bpf_assert_eq(LHS, RHS) __bpf_assert_op(LHS, "==", RHS, 0) +#define bpf_assert_ne(LHS, RHS) __bpf_assert_op(LHS, "!=", RHS, 0) +#define bpf_assert_lt(LHS, RHS) __bpf_assert_op(LHS, "<", RHS, 0) +#define bpf_assert_gt(LHS, RHS) __bpf_assert_op(LHS, ">", RHS, 0) +#define bpf_assert_le(LHS, RHS) __bpf_assert_op(LHS, "<=", RHS, 0) +#define bpf_assert_ge(LHS, RHS) __bpf_assert_op(LHS, ">=", RHS, 0) + +#define bpf_assert_eq_value(LHS, RHS, value) __bpf_assert_op(LHS, "==", RHS, value) +#define bpf_assert_ne_value(LHS, RHS, value) __bpf_assert_op(LHS, "!=", RHS, value) +#define bpf_assert_lt_value(LHS, RHS, value) __bpf_assert_op(LHS, "<", RHS, value) +#define bpf_assert_gt_value(LHS, RHS, value) __bpf_assert_op(LHS, ">", RHS, value) +#define bpf_assert_le_value(LHS, RHS, value) __bpf_assert_op(LHS, "<=", RHS, value) +#define bpf_assert_ge_value(LHS, RHS, value) __bpf_assert_op(LHS, ">=", RHS, value) + #endif
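In use, the macros read like ordinary assertions; a sketch (hypothetical program, assuming bpf_endian.h for __bpf_htons and an exception callback installed as in the previous patch):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include "bpf_experimental.h"

#define ETH_P_IP 0x0800 /* assumed, matches if_ether.h */

SEC("tc")
int assert_example(struct __sk_buff *ctx)
{
	/* Throws with cookie 0 unless the protocol matches. */
	bpf_assert_eq(ctx->protocol, __bpf_htons(ETH_P_IP));
	/* Throws with cookie 10, so the exception callback can tell
	 * which assertion fired.
	 */
	bpf_assert_lt_value(ctx->len, 1 << 16, 10);
	return 1;
}

char _license[] SEC("license") = "GPL";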
From patchwork Thu Jul 13 02:32:32 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13311202
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, David Vernet
Subject: [PATCH bpf-next v1 10/10] selftests/bpf: Add tests for BPF exceptions
Date: Thu, 13 Jul 2023 08:02:32 +0530
Message-Id: <20230713023232.1411523-11-memxor@gmail.com>
In-Reply-To: <20230713023232.1411523-1-memxor@gmail.com>
References: <20230713023232.1411523-1-memxor@gmail.com>
Knk1FlFv3yxoz3CQUvRM4WkF5n6x+ZXnUetygicwmeExqzhs3tXVUMFdh67nYX7xt5ubOFQuiR4 VVlN53JCkZtWeTYGmn4Dxoz0JMEl+S6uqB8iOaJSITKP1MnJ8rgIjdsocOT5cFkgdY1lKtYeGRl DViSUXPuanaehohQBuH150SouPjqFXcfXzQOULArCKqvCgZhCA2oPTAVYm/EVjogdKQ7S6jvhh7 tAM3JieCzvSDbYGviYoiv75PhQD+c6pENMRn3WpIAYM73k/4wnHJbmGFryEkXoCY5grKKESiH0I DGa5vcdrdDv7mzx+44jTvT1ZJhqgd5y7YIJN5Ps14p1QxgVoAXuHq8z7a/rCSRfWWgf6Luf39dH 43GTVWXJd5FZ6cU6QMIfHBmK5KZDLtzaQI6KnDAM6PtbR2gqjAHgVB501CnJNor1e3jXMf0MLMt hhTgBLaYDotnpf58y7YjEb1zRL/w4C1uONwrVcJESHCNaxFXR/Mu/ckkfab3hqUNtqBzfJ/aGny FcDGBW7ZzK6DMsg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net Add selftests to cover success and failure cases of API usage, runtime behavior and invariants that need to be maintained for implementation correctness. Signed-off-by: Kumar Kartikeya Dwivedi --- .../selftests/bpf/prog_tests/exceptions.c | 272 +++++++++++ .../testing/selftests/bpf/progs/exceptions.c | 450 ++++++++++++++++++ .../selftests/bpf/progs/exceptions_ext.c | 42 ++ .../selftests/bpf/progs/exceptions_fail.c | 311 ++++++++++++ 4 files changed, 1075 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/exceptions.c create mode 100644 tools/testing/selftests/bpf/progs/exceptions.c create mode 100644 tools/testing/selftests/bpf/progs/exceptions_ext.c create mode 100644 tools/testing/selftests/bpf/progs/exceptions_fail.c diff --git a/tools/testing/selftests/bpf/prog_tests/exceptions.c b/tools/testing/selftests/bpf/prog_tests/exceptions.c new file mode 100644 index 000000000000..e6a906ef6852 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/exceptions.c @@ -0,0 +1,272 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +#include "exceptions.skel.h" +#include "exceptions_ext.skel.h" +#include "exceptions_fail.skel.h" + +static char log_buf[1024 * 1024]; + +static void test_exceptions_failure(void) +{ + RUN_TESTS(exceptions_fail); +} + +static void test_exceptions_success(void) +{ + LIBBPF_OPTS(bpf_test_run_opts, ropts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 1, + ); + struct exceptions_ext *eskel = NULL; + struct exceptions *skel; + int ret; + + skel = exceptions__open(); + if (!ASSERT_OK_PTR(skel, "exceptions__open")) + return; + + ret = exceptions__load(skel); + if (!ASSERT_OK(ret, "exceptions__load")) + goto done; + + if (!ASSERT_OK(bpf_map_update_elem(bpf_map__fd(skel->maps.jmp_table), &(int){0}, + &(int){bpf_program__fd(skel->progs.exception_tail_call_target)}, BPF_ANY), + "bpf_map_update_elem jmp_table")) + goto done; + +#define RUN_SUCCESS(_prog, return_val) \ + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs._prog), &ropts); \ + ASSERT_OK(ret, #_prog " prog run ret"); \ + ASSERT_EQ(ropts.retval, return_val, #_prog " prog run retval"); + + RUN_SUCCESS(exception_throw_subprog, 16); + RUN_SUCCESS(exception_throw, 0); + RUN_SUCCESS(exception_throw_gfunc1, 1); + RUN_SUCCESS(exception_throw_gfunc2, 0); + RUN_SUCCESS(exception_throw_gfunc3, 1); + RUN_SUCCESS(exception_throw_gfunc4, 0); + RUN_SUCCESS(exception_throw_gfunc5, 1); + RUN_SUCCESS(exception_throw_gfunc6, 16); + RUN_SUCCESS(exception_throw_func1, 1); + 
RUN_SUCCESS(exception_throw_func2, 0); + RUN_SUCCESS(exception_throw_func3, 1); + RUN_SUCCESS(exception_throw_func4, 0); + RUN_SUCCESS(exception_throw_func5, 1); + RUN_SUCCESS(exception_throw_func6, 16); + RUN_SUCCESS(exception_tail_call, 50); + RUN_SUCCESS(exception_ext, 5); + RUN_SUCCESS(exception_throw_value, 60); + RUN_SUCCESS(exception_assert_eq, 16); + RUN_SUCCESS(exception_assert_ne, 16); + RUN_SUCCESS(exception_assert_lt, 16); + RUN_SUCCESS(exception_assert_gt, 16); + RUN_SUCCESS(exception_assert_le, 16); + RUN_SUCCESS(exception_assert_ge, 16); + RUN_SUCCESS(exception_assert_eq_ok, 6); + RUN_SUCCESS(exception_assert_ne_ok, 6); + RUN_SUCCESS(exception_assert_lt_ok, 6); + RUN_SUCCESS(exception_assert_gt_ok, 6); + RUN_SUCCESS(exception_assert_le_ok, 6); + RUN_SUCCESS(exception_assert_ge_ok, 6); + RUN_SUCCESS(exception_assert_eq_value, 42); + RUN_SUCCESS(exception_assert_ne_value, 42); + RUN_SUCCESS(exception_assert_lt_value, 42); + RUN_SUCCESS(exception_assert_gt_value, 42); + RUN_SUCCESS(exception_assert_le_value, 42); + RUN_SUCCESS(exception_assert_ge_value, 42); + RUN_SUCCESS(exception_assert_eq_ok_value, 5); + RUN_SUCCESS(exception_assert_ne_ok_value, 5); + RUN_SUCCESS(exception_assert_lt_ok_value, 5); + RUN_SUCCESS(exception_assert_gt_ok_value, 5); + RUN_SUCCESS(exception_assert_le_ok_value, 5); + RUN_SUCCESS(exception_assert_ge_ok_value, 5); + +#define RUN_EXT(load_ret, attach_err, expr, msg, after_link) \ + { \ + LIBBPF_OPTS(bpf_object_open_opts, o, .kernel_log_buf = log_buf, \ + .kernel_log_size = sizeof(log_buf), \ + .kernel_log_level = 2); \ + exceptions_ext__destroy(eskel); \ + eskel = exceptions_ext__open_opts(&o); \ + struct bpf_program *prog = NULL; \ + struct bpf_link *link = NULL; \ + if (!ASSERT_OK_PTR(eskel, "exceptions_ext__open")) \ + goto done; \ + (expr); \ + ASSERT_OK_PTR(bpf_program__name(prog), bpf_program__name(prog)); \ + if (!ASSERT_EQ(exceptions_ext__load(eskel), load_ret, \ + "exceptions_ext__load")) { \ + printf("%s\n", log_buf); \ + goto done; \ + } \ + if (load_ret != 0) { \ + printf("%s\n", log_buf); \ + if (!ASSERT_OK_PTR(strstr(log_buf, msg), "strstr")) \ + goto done; \ + } \ + if (!load_ret && attach_err) { \ + if (!ASSERT_ERR_PTR(link = bpf_program__attach(prog), "attach err")) \ + goto done; \ + } else if (!load_ret) { \ + if (!ASSERT_OK_PTR(link = bpf_program__attach(prog), "attach ok")) \ + goto done; \ + (void)(after_link); \ + bpf_link__destroy(link); \ + } \ + } + + RUN_EXT(0, false, ({ + prog = eskel->progs.throwing_extension; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_ext), + "exception_ext_global"), "set_attach_target")) + goto done; + }), "", ({ RUN_SUCCESS(exception_ext, 0); })); + + /* non-throwing fexit -> non-throwing subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.pfexit; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* throwing fexit -> non-throwing subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.throwing_fexit; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* non-throwing fexit -> throwing subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.pfexit; + 
bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "throwing_subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* throwing fexit -> throwing subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.throwing_fexit; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "throwing_subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* fmod_ret not allowed for subprog - Check so we remember to handle its + * throwing specification compatibility with target when supported. + */ + RUN_EXT(-EINVAL, false, ({ + prog = eskel->progs.pfmod_ret; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "subprog"), "set_attach_target")) + goto done; + }), "can't modify return codes of BPF program", (void)0); + + /* fmod_ret not allowed for subprog - Check so we remember to handle its + * throwing specification compatibility with target when supported. + */ + RUN_EXT(-EINVAL, false, ({ + prog = eskel->progs.pfmod_ret; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "global_subprog"), "set_attach_target")) + goto done; + }), "can't modify return codes of BPF program", (void)0); + + /* non-throwing extension -> non-throwing subprog : BAD (!global) */ + RUN_EXT(-EINVAL, true, ({ + prog = eskel->progs.extension; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "subprog"), "set_attach_target")) + goto done; + }), "subprog() is not a global function", (void)0); + + /* non-throwing extension -> throwing subprog : BAD (!global) */ + RUN_EXT(-EINVAL, true, ({ + prog = eskel->progs.extension; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "throwing_subprog"), "set_attach_target")) + goto done; + }), "throwing_subprog() is not a global function", (void)0); + + /* non-throwing extension -> non-throwing global subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.extension; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "global_subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* non-throwing extension -> throwing global subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.extension; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "throwing_global_subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* throwing extension -> throwing global subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.throwing_extension; + bpf_program__set_autoload(prog, true); + if (!ASSERT_OK(bpf_program__set_attach_target(prog, + bpf_program__fd(skel->progs.exception_throw_subprog), + "throwing_global_subprog"), "set_attach_target")) + goto done; + }), "", (void)0); + + /* throwing extension -> main subprog : OK */ + RUN_EXT(0, false, ({ + prog = eskel->progs.throwing_extension; + bpf_program__set_autoload(prog, true); + if 
diff --git a/tools/testing/selftests/bpf/progs/exceptions.c b/tools/testing/selftests/bpf/progs/exceptions.c
new file mode 100644
index 000000000000..f8c2727f4584
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions.c
@@ -0,0 +1,450 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_endian.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+#ifndef ETH_P_IP
+#define ETH_P_IP 0x0800 /* Internet Protocol packet */
+#endif
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 4);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+SEC("tc")
+int exception_throw(struct __sk_buff *ctx)
+{
+	volatile int ret = 1;
+
+	if (ctx->protocol)
+		throw;
+	return ret;
+}
+
+static __noinline int subprog(struct __sk_buff *ctx)
+{
+	return ctx->len;
+}
+
+static __noinline int throwing_subprog(struct __sk_buff *ctx)
+{
+	volatile int ret = 0;
+
+	if (ctx->protocol)
+		throw;
+	return ret;
+}
+
+__noinline int global_subprog(struct __sk_buff *ctx)
+{
+	return subprog(ctx) + 1;
+}
+
+__noinline int throwing_global_subprog(struct __sk_buff *ctx)
+{
+	volatile int ret = 0;
+
+	if (ctx->protocol)
+		throw;
+	return ret;
+}
+
+__noinline int throwing_global_subprog_value(struct __sk_buff *ctx, u64 value)
+{
+	volatile int ret = 0;
+
+	if (ctx->protocol)
+		throw_value(value);
+	return ret;
+}
+
+static __noinline int exception_cb(u64 c)
+{
+	volatile int ret = 16;
+
+	return ret;
+}
+
+SEC("tc")
+int exception_throw_subprog(struct __sk_buff *ctx)
+{
+	volatile int i;
+
+	bpf_set_exception_callback(exception_cb);
+	i = subprog(ctx);
+	i += global_subprog(ctx) - 1;
+	if (!i)
+		return throwing_global_subprog(ctx);
+	else
+		return throwing_subprog(ctx);
+	throw;
+	return 0;
+}
+
+__noinline int throwing_gfunc(volatile int i)
+{
+	volatile int ret = 1;
+
+	bpf_assert_eq(i, 0);
+	return ret;
+}
+
+__noinline static int throwing_func(volatile int i)
+{
+	volatile int ret = 1;
+
+	bpf_assert_lt(i, 1);
+	return ret;
+}
+
+SEC("tc")
+int exception_throw_gfunc1(void *ctx)
+{
+	return throwing_gfunc(0);
+}
+
+SEC("tc")
+__noinline int exception_throw_gfunc2()
+{
+	return throwing_gfunc(1);
+}
+
+__noinline int throwing_gfunc_2(volatile int i)
+{
+	return throwing_gfunc(i);
+}
+
+SEC("tc")
+int exception_throw_gfunc3(void *ctx)
+{
+	return throwing_gfunc_2(0);
+}
+
+SEC("tc")
+int exception_throw_gfunc4(void *ctx)
+{
+	return throwing_gfunc_2(1);
+}
+
+SEC("tc")
+int exception_throw_gfunc5(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_gfunc_2(0);
+}
+
+SEC("tc")
+int exception_throw_gfunc6(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_gfunc_2(1);
+}
+
+SEC("tc")
+int exception_throw_func1(void *ctx)
+{
+	return throwing_func(0);
+}
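+
+/* Expected results for the gfunc/func matrix here (see the RUN_SUCCESS list
+ * in prog_tests/exceptions.c): a passing assertion falls through to the
+ * program's normal return value (1); a failing assertion throws, which
+ * produces 0 when no exception callback is set and exception_cb's return
+ * value (16) once one is installed.
+ */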
+
+SEC("tc")
+int exception_throw_func2(void *ctx)
+{
+	return throwing_func(1);
+}
+
+__noinline static int throwing_func_2(volatile int i)
+{
+	return throwing_func(i);
+}
+
+SEC("tc")
+int exception_throw_func3(void *ctx)
+{
+	return throwing_func_2(0);
+}
+
+SEC("tc")
+int exception_throw_func4(void *ctx)
+{
+	return throwing_func_2(1);
+}
+
+SEC("tc")
+int exception_throw_func5(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_func_2(0);
+}
+
+SEC("tc")
+int exception_throw_func6(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_func_2(1);
+}
+
+static int exception_cb_nz(u64 cookie)
+{
+	volatile int ret = 42;
+
+	return ret;
+}
+
+SEC("tc")
+int exception_tail_call_target(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_nz);
+	throw;
+}
+
+static __noinline
+int exception_tail_call_subprog(struct __sk_buff *ctx)
+{
+	volatile int ret = 10;
+
+	bpf_tail_call_static(ctx, &jmp_table, 0);
+	return ret;
+}
+
+SEC("tc")
+int exception_tail_call(struct __sk_buff *ctx)
+{
+	volatile int ret = 0;
+
+	bpf_set_exception_callback(exception_cb);
+	ret = exception_tail_call_subprog(ctx);
+	return ret + 8;
+}
+
+__noinline int exception_ext_global(struct __sk_buff *ctx)
+{
+	volatile int ret = 5;
+
+	return ret;
+}
+
+static __noinline int exception_ext_static(struct __sk_buff *ctx)
+{
+	return exception_ext_global(ctx);
+}
+
+SEC("tc")
+int exception_ext(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_nz);
+	return exception_ext_static(ctx);
+}
+
+static __noinline int exception_cb_value(u64 cookie)
+{
+	return cookie - 4;
+}
+
+SEC("tc")
+int exception_throw_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	return throwing_global_subprog_value(ctx, 64);
+}
+
+SEC("tc")
+int exception_assert_eq(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_eq(ctx->protocol, IPPROTO_UDP);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_ne(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_ne(ctx->protocol, __bpf_htons(ETH_P_IP));
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_lt(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_lt(ctx->protocol, __bpf_htons(ETH_P_IP) - 1);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_gt(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_gt(ctx->protocol, __bpf_htons(ETH_P_IP) + 1);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_le(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_le(ctx->protocol, __bpf_htons(ETH_P_IP) - 1);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_ge(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_ge(ctx->protocol, __bpf_htons(ETH_P_IP) + 1);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_eq_ok(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_eq(ctx->protocol, __bpf_htons(ETH_P_IP));
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_ne_ok(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_ne(ctx->protocol, IPPROTO_UDP);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_lt_ok(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_lt(ctx->protocol, __bpf_htons(ETH_P_IP) + 1);
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_gt_ok(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_gt(ctx->protocol, __bpf_htons(ETH_P_IP) - 1);
+	return 6;
+}
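+
+/* The assert_*_ok programs pick conditions that hold for the IPv4 test
+ * packet (ctx->protocol == __bpf_htons(ETH_P_IP)), so nothing throws and
+ * they return 6. The assert_*_value variants below throw with an explicit
+ * cookie of 46, which exception_cb_value maps to 46 - 4 = 42.
+ */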
+
+SEC("tc")
+int exception_assert_le_ok(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_le(ctx->protocol, __bpf_htons(ETH_P_IP));
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_ge_ok(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_assert_ge(ctx->protocol, __bpf_htons(ETH_P_IP));
+	return 6;
+}
+
+SEC("tc")
+int exception_assert_eq_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_eq_value(ctx->protocol, IPPROTO_UDP, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_ne_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_ne_value(ctx->protocol, __bpf_htons(ETH_P_IP), 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_lt_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_lt_value(ctx->protocol, __bpf_htons(ETH_P_IP) - 1, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_gt_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_gt_value(ctx->protocol, __bpf_htons(ETH_P_IP) + 1, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_le_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_le_value(ctx->protocol, __bpf_htons(ETH_P_IP) - 1, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_ge_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_ge_value(ctx->protocol, __bpf_htons(ETH_P_IP) + 1, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_eq_ok_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_eq_value(ctx->protocol, __bpf_htons(ETH_P_IP), 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_ne_ok_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_ne_value(ctx->protocol, IPPROTO_UDP, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_lt_ok_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_lt_value(ctx->protocol, __bpf_htons(ETH_P_IP) + 1, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_gt_ok_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_gt_value(ctx->protocol, __bpf_htons(ETH_P_IP) - 1, 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_le_ok_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_le_value(ctx->protocol, __bpf_htons(ETH_P_IP), 46);
+	return 5;
+}
+
+SEC("tc")
+int exception_assert_ge_ok_value(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_value);
+	bpf_assert_ge_value(ctx->protocol, __bpf_htons(ETH_P_IP), 46);
+	return 5;
+}
+
+char _license[] SEC("license") = "GPL";
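
The next file supplies the minimal freplace/fexit/fmod_ret programs that the
RUN_EXT matrix in prog_tests/exceptions.c attaches to targets in exceptions.c.
As a sketch, the wiring RUN_EXT performs for one case looks roughly like this
(mirroring the macro, with error handling elided):

	LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf,
		    .kernel_log_size = sizeof(log_buf), .kernel_log_level = 2);
	struct exceptions_ext *eskel = exceptions_ext__open_opts(&opts);
	struct bpf_program *prog = eskel->progs.throwing_extension;

	bpf_program__set_autoload(prog, true);
	bpf_program__set_attach_target(prog,
				       bpf_program__fd(skel->progs.exception_ext),
				       "exception_ext_global");
	if (!exceptions_ext__load(eskel)) {
		struct bpf_link *link = bpf_program__attach(prog);

		if (link)
			bpf_link__destroy(link);
	}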
diff --git a/tools/testing/selftests/bpf/progs/exceptions_ext.c b/tools/testing/selftests/bpf/progs/exceptions_ext.c
new file mode 100644
index 000000000000..9ce9752254bc
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_ext.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_experimental.h"
+
+SEC("?freplace")
+int extension(struct __sk_buff *ctx)
+{
+	return 0;
+}
+
+SEC("?freplace")
+int throwing_extension(struct __sk_buff *ctx)
+{
+	throw;
+}
+
+SEC("?fexit")
+int pfexit(void *ctx)
+{
+	return 0;
+}
+
+SEC("?fexit")
+int throwing_fexit(void *ctx)
+{
+	throw;
+}
+
+SEC("?fmod_ret")
+int pfmod_ret(void *ctx)
+{
+	return 1;
+}
+
+SEC("?fmod_ret")
+int throwing_fmod_ret(void *ctx)
+{
+	throw;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c
new file mode 100644
index 000000000000..94ee6ae452c8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c
@@ -0,0 +1,311 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+extern void bpf_rcu_read_lock(void) __ksym;
+
+#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
+
+struct foo {
+	struct bpf_rb_node node;
+};
+
+private(A) struct bpf_spin_lock lock;
+private(A) struct bpf_rb_root rbtree __contains(foo, node);
+
+__noinline static int subprog_lock(struct __sk_buff *ctx)
+{
+	volatile int ret = 0;
+
+	bpf_spin_lock(&lock);
+	if (ctx->len)
+		throw;
+	return ret;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_with_lock(void *ctx)
+{
+	bpf_spin_lock(&lock);
+	throw;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_subprog_with_lock(void *ctx)
+{
+	return subprog_lock(ctx);
+}
+
+SEC("?tc")
+__failure __msg("bpf_rcu_read_unlock is missing")
+int reject_with_rcu_read_lock(void *ctx)
+{
+	bpf_rcu_read_lock();
+	throw;
+}
+
+__noinline static int throwing_subprog(struct __sk_buff *ctx)
+{
+	if (ctx->len)
+		throw;
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("bpf_rcu_read_unlock is missing")
+int reject_subprog_with_rcu_read_lock(void *ctx)
+{
+	bpf_rcu_read_lock();
+	return throwing_subprog(ctx);
+}
+
+static bool rbless(struct bpf_rb_node *n1, const struct bpf_rb_node *n2)
+{
+	throw;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_with_rbtree_add_throw(void *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	bpf_spin_lock(&lock);
+	bpf_rbtree_add(&rbtree, &f->node, rbless);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_reference(void *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	throw;
+}
+
+__noinline static int subprog_ref(struct __sk_buff *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	throw;
+}
+
+__noinline static int subprog_cb_ref(u32 i, void *ctx)
+{
+	throw;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_cb_reference(void *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	bpf_loop(5, subprog_cb_ref, NULL, 0);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int reject_with_cb(void *ctx)
+{
+	bpf_loop(5, subprog_cb_ref, NULL, 0);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_subprog_reference(void *ctx)
+{
+	return subprog_ref(ctx) + 1;
+}
+
+static __noinline int throwing_exception_cb(u64 c)
+{
+	if (!c)
+		throw;
+	return c;
+}
+
+static __noinline int exception_cb1(u64 c)
+{
+	volatile int i = 0;
+
+	bpf_assert_eq(i, 0);
+	return i;
+}
+
+static __noinline int exception_cb2(u64 c)
+{
+	volatile int i = 0;
+
+	bpf_assert_eq(i, 0);
+	return i;
+}
+
+__noinline int throwing_exception_gfunc(struct __sk_buff *ctx)
+{
+	return throwing_exception_cb(ctx->protocol);
+}
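+
+/* The remaining tests pin down restrictions on bpf_set_exception_callback():
+ * the callback must not itself throw and cannot be called directly, the
+ * setter may only be used once and only from the main program (not from
+ * global subprogs or from other exception callbacks), and the callback's
+ * return value is range checked at program exit.
+ */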
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int reject_throwing_exception_cb_1(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(throwing_exception_cb);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot call exception cb directly")
+int reject_throwing_exception_cb_2(struct __sk_buff *ctx)
+{
+	throwing_exception_gfunc(ctx);
+	bpf_set_exception_callback(throwing_exception_cb);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("can only be called once to set exception callback")
+int reject_throwing_exception_cb_3(struct __sk_buff *ctx)
+{
+	if (ctx->protocol)
+		bpf_set_exception_callback(exception_cb1);
+	else
+		bpf_set_exception_callback(exception_cb2);
+	throw;
+}
+
+__noinline int gfunc_set_exception_cb(u64 c)
+{
+	bpf_set_exception_callback(exception_cb1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("can only be called from main prog")
+int reject_set_exception_cb_gfunc(struct __sk_buff *ctx)
+{
+	gfunc_set_exception_cb(0);
+	return 0;
+}
+
+static __noinline int exception_cb_rec(u64 c)
+{
+	bpf_set_exception_callback(exception_cb_rec);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("can only be called from main prog")
+int reject_set_exception_cb_rec1(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_rec);
+	return 0;
+}
+
+static __noinline int exception_cb_rec2(u64 c);
+
+static __noinline int exception_cb_rec1(u64 c)
+{
+	bpf_set_exception_callback(exception_cb_rec2);
+	return 0;
+}
+
+static __noinline int exception_cb_rec2(u64 c)
+{
+	bpf_set_exception_callback(exception_cb_rec2);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("can only be called from main prog")
+int reject_set_exception_cb_rec2(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_rec1);
+	return 0;
+}
+
+static __noinline int exception_cb_rec3(u64 c)
+{
+	bpf_set_exception_callback(exception_cb1);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("can only be called from main prog")
+int reject_set_exception_cb_rec3(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb_rec3);
+	return 0;
+}
+
+static __noinline int exception_cb_bad_ret(u64 c)
+{
+	return 4242;
+}
+
+SEC("?fentry/bpf_check")
+__failure __msg("At program exit the register R0 has value")
+int reject_set_exception_cb_bad_ret(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb_bad_ret);
+	return 0;
+}
+
+__noinline static int loop_cb1(u32 index, int *ctx)
+{
+	throw;
+	return 0;
+}
+
+__noinline static int loop_cb2(u32 index, int *ctx)
+{
+	throw;
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int reject_exception_throw_cb(struct __sk_buff *ctx)
+{
+	volatile int ret = 1;
+
+	bpf_loop(5, loop_cb1, NULL, 0);
+	return ret;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int exception_throw_cb_diff(struct __sk_buff *ctx)
+{
+	volatile int ret = 1;
+
+	if (ctx->protocol)
+		bpf_loop(5, loop_cb1, NULL, 0);
+	else
+		bpf_loop(5, loop_cb2, NULL, 0);
+	return ret;
+}
+
+char _license[] SEC("license") = "GPL";