From patchwork Tue Aug 23 01:31:17 2022
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Andrii Nakryiko, Daniel Borkmann
Subject: [PATCH bpf v2 1/3] bpf: Move bpf_loop and bpf_for_each_map_elem under CAP_BPF
Date: Tue, 23 Aug 2022 03:31:17 +0200
Message-Id: <20220823013117.24916-1-memxor@gmail.com>
In-Reply-To: <20220823012759.24844-1-memxor@gmail.com>
References: <20220823012759.24844-1-memxor@gmail.com>
List-ID: bpf@vger.kernel.org

They require func_info, which needs prog BTF anyway. Loading BTF and
setting the prog btf_fd while loading the prog indirectly requires
CAP_BPF, so, to reduce confusion, move both of these callback-taking
helpers under bpf_capable() protection as well, since they cannot be
used without CAP_BPF.
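The gating pattern this patch relies on can be illustrated with a small userspace C model (purely illustrative, not kernel code; the enum values and mock_bpf_capable flag are invented stand-ins): helpers handled before the capability check are available to everyone, while helpers moved after it, like the two in this patch, resolve to nothing without CAP_BPF.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Invented stand-ins for real BPF func IDs, for illustration only */
enum func_id { FUNC_MAP_LOOKUP_ELEM, FUNC_FOR_EACH_MAP_ELEM, FUNC_LOOP };

static bool mock_bpf_capable; /* stands in for the kernel's bpf_capable() */

static const char *base_func_proto(enum func_id id)
{
	/* helpers available regardless of capabilities */
	if (id == FUNC_MAP_LOOKUP_ELEM)
		return "bpf_map_lookup_elem_proto";

	if (!mock_bpf_capable)
		return NULL;

	/* CAP_BPF-only helpers; the patch moves these two down here */
	switch (id) {
	case FUNC_FOR_EACH_MAP_ELEM:
		return "bpf_for_each_map_elem_proto";
	case FUNC_LOOP:
		return "bpf_loop_proto";
	default:
		return NULL;
	}
}
```

With the flag cleared, lookups for the callback-taking helpers return NULL, which is the userspace-visible effect of the move.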
Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/helpers.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 1f961f9982d2..d0e80926bac5 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1633,10 +1633,6 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_ringbuf_submit_dynptr_proto;
 	case BPF_FUNC_ringbuf_discard_dynptr:
 		return &bpf_ringbuf_discard_dynptr_proto;
-	case BPF_FUNC_for_each_map_elem:
-		return &bpf_for_each_map_elem_proto;
-	case BPF_FUNC_loop:
-		return &bpf_loop_proto;
 	case BPF_FUNC_strncmp:
 		return &bpf_strncmp_proto;
 	case BPF_FUNC_dynptr_from_mem:
@@ -1675,6 +1671,10 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_timer_cancel_proto;
 	case BPF_FUNC_kptr_xchg:
 		return &bpf_kptr_xchg_proto;
+	case BPF_FUNC_for_each_map_elem:
+		return &bpf_for_each_map_elem_proto;
+	case BPF_FUNC_loop:
+		return &bpf_loop_proto;
 	default:
 		break;
 	}

From patchwork Tue Aug 23 01:31:25 2022
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Andrii Nakryiko, Daniel Borkmann
Subject: [PATCH bpf v2 2/3] bpf: Fix reference state management for synchronous callbacks
Date: Tue, 23 Aug 2022 03:31:25 +0200
Message-Id: <20220823013125.24938-1-memxor@gmail.com>
In-Reply-To: <20220823012759.24844-1-memxor@gmail.com>
References: <20220823012759.24844-1-memxor@gmail.com>
List-ID: bpf@vger.kernel.org

Currently, the verifier verifies callback functions (sync and async) as
if they will be executed once (i.e.
it explores execution state as if the function was being called once).
The next insn to explore is set to the start of the subprog, and the
exit from the nested frame is handled using curframe > 0 and
prepare_func_exit. For async callbacks, a customized variant of
push_stack is used, simulating a kind of branch to set up custom state
and an execution context for the async callback.

While this approach is simple and works when the callback really will
be executed only once, it is unsafe for all of our current helpers,
which are for_each style, i.e. they execute the callback multiple
times. A callback releasing acquired references of the caller may do so
multiple times, but currently the verifier sees it as one call inside
the frame, which then returns to the caller. Hence, it thinks it
released some reference that the callback e.g. got access to through
callback_ctx (a register filled inside the callback from a spilled
typed register on the stack).

Similarly, it may see that an acquire call is unpaired inside the
callback, so the caller will copy the reference state of the callback
and then have to release the registers with the new ref_obj_ids. But
again, the callback may execute multiple times while the verifier only
accounts for the references acquired during a single symbolic execution
of the callback, which will cause leaks.

Note that the async callback case is different. While we currently only
have bpf_timer_set_callback, which executes its callback once, even
multiple executions would be safe, as the reference state starts out
NULL and check_reference_leak would force the program to release state
before BPF_EXIT. The state is also unaffected by the analysis of the
caller frame. Hence the async callback is safe.

Since we want the reference state to be accessible, e.g. for pointers
loaded from the stack through callback_ctx's PTR_TO_STACK, we still
have to copy the caller's reference_state to the callback's
bpf_func_state, but we enforce that whatever references it adds to that
reference_state have been released before it hits BPF_EXIT.
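The acquire/copy/leak-check lifecycle described above can be sketched as a self-contained userspace model (a deliberate simplification with invented struct layouts, not the verifier code): each acquired ref records the frameno of the callback frame that created it (0 otherwise), a callback frame starts from a copy of the caller's refs, and on exit only refs tagged with its own frameno count as leaks.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define MAX_REFS 8

/* Simplified stand-ins for bpf_reference_state / bpf_func_state */
struct ref_state {
	int id;
	int callback_ref; /* frameno of acquiring callback frame, 0 otherwise */
};

struct func_state {
	int frameno;
	int in_callback_fn;
	int acquired_refs;
	struct ref_state refs[MAX_REFS];
};

static int id_gen;

static int acquire_reference_state(struct func_state *st)
{
	struct ref_state *r = &st->refs[st->acquired_refs++];

	r->id = ++id_gen;
	r->callback_ref = st->in_callback_fn ? st->frameno : 0;
	return r->id;
}

/* Copy caller refs into the callback frame so it can use caller pointers */
static void copy_reference_state(struct func_state *dst, const struct func_state *src)
{
	dst->acquired_refs = src->acquired_refs;
	memcpy(dst->refs, src->refs, sizeof(src->refs));
}

/* On exit from a callback, only refs acquired by this frame count as leaks */
static int check_reference_leak(const struct func_state *st)
{
	int i;

	for (i = 0; i < st->acquired_refs; i++) {
		if (st->in_callback_fn && st->refs[i].callback_ref != st->frameno)
			continue; /* caller's ref: allowed to linger */
		return -EINVAL; /* unreleased reference */
	}
	return 0;
}
```

A callback frame that merely inherits the caller's references passes the check; the moment it acquires one of its own and does not release it, the check fails, which is the property the patch enforces.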
This requires introducing a new callback_ref member in the reference
state to distinguish between caller and callee references. Hence,
check_reference_leak now errors out if it sees that we are in a
callback_fn and have not released callback_ref refs. Since there can be
multiple nested callbacks, like frame 0 -> cb1 -> cb2 etc., we also
need to distinguish whether a particular ref belongs to this callback
frame or a parent, and only error out for our own, so we store
state->frameno (which is always non-zero for callbacks).

In short, callbacks can read the parent's reference_state, but cannot
mutate it, so that they can use pointers acquired by the caller. They
must only undo their own changes (by releasing their own acquired_refs
before BPF_EXIT) on top of the caller's reference_state before
returning, at which point the caller and callback state match anyway,
so there is no need to copy it back to the caller.

Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf_verifier.h | 11 ++++++++++
 kernel/bpf/verifier.c        | 42 ++++++++++++++++++++++++++++--------
 2 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 2e3bad8640dc..1fdddbf3546b 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -212,6 +212,17 @@ struct bpf_reference_state {
 	 * is used purely to inform the user of a reference leak.
 	 */
 	int insn_idx;
+	/* There can be a case like:
+	 * main (frame 0)
+	 *  cb (frame 1)
+	 *   func (frame 3)
+	 *    cb (frame 4)
+	 * Hence for frame 4, if callback_ref just stored boolean, it would be
+	 * impossible to distinguish nested callback refs. Hence store the
+	 * frameno and compare that to callback_ref in check_reference_leak when
+	 * exiting a callback function.
+	 */
+	int callback_ref;
 };
 
 /* state of the program:

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 096fdac70165..3e885ba88b02 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1086,6 +1086,7 @@ static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 	id = ++env->id_gen;
 	state->refs[new_ofs].id = id;
 	state->refs[new_ofs].insn_idx = insn_idx;
+	state->refs[new_ofs].callback_ref = state->in_callback_fn ? state->frameno : 0;
 
 	return id;
 }
@@ -1098,6 +1099,9 @@ static int release_reference_state(struct bpf_func_state *state, int ptr_id)
 	last_idx = state->acquired_refs - 1;
 	for (i = 0; i < state->acquired_refs; i++) {
 		if (state->refs[i].id == ptr_id) {
+			/* Cannot release caller references in callbacks */
+			if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
+				return -EINVAL;
 			if (last_idx && i != last_idx)
 				memcpy(&state->refs[i], &state->refs[last_idx],
 				       sizeof(*state->refs));
@@ -6938,10 +6942,17 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 		caller->regs[BPF_REG_0] = *r0;
 	}
 
-	/* Transfer references to the caller */
-	err = copy_reference_state(caller, callee);
-	if (err)
-		return err;
+	/* callback_fn frame should have released its own additions to parent's
+	 * reference state at this point, or check_reference_leak would
+	 * complain, hence it must be the same as the caller. There is no need
+	 * to copy it back.
+	 */
+	if (!callee->in_callback_fn) {
+		/* Transfer references to the caller */
+		err = copy_reference_state(caller, callee);
+		if (err)
+			return err;
+	}
 
 	*insn_idx = callee->callsite + 1;
 	if (env->log.level & BPF_LOG_LEVEL) {
@@ -7065,13 +7076,20 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 static int check_reference_leak(struct bpf_verifier_env *env)
 {
 	struct bpf_func_state *state = cur_func(env);
+	bool refs_lingering = false;
 	int i;
 
+	if (state->frameno && !state->in_callback_fn)
+		return 0;
+
 	for (i = 0; i < state->acquired_refs; i++) {
+		if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
+			continue;
 		verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
 			state->refs[i].id, state->refs[i].insn_idx);
+		refs_lingering = true;
 	}
-	return state->acquired_refs ? -EINVAL : 0;
+	return refs_lingering ? -EINVAL : 0;
 }
 
 static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
@@ -12332,6 +12350,16 @@ static int do_check(struct bpf_verifier_env *env)
 					return -EINVAL;
 				}
 
+				/* We must do check_reference_leak here before
+				 * prepare_func_exit to handle the case when
+				 * state->curframe > 0, it may be a callback
+				 * function, for which reference_state must
+				 * match caller reference state when it exits.
+				 */
+				err = check_reference_leak(env);
+				if (err)
+					return err;
+
 				if (state->curframe) {
 					/* exit from nested function */
 					err = prepare_func_exit(env, &env->insn_idx);
@@ -12341,10 +12369,6 @@ static int do_check(struct bpf_verifier_env *env)
 					continue;
 				}
 
-				err = check_reference_leak(env);
-				if (err)
-					return err;
-
 				err = check_return_code(env);
 				if (err)
 					return err;

From patchwork Tue Aug 23 01:32:26 2022
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Andrii Nakryiko, Daniel Borkmann
Subject: [PATCH bpf v2 3/3] selftests/bpf: Add tests for reference state fixes for callbacks
Date: Tue, 23 Aug 2022 03:32:26 +0200
Message-Id: <20220823013226.24988-1-memxor@gmail.com>
In-Reply-To: <20220823012759.24844-1-memxor@gmail.com>
References: <20220823012759.24844-1-memxor@gmail.com>
List-ID: bpf@vger.kernel.org

These are regression tests to ensure we don't end up in an invalid
runtime state for helpers that execute callbacks
multiple times. They exercise the fixes to verifier callback handling
for reference state in the previous patches.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 .../selftests/bpf/prog_tests/cb_refs.c       |  48 ++++++++
 tools/testing/selftests/bpf/progs/cb_refs.c  | 116 ++++++++++++++++++
 2 files changed, 164 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cb_refs.c
 create mode 100644 tools/testing/selftests/bpf/progs/cb_refs.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cb_refs.c b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
new file mode 100644
index 000000000000..3bff680de16c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "bpf/libbpf.h"
+#include
+#include
+
+#include "cb_refs.skel.h"
+
+static char log_buf[1024 * 1024];
+
+struct {
+	const char *prog_name;
+	const char *err_msg;
+} cb_refs_tests[] = {
+	{ "underflow_prog", "reference has not been acquired before" },
+	{ "leak_prog", "Unreleased reference" },
+	{ "nested_cb", "Unreleased reference id=4 alloc_insn=2" }, /* alloc_insn=2{4,5} */
+	{ "non_cb_transfer_ref", "Unreleased reference id=4 alloc_insn=1" }, /* alloc_insn=1{1,2} */
+};
+
+void test_cb_refs(void)
+{
+	LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf,
+						.kernel_log_size = sizeof(log_buf),
+						.kernel_log_level = 1);
+	struct bpf_program *prog;
+	struct cb_refs *skel;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cb_refs_tests); i++) {
+		LIBBPF_OPTS(bpf_test_run_opts, run_opts,
+			.data_in = &pkt_v4,
+			.data_size_in = sizeof(pkt_v4),
+			.repeat = 1,
+		);
+		skel = cb_refs__open_opts(&opts);
+		if (!ASSERT_OK_PTR(skel, "cb_refs__open_and_load"))
+			return;
+		prog = bpf_object__find_program_by_name(skel->obj, cb_refs_tests[i].prog_name);
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_ERR(cb_refs__load(skel), "cb_refs__load"))
+			bpf_prog_test_run_opts(bpf_program__fd(prog), &run_opts);
+		if (!ASSERT_OK_PTR(strstr(log_buf, cb_refs_tests[i].err_msg), "expected error message")) {
+			fprintf(stderr, "Expected: %s\n", cb_refs_tests[i].err_msg);
+			fprintf(stderr, "Verifier: %s\n", log_buf);
+		}
+		cb_refs__destroy(skel);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/cb_refs.c b/tools/testing/selftests/bpf/progs/cb_refs.c
new file mode 100644
index 000000000000..7653df1bc787
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cb_refs.c
@@ -0,0 +1,116 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+
+struct map_value {
+	struct prog_test_ref_kfunc __kptr_ref *ptr;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, int);
+	__type(value, struct map_value);
+	__uint(max_entries, 16);
+} array_map SEC(".maps");
+
+extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
+extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
+
+static __noinline int cb1(void *map, void *key, void *value, void *ctx)
+{
+	void *p = *(void **)ctx;
+
+	bpf_kfunc_call_test_release(p);
+	/* Without the fix this would cause underflow */
+	return 0;
+}
+
+SEC("?tc")
+int underflow_prog(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long sl = 0;
+
+	p = bpf_kfunc_call_test_acquire(&sl);
+	if (!p)
+		return 0;
+	bpf_for_each_map_elem(&array_map, cb1, &p, 0);
+	return 0;
+}
+
+static __always_inline int cb2(void *map, void *key, void *value, void *ctx)
+{
+	unsigned long sl = 0;
+
+	*(void **)ctx = bpf_kfunc_call_test_acquire(&sl);
+	/* Without the fix this would leak memory */
+	return 0;
+}
+
+SEC("?tc")
+int leak_prog(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	struct map_value *v;
+	unsigned long sl;
+
+	v = bpf_map_lookup_elem(&array_map, &(int){0});
+	if (!v)
+		return 0;
+
+	p = NULL;
+	bpf_for_each_map_elem(&array_map, cb2, &p, 0);
+	p = bpf_kptr_xchg(&v->ptr, p);
+	if (p)
+		bpf_kfunc_call_test_release(p);
+	return 0;
+}
+
+static __always_inline int cb(void *map, void *key, void *value, void *ctx)
+{
+	return 0;
+}
+
+static __always_inline int cb3(void *map, void *key, void *value, void *ctx)
+{
+	unsigned long sl = 0;
+	void *p;
+
+	bpf_kfunc_call_test_acquire(&sl);
+	bpf_for_each_map_elem(&array_map, cb, &p, 0);
+	/* It should only complain here, not in cb. This is why we need
+	 * callback_ref to be set to frameno.
+	 */
+	return 0;
+}
+
+SEC("?tc")
+int nested_cb(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long sl = 0;
+	int sp = 0;
+
+	p = bpf_kfunc_call_test_acquire(&sl);
+	if (!p)
+		return 0;
+	bpf_for_each_map_elem(&array_map, cb3, &sp, 0);
+	bpf_kfunc_call_test_release(p);
+	return 0;
+}
+
+SEC("?tc")
+int non_cb_transfer_ref(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long sl = 0;
+
+	p = bpf_kfunc_call_test_acquire(&sl);
+	if (!p)
+		return 0;
+	cb1(NULL, NULL, NULL, &p);
+	bpf_kfunc_call_test_acquire(&sl);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";