From patchwork Tue May 30 17:27:36 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13260793
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman <eddyz87@gmail.com>
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v2 1/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
Date: Tue, 30 May 2023 20:27:36 +0300
Message-Id: <20230530172739.447290-2-eddyz87@gmail.com>
In-Reply-To: <20230530172739.447290-1-eddyz87@gmail.com>
References: <20230530172739.447290-1-eddyz87@gmail.com>

Make sure that the following unsafe example is rejected by the verifier:

1: r9 = ... some pointer with range X ...
2: r6 = ... unbound scalar ID=a ...
3: r7 = ... unbound scalar ID=b ...
4: if (r6 > r7) goto +1
5: r6 = r7
6: if (r6 > X) goto ...
--- checkpoint ---
7: r9 += r7
8: *(u64 *)r9 = Y

This example is unsafe because not all execution paths verify the range
of r7. Because of the jump at (4) the verifier arrives at (6) in two
states:
I.  r6{.id=b}, r7{.id=b} via path 1-6;
II. r6{.id=a}, r7{.id=b} via path 1-4, 6.

Currently regsafe() does not call check_ids() for scalar registers, so
from the point of view of regsafe() states (I) and (II) are identical.
If path 1-6 is verified first and a checkpoint is created at (6), path
[1-4, 6] is then incorrectly considered safe.

This commit updates regsafe() to call check_ids() for scalar registers.

This change is costly in terms of verification performance. Using
veristat to compare the number of processed states for the selftests
object files listed in tools/testing/selftests/bpf/veristat.cfg and the
Cilium object files from [1] gives the following statistics:

  Filter        | Number of programs
  ----------------------------------
  states_pct>10 | 40
  states_pct>20 | 20
  states_pct>30 | 15
  states_pct>40 | 11

(Out of 177 programs in total.)

In fact, the impact is so bad that in no-alu32 mode the following
test_progs tests no longer pass verification:
- verif_scale2: maximal number of instructions exceeded
- verif_scale3: maximal number of instructions exceeded
- verif_scale_pyperf600: maximal number of instructions exceeded

Additionally:
- verifier_search_pruning/allocated_stack: expected pruning does not
  happen because of differences in register id allocation.

A followup patch addresses these issues.
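The role check_ids() plays in distinguishing states (I) and (II) can be illustrated with a small userspace model. This is a simplified sketch, not the kernel implementation: the flat-array idmap, the names, and the "id 0 marks a free slot" convention are all illustrative assumptions. Two states match only if old ids map one-to-one onto current ids:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the check_ids() idea: ids from the cached (old) state
 * must map one-to-one onto ids of the current state.
 */
struct id_pair {
        unsigned int old_id;
        unsigned int cur_id;
};

static bool check_ids(unsigned int old_id, unsigned int cur_id,
                      struct id_pair *idmap, size_t cap)
{
        size_t i;

        for (i = 0; i < cap; i++) {
                if (idmap[i].old_id == 0) {
                        /* free slot (id 0 = "no id" in this model):
                         * record the new pairing
                         */
                        idmap[i].old_id = old_id;
                        idmap[i].cur_id = cur_id;
                        return true;
                }
                if (idmap[i].old_id == old_id)
                        return idmap[i].cur_id == cur_id;
        }
        return false; /* idmap exhausted */
}
```

For states (I) r6{.id=b}, r7{.id=b} and (II) r6{.id=a}, r7{.id=b}, comparing register by register first records the pairing b -> a (for r6) and then fails for r7, because b cannot simultaneously map to b.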
[1] git@github.com:anakryiko/cilium.git

Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 kernel/bpf/verifier.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index af70dad655ab..9c10f2619c4f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15151,6 +15151,28 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 	switch (base_type(rold->type)) {
 	case SCALAR_VALUE:
+		/* Why check_ids() for scalar registers?
+		 *
+		 * Consider the following BPF code:
+		 * 1: r6 = ... unbound scalar, ID=a ...
+		 * 2: r7 = ... unbound scalar, ID=b ...
+		 * 3: if (r6 > r7) goto +1
+		 * 4: r6 = r7
+		 * 5: if (r6 > X) goto ...
+		 * 6: ... memory operation using r7 ...
+		 *
+		 * First verification path is [1-6]:
+		 * - at (4) same bpf_reg_state::id (b) would be assigned to r6 and r7;
+		 * - at (5) r6 would be marked <= X, find_equal_scalars() would also mark
+		 *   r7 <= X, because r6 and r7 share same id.
+		 *
+		 * Next verification path would start from (5), because of the jump at (3).
+		 * The only state difference between first and second visits of (5) is
+		 * bpf_reg_state::id assignments for r6 and r7: (b, b) vs (a, b).
+		 * Thus, use check_ids() to distinguish these states.
+		 */
+		if (!check_ids(rold->id, rcur->id, idmap))
+			return false;
 		if (regs_exact(rold, rcur, idmap))
 			return true;
 		if (env->explore_alu_limits)

From patchwork Tue May 30 17:27:37 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13260795
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman <eddyz87@gmail.com>
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v2 2/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe()
Date: Tue, 30 May 2023 20:27:37 +0300
Message-Id: <20230530172739.447290-3-eddyz87@gmail.com>
In-Reply-To: <20230530172739.447290-1-eddyz87@gmail.com>
References: <20230530172739.447290-1-eddyz87@gmail.com>
Verify that the following example is rejected by the verifier:

r9 = ... some pointer with range X ...
r6 = ... unbound scalar ID=a ...
r7 = ... unbound scalar ID=b ...
if (r6 > r7) goto +1
r6 = r7
if (r6 > X) goto exit
r9 += r7
*(u64 *)r9 = Y

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c      |   2 +
 .../selftests/bpf/progs/verifier_scalar_ids.c | 108 ++++++++++++++++++
 2 files changed, 110 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_scalar_ids.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 531621adef42..070a13833c3f 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -50,6 +50,7 @@
 #include "verifier_regalloc.skel.h"
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
+#include "verifier_scalar_ids.skel.h"
 #include "verifier_search_pruning.skel.h"
 #include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
@@ -150,6 +151,7 @@ void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); }
 void test_verifier_regalloc(void) { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); }
+void test_verifier_scalar_ids(void) { RUN(verifier_scalar_ids); }
 void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); }
 void test_verifier_sock(void) { RUN(verifier_sock); }
 void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
new file mode 100644
index 000000000000..0ea9a1f6e1ae
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+/* Verify that check_ids() is used by regsafe() for scalars.
+ *
+ * r9 = ... some pointer with range X ...
+ * r6 = ... unbound scalar ID=a ...
+ * r7 = ... unbound scalar ID=b ...
+ * if (r6 > r7) goto +1
+ * r6 = r7
+ * if (r6 > X) goto exit
+ * r9 += r7
+ * *(u8 *)r9 = Y
+ *
+ * The memory access is safe only if r7 is bounded,
+ * which is true for one branch and not true for another.
+ */
+SEC("socket")
+__failure __msg("register with unbounded min value")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void ids_id_mapping_in_regsafe(void)
+{
+	asm volatile (
+	/* Bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -8;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* r6 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	/* if r6 > r7 is an unpredictable jump */
+	"if r6 > r7 goto l1_%=;"
+	"r6 = r7;"
+"l1_%=:"
+	/* a noop, to add a new parent state */
+	"r0 = r0;"
+	/* if r6 > 4 exit(0) */
+	"if r6 > 4 goto l2_%=;"
+	/* Access memory at r9[r7] */
+	"r9 += r7;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Similar to the previous test, but shows that bpf_reg_state::precise
+ * could not be used to filter out registers subject to check_ids() in
+ * verifier.c:regsafe(). At 'l0' register 'r6' does not have the 'precise'
+ * flag set but it is important to have this register in the idmap.
+ */
+SEC("socket")
+__failure __msg("register with unbounded min value")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void ids_id_mapping_in_regsafe_2(void)
+{
+	asm volatile (
+	/* Bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -8;"
+	/* r8 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r8 = r0;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* r6 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	/* scratch .id from r0 */
+	"r0 = 0;"
+	/* if r6 > r7 is an unpredictable jump */
+	"if r6 > r7 goto l1_%=;"
+	/* tie r6 and r7 .id */
+	"r6 = r7;"
+"l0_%=:"
+	/* if r7 > 4 exit(0) */
+	"if r7 > 4 goto l2_%=;"
+	/* Access memory at r9[r6] */
+	"r9 += r6;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+"l1_%=:"
+	/* tie r6 and r8 .id */
+	"r6 = r8;"
+	"goto l0_%=;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";

From patchwork Tue May 30 17:27:38 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13260797
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman <eddyz87@gmail.com>
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v2 3/4] bpf: filter out scalar ids not used for range transfer in regsafe()
Date: Tue, 30 May 2023 20:27:38 +0300
Message-Id: <20230530172739.447290-4-eddyz87@gmail.com>
In-Reply-To: <20230530172739.447290-1-eddyz87@gmail.com>
References: <20230530172739.447290-1-eddyz87@gmail.com>

verifier.c:regsafe() uses check_ids() to verify that scalar register id
assignments match between the cached and current states. This is costly
because many registers might get an id, but not all of them actually
gain range through verifier.c:find_equal_scalars(). For example, in the
following code range is transferred via find_equal_scalars():

1: r0 = call some_func();
2: r1 = r0;              // r0 and r1 get the same id
3: if r0 > 10 goto exit; // r0 and r1 get the same range
... use r1 ...

In this case it is important to verify that r0 and r1 have the same id
if there is ever a jump to (3). However, for the following code the
register id mapping is not important but gets in the way:

1: r6 = ...
2: if ...
   goto <4>;
3: r0 = call some_func(); // r0.id == 0
4: goto <6>;
5: r0 = r6;
6: if r0 > 10 goto exit;  // first visit with r0.id == 0,
                          // second visit with r0.id != 0
... use r0 ...

The jump from (4) to (6) would not be considered safe, and the path
starting from (6) would be processed again because of the mismatch in
the r0 id mapping.

This commit modifies find_equal_scalars() to track which ids were
actually used for range transfer. regsafe() can safely omit id mapping
checks for ids that were never used for range transfer.

This brings back most of the performance lost because of the previous
commit:

$ ./veristat -e file,prog,states -f 'states_pct!=0' \
      -C master-baseline.log current.log
File             Program                States (A)  States (B)  States (DIFF)
---------------  ---------------------  ----------  ----------  -------------
bpf_host.o       cil_from_host                  37          45   +8 (+21.62%)
bpf_sock.o       cil_sock6_connect              99         103    +4 (+4.04%)
bpf_sock.o       cil_sock6_getpeername          56          57    +1 (+1.79%)
bpf_sock.o       cil_sock6_recvmsg              56          57    +1 (+1.79%)
bpf_sock.o       cil_sock6_sendmsg              93          97    +4 (+4.30%)
test_usdt.bpf.o  usdt12                        116         117    +1 (+0.86%)

(As in the previous commit, verification performance is tested for the
object files listed in tools/testing/selftests/bpf/veristat.cfg and the
Cilium object files from [1].)

[1] git@github.com:anakryiko/cilium.git

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 include/linux/bpf_verifier.h |   4 +
 kernel/bpf/Makefile          |   1 +
 kernel/bpf/u32_hashset.c     | 137 +++++++++++++++++++++++++++++++++++
 kernel/bpf/u32_hashset.h     |  30 ++++++++
 kernel/bpf/verifier.c        |  46 ++++++++++--
 5 files changed, 210 insertions(+), 8 deletions(-)
 create mode 100644 kernel/bpf/u32_hashset.c
 create mode 100644 kernel/bpf/u32_hashset.h

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 5b11a3b0fec0..c5bc87403a6f 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -557,6 +557,8 @@ struct backtrack_state {
 	u64 stack_masks[MAX_CALL_FRAMES];
 };
 
+struct u32_hashset;
+
 /* single container for all structs
  * one verifier_env per bpf_check() call
  */
@@ -622,6 +624,8 @@ struct bpf_verifier_env {
 	u32 scratched_regs;
 	/* Same as scratched_regs but for stack slots */
 	u64 scratched_stack_slots;
+	/* set of ids that gain range via find_equal_scalars() */
+	struct u32_hashset *range_transfer_ids;
 	u64 prev_log_pos, prev_insn_print_pos;
 	/* buffer used to generate temporary string representations,
 	 * e.g., in reg_type_str() to generate reg_type string
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 1d3892168d32..8e94e549679e 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -12,6 +12,7 @@ obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_local_storage.o bpf_task_storage.o
 obj-${CONFIG_BPF_LSM} += bpf_inode_storage.o
+obj-$(CONFIG_BPF_SYSCALL) += u32_hashset.o
 obj-$(CONFIG_BPF_SYSCALL) += disasm.o
 obj-$(CONFIG_BPF_JIT) += trampoline.o
 obj-$(CONFIG_BPF_SYSCALL) += btf.o memalloc.o
diff --git a/kernel/bpf/u32_hashset.c b/kernel/bpf/u32_hashset.c
new file mode 100644
index 000000000000..a2c5429e34e1
--- /dev/null
+++ b/kernel/bpf/u32_hashset.c
@@ -0,0 +1,137 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "linux/gfp_types.h"
+#include "linux/random.h"
+#include "linux/slab.h"
+#include <linux/jhash.h>
+
+#include "u32_hashset.h"
+
+static struct u32_hashset_bucket *u32_hashset_put_in_bucket(struct u32_hashset_bucket *bucket,
+							    u32 item)
+{
+	struct u32_hashset_bucket *new_bucket;
+	u32 new_cap = bucket ? 2 * bucket->cap : 1;
+	u32 cnt = bucket ? bucket->cnt : 0;
+	size_t sz;
+
+	if (!bucket || bucket->cnt == bucket->cap) {
+		sz = sizeof(struct u32_hashset_bucket) + sizeof(u32) * new_cap;
+		new_bucket = krealloc(bucket, sz, GFP_KERNEL);
+		if (!new_bucket)
+			return NULL;
+		new_bucket->cap = new_cap;
+	} else {
+		new_bucket = bucket;
+	}
+
+	new_bucket->items[cnt] = item;
+	new_bucket->cnt = cnt + 1;
+
+	return new_bucket;
+}
+
+static bool u32_hashset_needs_to_grow(struct u32_hashset *set)
+{
+	/* grow if empty or more than 75% filled */
+	return (set->buckets_cnt == 0) || ((set->items_cnt + 1) * 4 / 3 > set->buckets_cnt);
+}
+
+static void u32_hashset_free_buckets(struct u32_hashset_bucket **buckets, size_t cnt)
+{
+	size_t bkt;
+
+	for (bkt = 0; bkt < cnt; ++bkt)
+		kfree(buckets[bkt]);
+	kfree(buckets);
+}
+
+static int u32_hashset_grow(struct u32_hashset *set)
+{
+	struct u32_hashset_bucket **new_buckets;
+	size_t new_buckets_cnt;
+	size_t h, bkt, i;
+	u32 item;
+
+	new_buckets_cnt = set->buckets_cnt ? set->buckets_cnt * 2 : 4;
+	new_buckets = kcalloc(new_buckets_cnt, sizeof(new_buckets[0]), GFP_KERNEL);
+	if (!new_buckets)
+		return -ENOMEM;
+
+	for (bkt = 0; bkt < set->buckets_cnt; ++bkt) {
+		if (!set->buckets[bkt])
+			continue;
+
+		for (i = 0; i < set->buckets[bkt]->cnt; ++i) {
+			item = set->buckets[bkt]->items[i];
+			h = jhash_1word(item, set->seed) % new_buckets_cnt;
+			new_buckets[h] = u32_hashset_put_in_bucket(new_buckets[h], item);
+			if (!new_buckets[h])
+				goto nomem;
+		}
+	}
+
+	u32_hashset_free_buckets(set->buckets, set->buckets_cnt);
+	set->buckets_cnt = new_buckets_cnt;
+	set->buckets = new_buckets;
+	return 0;
+
+nomem:
+	u32_hashset_free_buckets(new_buckets, new_buckets_cnt);
+
+	return -ENOMEM;
+}
+
+void u32_hashset_clear(struct u32_hashset *set)
+{
+	u32_hashset_free_buckets(set->buckets, set->buckets_cnt);
+	set->buckets = NULL;
+	set->buckets_cnt = 0;
+	set->items_cnt = 0;
+}
+
+bool u32_hashset_find(const struct u32_hashset *set, const u32 key)
+{
+	struct u32_hashset_bucket *bkt;
+	u32 i, hash;
+
+	if (!set->buckets)
+		return false;
+
+	hash = jhash_1word(key, set->seed) % set->buckets_cnt;
+	bkt = set->buckets[hash];
+	if (!bkt)
+		return false;
+
+	for (i = 0; i < bkt->cnt; ++i)
+		if (bkt->items[i] == key)
+			return true;
+
+	return false;
+}
+
+int u32_hashset_add(struct u32_hashset *set, u32 key)
+{
+	struct u32_hashset_bucket *new_bucket;
+	u32 hash;
+	int err;
+
+	if (u32_hashset_find(set, key))
+		return 0;
+
+	if (u32_hashset_needs_to_grow(set)) {
+		err = u32_hashset_grow(set);
+		if (err)
+			return err;
+	}
+
+	hash = jhash_1word(key, set->seed) % set->buckets_cnt;
+	new_bucket = u32_hashset_put_in_bucket(set->buckets[hash], key);
+	if (!new_bucket)
+		return -ENOMEM;
+
+	set->buckets[hash] = new_bucket;
+	set->items_cnt++;
+
+	return 0;
+}
diff --git a/kernel/bpf/u32_hashset.h b/kernel/bpf/u32_hashset.h
new file mode 100644
index 000000000000..76f03e2e4f14
--- /dev/null
+++ b/kernel/bpf/u32_hashset.h
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/* A hashset for u32 values, based on tools/lib/bpf/hashmap.h */
+
+#ifndef __U32_HASHSET_H__
+#define __U32_HASHSET_H__
+
+#include "linux/gfp_types.h"
+#include "linux/random.h"
+#include "linux/slab.h"
+#include <linux/types.h>
+
+struct u32_hashset_bucket {
+	u32 cnt;
+	u32 cap;
+	u32 items[];
+};
+
+struct u32_hashset {
+	struct u32_hashset_bucket **buckets;
+	size_t buckets_cnt;
+	size_t items_cnt;
+	u32 seed;
+};
+
+void u32_hashset_clear(struct u32_hashset *set);
+bool u32_hashset_find(const struct u32_hashset *set, const u32 key);
+int u32_hashset_add(struct u32_hashset *set, u32 key);
+
+#endif /* __U32_HASHSET_H__ */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9c10f2619c4f..0d3a695aa4da 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -27,6 +27,7 @@
 #include
 
 #include "disasm.h"
+#include "u32_hashset.h"
 
 static const struct bpf_verifier_ops * const bpf_verifier_ops[] = {
 #define BPF_PROG_TYPE(_id, _name, prog_ctx_type, kern_ctx_type) \
@@ -13629,16 +13630,25 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 	return true;
 }
 
-static void find_equal_scalars(struct bpf_verifier_state *vstate,
-			       struct bpf_reg_state *known_reg)
+static int find_equal_scalars(struct bpf_verifier_env *env,
+			      struct bpf_verifier_state *vstate,
+			      struct bpf_reg_state *known_reg)
 {
 	struct bpf_func_state *state;
 	struct bpf_reg_state *reg;
+	int err = 0, count = 0;
 
 	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
-		if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
+		if (reg->type == SCALAR_VALUE && reg->id == known_reg->id) {
 			copy_register_state(reg, known_reg);
+			++count;
+		}
 	}));
+
+	if (count > 1)
+		err = u32_hashset_add(env->range_transfer_ids, known_reg->id);
+
+	return err;
 }
 
 static int check_cond_jmp_op(struct bpf_verifier_env *env,
@@ -13803,8 +13813,13 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 				      src_reg, dst_reg, opcode);
 		if (src_reg->id &&
 		    !WARN_ON_ONCE(src_reg->id != other_branch_regs[insn->src_reg].id)) {
-			find_equal_scalars(this_branch, src_reg);
-			find_equal_scalars(other_branch, &other_branch_regs[insn->src_reg]);
+			err = find_equal_scalars(env, this_branch, src_reg);
+			if (err)
+				return err;
+			err = find_equal_scalars(env, other_branch,
+						 &other_branch_regs[insn->src_reg]);
+			if (err)
+				return err;
 		}
 	}
 
@@ -13816,8 +13831,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	if (dst_reg->type == SCALAR_VALUE && dst_reg->id &&
 	    !WARN_ON_ONCE(dst_reg->id != other_branch_regs[insn->dst_reg].id)) {
-		find_equal_scalars(this_branch, dst_reg);
-		find_equal_scalars(other_branch, &other_branch_regs[insn->dst_reg]);
+		err = find_equal_scalars(env, this_branch, dst_reg);
+		if (err)
+			return err;
+		err = find_equal_scalars(env, other_branch, &other_branch_regs[insn->dst_reg]);
+		if (err)
+			return err;
 	}
 
 	/* if one pointer register is compared to another pointer
@@ -15170,8 +15189,13 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 		 * The only state difference between first and second visits of (5) is
 		 * bpf_reg_state::id assignments for r6 and r7: (b, b) vs (a, b).
 		 * Thus, use check_ids() to distinguish these states.
+		 *
+		 * All children states of 'rold' are already verified.
+		 * Thus env->range_transfer_ids contains all ids that gained range via
+		 * find_equal_scalars() during children verification.
 		 */
-		if (!check_ids(rold->id, rcur->id, idmap))
+		if (u32_hashset_find(env->range_transfer_ids, rold->id) &&
+		    !check_ids(rold->id, rcur->id, idmap))
 			return false;
 		if (regs_exact(rold, rcur, idmap))
 			return true;
@@ -19289,6 +19313,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	if (!env->explored_states)
 		goto skip_full_check;
 
+	env->range_transfer_ids = kzalloc(sizeof(*env->range_transfer_ids), GFP_KERNEL);
+	if (!env->range_transfer_ids)
+		goto skip_full_check;
+
 	ret = add_subprog_and_kfunc(env);
 	if (ret < 0)
 		goto skip_full_check;
@@ -19327,6 +19355,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 
 skip_full_check:
 	kvfree(env->explored_states);
+	u32_hashset_clear(env->range_transfer_ids);
+	kvfree(env->range_transfer_ids);
 
 	if (ret == 0)
 		ret = check_max_stack_depth(env);

From patchwork Tue May 30 17:27:39 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13260796
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v2 4/4] selftests/bpf: check that env->range_transfer_ids has effect
Date: Tue, 30 May 2023 20:27:39 +0300
Message-Id: <20230530172739.447290-5-eddyz87@gmail.com>
In-Reply-To: <20230530172739.447290-1-eddyz87@gmail.com>
References: <20230530172739.447290-1-eddyz87@gmail.com>

The previous commit added a bpf_verifier_env::range_transfer_ids check to
verifier.c:regsafe(). This check allows skipping check_ids() for some ids
in the cached verifier state and thus improves verification performance.

This commit adds two test cases:
- the first shows that check_ids() is indeed skipped as expected;
- the second is a modification of the first where check_ids() cannot be
  skipped.
Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_scalar_ids.c | 106 ++++++++++++++++++
 1 file changed, 106 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
index 0ea9a1f6e1ae..2c5bb72696ce 100644
--- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
+++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
@@ -105,4 +105,110 @@ __naked void ids_id_mapping_in_regsafe_2(void)
	: __clobber_all);
 }

+/* Label l1 can be reached with two id combinations:
+ *
+ * (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
+ * (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
+ *
+ * However, neither A nor B is used by find_equal_scalars()
+ * to transfer range information in this test.
+ * Thus states (1) and (2) should be considered identical due
+ * to bpf_verifier_env::range_transfer_ids handling.
+ *
+ * Make sure that this is the case by checking that the second jump
+ * to l1 hits the cached state.
+ */
+SEC("socket")
+__success __log_level(7) __msg("14: safe")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void no_range_transfer_ids(void)
+{
+	asm volatile (
+	/* bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 16) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -16;"
+	/* r8 = ktime_get_ns() & 0b11 */
+	"call %[bpf_ktime_get_ns];"
+	"r8 = r0;"
+	"r8 &= 3;"
+	/* r7 = ktime_get_ns() & 0b11 */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	"r7 &= 3;"
+	/* if r7 > r8 is an unpredictable jump */
+	"if r7 > r8 goto l0_%=;"
+	"r6 = r7;"
+	"goto l1_%=;"
+"l0_%=:"
+	"r6 = r8;"
+"l1_%=:"
+	/* insn #14 */
+	"r9 += r6;"
+	"r9 += r7;"
+	"r9 += r8;"
+	"r0 = *(u8*)(r9 + 0);"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as above, but the cached state at l1 has an id used for
+ * range transfer:
+ *
+ * (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
+ * (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
+ *
+ * If (A) is used for range transfer, (1) and (2) should not
+ * be considered identical.
+ *
+ * Check this by verifying that the instruction immediately
+ * following l1 is visited twice.
+ */
+SEC("socket")
+__success __log_level(7) __msg("r9 = r9") __msg("r9 = r9")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void has_range_transfer_ids(void)
+{
+	asm volatile (
+	/* bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 16) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -16;"
+	/* r8 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r8 = r0;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* if r7 > r8 is an unpredictable jump */
+	"if r7 > r8 goto l0_%=;"
+	"r6 = r7;"
+	"goto l1_%=;"
+"l0_%=:"
+	"r6 = r8;"
+"l1_%=:"
+	/* just a unique marker, this insn should be verified twice */
+	"r9 = r9;"
+	/* one of the instructions below transfers range for r6 */
+	"if r7 > 2 goto l2_%=;"
+	"if r8 > 2 goto l2_%=;"
+	"r9 += r6;"
+	"r9 += r7;"
+	"r9 += r8;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";