From patchwork Fri Apr 21 17:42:11 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13220509
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
 kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next 01/24] selftests/bpf: Add notion of auxiliary programs for test_loader
Date: Fri, 21 Apr 2023 20:42:11 +0300
Message-Id: <20230421174234.2391278-2-eddyz87@gmail.com>
In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com>
References: <20230421174234.2391278-1-eddyz87@gmail.com>

In order to express test cases that use the bpf_tail_call() intrinsic,
it is necessary to have several programs loaded at a time. This commit
adds the __auxiliary annotation to the set of annotations supported by
test_loader.c. Programs marked as auxiliary are always loaded, but are
not treated as separate tests. For example:

    void dummy_prog1(void);

    struct {
            __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
            __uint(max_entries, 4);
            __uint(key_size, sizeof(int));
            __array(values, void (void));
    } prog_map SEC(".maps") = {
            .values = {
                    [0] = (void *) &dummy_prog1,
            },
    };

    SEC("tc")
    __auxiliary
    __naked void dummy_prog1(void)
    {
            asm volatile ("r0 = 42; exit;");
    }

    SEC("tc")
    __description("reference tracking: check reference or tail call")
    __success __retval(0)
    __naked void check_reference_or_tail_call(void)
    {
            asm volatile (
                    "r2 = %[prog_map] ll;"
                    "r3 = 0;"
                    "call %[bpf_tail_call];"
                    "r0 = 0;"
                    "exit;"
                    :
                    : __imm(bpf_tail_call),
                      __imm_addr(prog_map)
                    : __clobber_all);
    }

Signed-off-by: Eduard Zingerman
---
 tools/testing/selftests/bpf/progs/bpf_misc.h |  6 ++
 tools/testing/selftests/bpf/test_loader.c    | 89 +++++++++++++++-----
 2 files changed, 73 insertions(+), 22 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index 3b307de8dab9..d3c1217ba79a 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -53,6 +53,10 @@
  *                    - A numeric value.
  *                    Multiple __flag attributes could be specified, the final flags
  *                    value is derived by applying binary "or" to all specified values.
+ *
+ * __auxiliary        Annotated program is not a separate test, but used as auxiliary
+ *                    for some other test cases and should always be loaded.
+ * __auxiliary_unpriv Same, but load program in unprivileged mode.
*/ #define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" msg))) #define __failure __attribute__((btf_decl_tag("comment:test_expect_failure"))) @@ -65,6 +69,8 @@ #define __flag(flag) __attribute__((btf_decl_tag("comment:test_prog_flags="#flag))) #define __retval(val) __attribute__((btf_decl_tag("comment:test_retval="#val))) #define __retval_unpriv(val) __attribute__((btf_decl_tag("comment:test_retval_unpriv="#val))) +#define __auxiliary __attribute__((btf_decl_tag("comment:test_auxiliary"))) +#define __auxiliary_unpriv __attribute__((btf_decl_tag("comment:test_auxiliary_unpriv"))) /* Convenience macro for use with 'asm volatile' blocks */ #define __naked __attribute__((naked)) diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c index 40c9b7d532c4..b4edd8454934 100644 --- a/tools/testing/selftests/bpf/test_loader.c +++ b/tools/testing/selftests/bpf/test_loader.c @@ -25,6 +25,8 @@ #define TEST_TAG_DESCRIPTION_PFX "comment:test_description=" #define TEST_TAG_RETVAL_PFX "comment:test_retval=" #define TEST_TAG_RETVAL_PFX_UNPRIV "comment:test_retval_unpriv=" +#define TEST_TAG_AUXILIARY "comment:test_auxiliary" +#define TEST_TAG_AUXILIARY_UNPRIV "comment:test_auxiliary_unpriv" /* Warning: duplicated in bpf_misc.h */ #define POINTER_VALUE 0xcafe4all @@ -59,6 +61,8 @@ struct test_spec { int log_level; int prog_flags; int mode_mask; + bool auxiliary; + bool valid; }; static int tester_init(struct test_loader *tester) @@ -87,6 +91,11 @@ static void free_test_spec(struct test_spec *spec) free(spec->unpriv.name); free(spec->priv.expect_msgs); free(spec->unpriv.expect_msgs); + + spec->priv.name = NULL; + spec->unpriv.name = NULL; + spec->priv.expect_msgs = NULL; + spec->unpriv.expect_msgs = NULL; } static int push_msg(const char *msg, struct test_subspec *subspec) @@ -204,6 +213,12 @@ static int parse_test_spec(struct test_loader *tester, spec->unpriv.expect_failure = false; spec->mode_mask |= UNPRIV; has_unpriv_result = true; + } else if (strcmp(s, TEST_TAG_AUXILIARY) == 0) { + spec->auxiliary = true; + spec->mode_mask |= PRIV; + } else if (strcmp(s, TEST_TAG_AUXILIARY_UNPRIV) == 0) { + spec->auxiliary = true; + spec->mode_mask |= UNPRIV; } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) { msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1; err = push_msg(msg, &spec->priv); @@ -314,6 +329,8 @@ static int parse_test_spec(struct test_loader *tester, } } + spec->valid = true; + return 0; cleanup: @@ -516,16 +533,18 @@ void run_subtest(struct test_loader *tester, struct bpf_object_open_opts *open_opts, const void *obj_bytes, size_t obj_byte_cnt, + struct test_spec *specs, struct test_spec *spec, bool unpriv) { struct test_subspec *subspec = unpriv ? 
&spec->unpriv : &spec->priv; + struct bpf_program *tprog, *tprog_iter; + struct test_spec *spec_iter; struct cap_state caps = {}; - struct bpf_program *tprog; struct bpf_object *tobj; struct bpf_map *map; - int retval; - int err; + int retval, err, i; + bool should_load; if (!test__start_subtest(subspec->name)) return; @@ -546,15 +565,23 @@ void run_subtest(struct test_loader *tester, if (!ASSERT_OK_PTR(tobj, "obj_open_mem")) /* shouldn't happen */ goto subtest_cleanup; - bpf_object__for_each_program(tprog, tobj) - bpf_program__set_autoload(tprog, false); + i = 0; + bpf_object__for_each_program(tprog_iter, tobj) { + spec_iter = &specs[i++]; + should_load = false; + + if (spec_iter->valid) { + if (strcmp(bpf_program__name(tprog_iter), spec->prog_name) == 0) { + tprog = tprog_iter; + should_load = true; + } - bpf_object__for_each_program(tprog, tobj) { - /* only load specified program */ - if (strcmp(bpf_program__name(tprog), spec->prog_name) == 0) { - bpf_program__set_autoload(tprog, true); - break; + if (spec_iter->auxiliary && + spec_iter->mode_mask & (unpriv ? UNPRIV : PRIV)) + should_load = true; } + + bpf_program__set_autoload(tprog_iter, should_load); } prepare_case(tester, spec, tobj, tprog); @@ -617,11 +644,12 @@ static void process_subtest(struct test_loader *tester, skel_elf_bytes_fn elf_bytes_factory) { LIBBPF_OPTS(bpf_object_open_opts, open_opts, .object_name = skel_name); + struct test_spec *specs = NULL; struct bpf_object *obj = NULL; struct bpf_program *prog; const void *obj_bytes; + int err, i, nr_progs; size_t obj_byte_cnt; - int err; if (tester_init(tester) < 0) return; /* failed to initialize tester */ @@ -631,25 +659,42 @@ static void process_subtest(struct test_loader *tester, if (!ASSERT_OK_PTR(obj, "obj_open_mem")) return; - bpf_object__for_each_program(prog, obj) { - struct test_spec spec; + nr_progs = 0; + bpf_object__for_each_program(prog, obj) + ++nr_progs; + + specs = calloc(nr_progs, sizeof(struct test_spec)); + if (!ASSERT_OK_PTR(specs, "Can't alloc specs array")) + return; - /* if we can't derive test specification, go to the next test */ - err = parse_test_spec(tester, obj, prog, &spec); - if (err) { + i = 0; + bpf_object__for_each_program(prog, obj) { + /* ignore tests for which we can't derive test specification */ + err = parse_test_spec(tester, obj, prog, &specs[i++]); + if (err) PRINT_FAIL("Can't parse test spec for program '%s'\n", bpf_program__name(prog)); + } + + i = 0; + bpf_object__for_each_program(prog, obj) { + struct test_spec *spec = &specs[i++]; + + if (!spec->valid || spec->auxiliary) continue; - } - if (spec.mode_mask & PRIV) - run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, &spec, false); - if (spec.mode_mask & UNPRIV) - run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, &spec, true); + if (spec->mode_mask & PRIV) + run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, + specs, spec, false); + if (spec->mode_mask & UNPRIV) + run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, + specs, spec, true); - free_test_spec(&spec); } + for (i = 0; i < nr_progs; ++i) + free_test_spec(&specs[i]); + free(specs); bpf_object__close(obj); } From patchwork Fri Apr 21 17:42:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220514 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
 kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next 02/24] selftests/bpf: verifier/bounds converted to inline assembly
Date: Fri, 21 Apr 2023 20:42:12 +0300
Message-Id: <20230421174234.2391278-3-eddyz87@gmail.com>
In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com>
References: <20230421174234.2391278-1-eddyz87@gmail.com>

Test verifier/bounds automatically converted to use inline assembly.
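
To illustrate the shape of the conversion (the snippet below is
illustrative only and is not part of the diff; the test name and program
body are made up), an entry in the old macro-based format such as:

    {
            "example test",
            .insns = {
                    BPF_MOV64_IMM(BPF_REG_0, 0),
                    BPF_EXIT_INSN(),
            },
            .result = ACCEPT,
    },

becomes a __naked function whose body is the same program written as
inline BPF assembly, with the expectations expressed via bpf_misc.h
annotations:

    SEC("socket")
    __description("example test")
    __success __retval(0)
    __naked void example_test(void)
    {
            asm volatile ("r0 = 0; exit;" ::: __clobber_all);
    }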
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_bounds.c | 1076 +++++++++++++++++ tools/testing/selftests/bpf/verifier/bounds.c | 884 -------------- 3 files changed, 1078 insertions(+), 884 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_bounds.c delete mode 100644 tools/testing/selftests/bpf/verifier/bounds.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 7c68d78da9ea..e61c9120e261 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -6,6 +6,7 @@ #include "verifier_and.skel.h" #include "verifier_array_access.skel.h" #include "verifier_basic_stack.skel.h" +#include "verifier_bounds.skel.h" #include "verifier_bounds_deduction.skel.h" #include "verifier_bounds_deduction_non_const.skel.h" #include "verifier_bounds_mix_sign_unsign.skel.h" @@ -80,6 +81,7 @@ static void run_tests_aux(const char *skel_name, void test_verifier_and(void) { RUN(verifier_and); } void test_verifier_basic_stack(void) { RUN(verifier_basic_stack); } +void test_verifier_bounds(void) { RUN(verifier_bounds); } void test_verifier_bounds_deduction(void) { RUN(verifier_bounds_deduction); } void test_verifier_bounds_deduction_non_const(void) { RUN(verifier_bounds_deduction_non_const); } void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); } diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c new file mode 100644 index 000000000000..c5588a14fe2e --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c @@ -0,0 +1,1076 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/bounds.c */ + +#include +#include +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("subtraction bounds (map value) variant 1") +__failure __msg("R0 max value is outside of the allowed memory range") +__failure_unpriv +__naked void bounds_map_value_variant_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + if r1 > 0xff goto l0_%=; \ + r3 = *(u8*)(r0 + 1); \ + if r3 > 0xff goto l0_%=; \ + r1 -= r3; \ + r1 >>= 56; \ + r0 += r1; \ + r0 = *(u8*)(r0 + 0); \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("subtraction bounds (map value) variant 2") +__failure +__msg("R0 min value is negative, either use unsigned index or do a if (index >=0) check.") +__msg_unpriv("R1 has unknown scalar with mixed signed bounds") +__naked void bounds_map_value_variant_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + if r1 > 0xff goto l0_%=; \ + r3 = *(u8*)(r0 + 1); \ + if r3 > 0xff goto l0_%=; \ + r1 -= r3; \ + r0 += r1; \ + r0 = *(u8*)(r0 + 0); \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("check 
subtraction on pointers for unpriv") +__success __failure_unpriv __msg_unpriv("R9 pointer -= pointer prohibited") +__retval(0) +__naked void subtraction_on_pointers_for_unpriv(void) +{ + asm volatile (" \ + r0 = 0; \ + r1 = %[map_hash_8b] ll; \ + r2 = r10; \ + r2 += -8; \ + r6 = 9; \ + *(u64*)(r2 + 0) = r6; \ + call %[bpf_map_lookup_elem]; \ + r9 = r10; \ + r9 -= r0; \ + r1 = %[map_hash_8b] ll; \ + r2 = r10; \ + r2 += -8; \ + r6 = 0; \ + *(u64*)(r2 + 0) = r6; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: *(u64*)(r0 + 0) = r9; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check based on zero-extended MOV") +__success __success_unpriv __retval(0) +__naked void based_on_zero_extended_mov(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r2 = 0x0000'0000'ffff'ffff */ \ + w2 = 0xffffffff; \ + /* r2 = 0 */ \ + r2 >>= 32; \ + /* no-op */ \ + r0 += r2; \ + /* access at offset 0 */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check based on sign-extended MOV. test1") +__failure __msg("map_value pointer and 4294967295") +__failure_unpriv +__naked void on_sign_extended_mov_test1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r2 = 0xffff'ffff'ffff'ffff */ \ + r2 = 0xffffffff; \ + /* r2 = 0xffff'ffff */ \ + r2 >>= 32; \ + /* r0 = */ \ + r0 += r2; \ + /* access to OOB pointer */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check based on sign-extended MOV. test2") +__failure __msg("R0 min value is outside of the allowed memory range") +__failure_unpriv +__naked void on_sign_extended_mov_test2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r2 = 0xffff'ffff'ffff'ffff */ \ + r2 = 0xffffffff; \ + /* r2 = 0xfff'ffff */ \ + r2 >>= 36; \ + /* r0 = */ \ + r0 += r2; \ + /* access to OOB pointer */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("bounds check based on reg_off + var_off + insn_off. test1") +__failure __msg("value_size=8 off=1073741825") +__naked void var_off_insn_off_test1(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_mark]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r6 &= 1; \ + r6 += %[__imm_0]; \ + r0 += r6; \ + r0 += %[__imm_0]; \ +l0_%=: r0 = *(u8*)(r0 + 3); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__imm_0, (1 << 29) - 1), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("bounds check based on reg_off + var_off + insn_off. 
test2") +__failure __msg("value 1073741823") +__naked void var_off_insn_off_test2(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_mark]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r6 &= 1; \ + r6 += %[__imm_0]; \ + r0 += r6; \ + r0 += %[__imm_1]; \ +l0_%=: r0 = *(u8*)(r0 + 3); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__imm_0, (1 << 30) - 1), + __imm_const(__imm_1, (1 << 29) - 1), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after truncation of non-boundary-crossing range") +__success __success_unpriv __retval(0) +__naked void of_non_boundary_crossing_range(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r1 = [0x00, 0xff] */ \ + r1 = *(u8*)(r0 + 0); \ + r2 = 1; \ + /* r2 = 0x10'0000'0000 */ \ + r2 <<= 36; \ + /* r1 = [0x10'0000'0000, 0x10'0000'00ff] */ \ + r1 += r2; \ + /* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */ \ + r1 += 0x7fffffff; \ + /* r1 = [0x00, 0xff] */ \ + w1 -= 0x7fffffff; \ + /* r1 = 0 */ \ + r1 >>= 8; \ + /* no-op */ \ + r0 += r1; \ + /* access at offset 0 */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after truncation of boundary-crossing range (1)") +__failure +/* not actually fully unbounded, but the bound is very high */ +__msg("value -4294967168 makes map_value pointer be out of bounds") +__failure_unpriv +__naked void of_boundary_crossing_range_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r1 = [0x00, 0xff] */ \ + r1 = *(u8*)(r0 + 0); \ + r1 += %[__imm_0]; \ + /* r1 = [0xffff'ff80, 0x1'0000'007f] */ \ + r1 += %[__imm_0]; \ + /* r1 = [0xffff'ff80, 0xffff'ffff] or \ + * [0x0000'0000, 0x0000'007f] \ + */ \ + w1 += 0; \ + r1 -= %[__imm_0]; \ + /* r1 = [0x00, 0xff] or \ + * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]\ + */ \ + r1 -= %[__imm_0]; \ + /* error on OOB pointer computation */ \ + r0 += r1; \ + /* exit */ \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__imm_0, 0xffffff80 >> 1) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after truncation of boundary-crossing range (2)") +__failure __msg("value -4294967168 makes map_value pointer be out of bounds") +__failure_unpriv +__naked void of_boundary_crossing_range_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r1 = [0x00, 0xff] */ \ + r1 = *(u8*)(r0 + 0); \ + r1 += %[__imm_0]; \ + /* r1 = [0xffff'ff80, 0x1'0000'007f] */ \ + r1 += %[__imm_0]; \ + /* r1 = [0xffff'ff80, 0xffff'ffff] or \ + * [0x0000'0000, 0x0000'007f] \ + * difference to previous test: truncation via MOV32\ + * instead of ALU32. 
\ + */ \ + w1 = w1; \ + r1 -= %[__imm_0]; \ + /* r1 = [0x00, 0xff] or \ + * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]\ + */ \ + r1 -= %[__imm_0]; \ + /* error on OOB pointer computation */ \ + r0 += r1; \ + /* exit */ \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__imm_0, 0xffffff80 >> 1) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after wrapping 32-bit addition") +__success __success_unpriv __retval(0) +__naked void after_wrapping_32_bit_addition(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r1 = 0x7fff'ffff */ \ + r1 = 0x7fffffff; \ + /* r1 = 0xffff'fffe */ \ + r1 += 0x7fffffff; \ + /* r1 = 0 */ \ + w1 += 2; \ + /* no-op */ \ + r0 += r1; \ + /* access at offset 0 */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after shift with oversized count operand") +__failure __msg("R0 max value is outside of the allowed memory range") +__failure_unpriv +__naked void shift_with_oversized_count_operand(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = 32; \ + r1 = 1; \ + /* r1 = (u32)1 << (u32)32 = ? */ \ + w1 <<= w2; \ + /* r1 = [0x0000, 0xffff] */ \ + r1 &= 0xffff; \ + /* computes unknown pointer, potentially OOB */ \ + r0 += r1; \ + /* potentially OOB access */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after right shift of maybe-negative number") +__failure __msg("R0 unbounded memory access") +__failure_unpriv +__naked void shift_of_maybe_negative_number(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + /* r1 = [0x00, 0xff] */ \ + r1 = *(u8*)(r0 + 0); \ + /* r1 = [-0x01, 0xfe] */ \ + r1 -= 1; \ + /* r1 = 0 or 0xff'ffff'ffff'ffff */ \ + r1 >>= 8; \ + /* r1 = 0 or 0xffff'ffff'ffff */ \ + r1 >>= 8; \ + /* computes unknown pointer, potentially OOB */ \ + r0 += r1; \ + /* potentially OOB access */ \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: /* exit */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check after 32-bit right shift with 64-bit input") +__failure __msg("math between map_value pointer and 4294967294 is not allowed") +__failure_unpriv +__naked void shift_with_64_bit_input(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 2; \ + /* r1 = 1<<32 */ \ + r1 <<= 31; \ + /* r1 = 0 (NOT 2!) */ \ + w1 >>= 31; \ + /* r1 = 0xffff'fffe (NOT 0!) */ \ + w1 -= 2; \ + /* error on computing OOB pointer */ \ + r0 += r1; \ + /* exit */ \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check map access with off+size signed 32bit overflow. 
test1") +__failure __msg("map_value pointer and 2147483646") +__failure_unpriv +__naked void size_signed_32bit_overflow_test1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r0 += 0x7ffffffe; \ + r0 = *(u64*)(r0 + 0); \ + goto l1_%=; \ +l1_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check map access with off+size signed 32bit overflow. test2") +__failure __msg("pointer offset 1073741822") +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__naked void size_signed_32bit_overflow_test2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r0 += 0x1fffffff; \ + r0 += 0x1fffffff; \ + r0 += 0x1fffffff; \ + r0 = *(u64*)(r0 + 0); \ + goto l1_%=; \ +l1_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check map access with off+size signed 32bit overflow. test3") +__failure __msg("pointer offset -1073741822") +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__naked void size_signed_32bit_overflow_test3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r0 -= 0x1fffffff; \ + r0 -= 0x1fffffff; \ + r0 = *(u64*)(r0 + 2); \ + goto l1_%=; \ +l1_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check map access with off+size signed 32bit overflow. test4") +__failure __msg("map_value pointer and 1000000000000") +__failure_unpriv +__naked void size_signed_32bit_overflow_test4(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = 1000000; \ + r1 *= 1000000; \ + r0 += r1; \ + r0 = *(u64*)(r0 + 2); \ + goto l1_%=; \ +l1_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check mixed 32bit and 64bit arithmetic. test1") +__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'") +__retval(0) +__naked void _32bit_and_64bit_arithmetic_test1(void) +{ + asm volatile (" \ + r0 = 0; \ + r1 = -1; \ + r1 <<= 32; \ + r1 += 1; \ + /* r1 = 0xffffFFFF00000001 */ \ + if w1 > 1 goto l0_%=; \ + /* check ALU64 op keeps 32bit bounds */ \ + r1 += 1; \ + if w1 > 2 goto l0_%=; \ + goto l1_%=; \ +l0_%=: /* invalid ldx if bounds are lost above */ \ + r0 = *(u64*)(r0 - 1); \ +l1_%=: exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("bounds check mixed 32bit and 64bit arithmetic. 
test2") +__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'") +__retval(0) +__naked void _32bit_and_64bit_arithmetic_test2(void) +{ + asm volatile (" \ + r0 = 0; \ + r1 = -1; \ + r1 <<= 32; \ + r1 += 1; \ + /* r1 = 0xffffFFFF00000001 */ \ + r2 = 3; \ + /* r1 = 0x2 */ \ + w1 += 1; \ + /* check ALU32 op zero extends 64bit bounds */ \ + if r1 > r2 goto l0_%=; \ + goto l1_%=; \ +l0_%=: /* invalid ldx if bounds are lost above */ \ + r0 = *(u64*)(r0 - 1); \ +l1_%=: exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("assigning 32bit bounds to 64bit for wA = 0, wB = wA") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void for_wa_0_wb_wa(void) +{ + asm volatile (" \ + r8 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data]); \ + w9 = 0; \ + w2 = w9; \ + r6 = r7; \ + r6 += r2; \ + r3 = r6; \ + r3 += 8; \ + if r3 > r8 goto l0_%=; \ + r5 = *(u32*)(r6 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("socket") +__description("bounds check for reg = 0, reg xor 1") +__success __failure_unpriv +__msg_unpriv("R0 min value is outside of the allowed memory range") +__retval(0) +__naked void reg_0_reg_xor_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = 0; \ + r1 ^= 1; \ + if r1 != 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check for reg32 = 0, reg32 xor 1") +__success __failure_unpriv +__msg_unpriv("R0 min value is outside of the allowed memory range") +__retval(0) +__naked void reg32_0_reg32_xor_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: w1 = 0; \ + w1 ^= 1; \ + if w1 != 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check for reg = 2, reg xor 3") +__success __failure_unpriv +__msg_unpriv("R0 min value is outside of the allowed memory range") +__retval(0) +__naked void reg_2_reg_xor_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = 2; \ + r1 ^= 3; \ + if r1 > 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check for reg = any, reg xor 3") +__failure __msg("invalid access to map value") +__msg_unpriv("invalid access to map value") +__naked void reg_any_reg_xor_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = *(u64*)(r0 + 0); \ + r1 ^= 3; \ + if r1 != 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + 
+SEC("socket") +__description("bounds check for reg32 = any, reg32 xor 3") +__failure __msg("invalid access to map value") +__msg_unpriv("invalid access to map value") +__naked void reg32_any_reg32_xor_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = *(u64*)(r0 + 0); \ + w1 ^= 3; \ + if w1 != 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check for reg > 0, reg xor 3") +__success __failure_unpriv +__msg_unpriv("R0 min value is outside of the allowed memory range") +__retval(0) +__naked void reg_0_reg_xor_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = *(u64*)(r0 + 0); \ + if r1 <= 0 goto l1_%=; \ + r1 ^= 3; \ + if r1 >= 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds check for reg32 > 0, reg32 xor 3") +__success __failure_unpriv +__msg_unpriv("R0 min value is outside of the allowed memory range") +__retval(0) +__naked void reg32_0_reg32_xor_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = *(u64*)(r0 + 0); \ + if w1 <= 0 goto l1_%=; \ + w1 ^= 3; \ + if w1 >= 0 goto l1_%=; \ + r0 = *(u64*)(r0 + 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks after 32-bit truncation. test 1") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) +__naked void _32_bit_truncation_test_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + /* This used to reduce the max bound to 0x7fffffff */\ + if r1 == 0 goto l1_%=; \ + if r1 > 0x7fffffff goto l0_%=; \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks after 32-bit truncation. 
test 2") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) +__naked void _32_bit_truncation_test_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + if r1 s< 1 goto l1_%=; \ + if w1 s< 0 goto l0_%=; \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("xdp") +__description("bound check with JMP_JLT for crossing 64-bit signed boundary") +__success __retval(0) +__naked void crossing_64_bit_signed_boundary_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 1; \ + if r1 > r3 goto l0_%=; \ + r1 = *(u8*)(r2 + 0); \ + r0 = 0x7fffffffffffff10 ll; \ + r1 += r0; \ + r0 = 0x8000000000000000 ll; \ +l1_%=: r0 += 1; \ + /* r1 unsigned range is [0x7fffffffffffff10, 0x800000000000000f] */\ + if r0 < r1 goto l1_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("bound check with JMP_JSLT for crossing 64-bit signed boundary") +__success __retval(0) +__naked void crossing_64_bit_signed_boundary_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 1; \ + if r1 > r3 goto l0_%=; \ + r1 = *(u8*)(r2 + 0); \ + r0 = 0x7fffffffffffff10 ll; \ + r1 += r0; \ + r2 = 0x8000000000000fff ll; \ + r0 = 0x8000000000000000 ll; \ +l1_%=: r0 += 1; \ + if r0 s> r2 goto l0_%=; \ + /* r1 signed range is [S64_MIN, S64_MAX] */ \ + if r0 s< r1 goto l1_%=; \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("bound check for loop upper bound greater than U32_MAX") +__success __retval(0) +__naked void bound_greater_than_u32_max(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 1; \ + if r1 > r3 goto l0_%=; \ + r1 = *(u8*)(r2 + 0); \ + r0 = 0x100000000 ll; \ + r1 += r0; \ + r0 = 0x100000000 ll; \ +l1_%=: r0 += 1; \ + if r0 < r1 goto l1_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("bound check with JMP32_JLT for crossing 32-bit signed boundary") +__success __retval(0) +__naked void crossing_32_bit_signed_boundary_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 1; \ + if r1 > r3 goto l0_%=; \ + r1 = *(u8*)(r2 + 0); \ + w0 = 0x7fffff10; \ + w1 += w0; \ + w0 = 0x80000000; \ +l1_%=: w0 += 1; \ + /* r1 unsigned range is [0, 0x8000000f] */ \ + if w0 < w1 goto l1_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("bound check with JMP32_JSLT for crossing 32-bit signed boundary") +__success __retval(0) +__naked void crossing_32_bit_signed_boundary_2(void) +{ + asm volatile (" \ + r2 = 
*(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 1; \ + if r1 > r3 goto l0_%=; \ + r1 = *(u8*)(r2 + 0); \ + w0 = 0x7fffff10; \ + w1 += w0; \ + w2 = 0x80000fff; \ + w0 = 0x80000000; \ +l1_%=: w0 += 1; \ + if w0 s> w2 goto l0_%=; \ + /* r1 signed range is [S32_MIN, S32_MAX] */ \ + if w0 s< w1 goto l1_%=; \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c deleted file mode 100644 index 43942ce8cf15..000000000000 --- a/tools/testing/selftests/bpf/verifier/bounds.c +++ /dev/null @@ -1,884 +0,0 @@ -{ - "subtraction bounds (map value) variant 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 7), - BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 5), - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 56), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 max value is outside of the allowed memory range", - .result = REJECT, -}, -{ - "subtraction bounds (map value) variant 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 6), - BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 4), - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.", - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds", - .result = REJECT, -}, -{ - "check subtraction on pointers for unpriv", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_LD_MAP_FD(BPF_REG_ARG1, 0), - BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 9), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_FP), - BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_ARG1, 0), - BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1, 9 }, - .result = ACCEPT, - 
.result_unpriv = REJECT, - .errstr_unpriv = "R9 pointer -= pointer prohibited", -}, -{ - "bounds check based on zero-extended MOV", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - /* r2 = 0x0000'0000'ffff'ffff */ - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff), - /* r2 = 0 */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32), - /* no-op */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - /* access at offset 0 */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT -}, -{ - "bounds check based on sign-extended MOV. test1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - /* r2 = 0xffff'ffff'ffff'ffff */ - BPF_MOV64_IMM(BPF_REG_2, 0xffffffff), - /* r2 = 0xffff'ffff */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32), - /* r0 = */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - /* access to OOB pointer */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "map_value pointer and 4294967295", - .result = REJECT -}, -{ - "bounds check based on sign-extended MOV. test2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - /* r2 = 0xffff'ffff'ffff'ffff */ - BPF_MOV64_IMM(BPF_REG_2, 0xffffffff), - /* r2 = 0xfff'ffff */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36), - /* r0 = */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - /* access to OOB pointer */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 min value is outside of the allowed memory range", - .result = REJECT -}, -{ - "bounds check based on reg_off + var_off + insn_off. test1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .errstr = "value_size=8 off=1073741825", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "bounds check based on reg_off + var_off + insn_off. 
test2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .errstr = "value 1073741823", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "bounds check after truncation of non-boundary-crossing range", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - /* r1 = [0x00, 0xff] */ - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_2, 1), - /* r2 = 0x10'0000'0000 */ - BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 36), - /* r1 = [0x10'0000'0000, 0x10'0000'00ff] */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), - /* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff), - /* r1 = [0x00, 0xff] */ - BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 0x7fffffff), - /* r1 = 0 */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8), - /* no-op */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* access at offset 0 */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT -}, -{ - "bounds check after truncation of boundary-crossing range (1)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - /* r1 = [0x00, 0xff] */ - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = [0xffff'ff80, 0x1'0000'007f] */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = [0xffff'ff80, 0xffff'ffff] or - * [0x0000'0000, 0x0000'007f] - */ - BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = [0x00, 0xff] or - * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff] - */ - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1), - /* error on OOB pointer computation */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - /* not actually fully unbounded, but the bound is very high */ - .errstr = "value -4294967168 makes map_value pointer be out of bounds", - .result = REJECT, -}, -{ - "bounds check after truncation of boundary-crossing range (2)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - /* r1 = [0x00, 0xff] */ - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, 
BPF_REG_1, 0xffffff80 >> 1), - /* r1 = [0xffff'ff80, 0x1'0000'007f] */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = [0xffff'ff80, 0xffff'ffff] or - * [0x0000'0000, 0x0000'007f] - * difference to previous test: truncation via MOV32 - * instead of ALU32. - */ - BPF_MOV32_REG(BPF_REG_1, BPF_REG_1), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = [0x00, 0xff] or - * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff] - */ - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1), - /* error on OOB pointer computation */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "value -4294967168 makes map_value pointer be out of bounds", - .result = REJECT, -}, -{ - "bounds check after wrapping 32-bit addition", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - /* r1 = 0x7fff'ffff */ - BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff), - /* r1 = 0xffff'fffe */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff), - /* r1 = 0 */ - BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 2), - /* no-op */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* access at offset 0 */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT -}, -{ - "bounds check after shift with oversized count operand", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_IMM(BPF_REG_2, 32), - BPF_MOV64_IMM(BPF_REG_1, 1), - /* r1 = (u32)1 << (u32)32 = ? 
*/ - BPF_ALU32_REG(BPF_LSH, BPF_REG_1, BPF_REG_2), - /* r1 = [0x0000, 0xffff] */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xffff), - /* computes unknown pointer, potentially OOB */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* potentially OOB access */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 max value is outside of the allowed memory range", - .result = REJECT -}, -{ - "bounds check after right shift of maybe-negative number", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - /* r1 = [0x00, 0xff] */ - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - /* r1 = [-0x01, 0xfe] */ - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1), - /* r1 = 0 or 0xff'ffff'ffff'ffff */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8), - /* r1 = 0 or 0xffff'ffff'ffff */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8), - /* computes unknown pointer, potentially OOB */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* potentially OOB access */ - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 unbounded memory access", - .result = REJECT -}, -{ - "bounds check after 32-bit right shift with 64-bit input", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - /* r1 = 2 */ - BPF_MOV64_IMM(BPF_REG_1, 2), - /* r1 = 1<<32 */ - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 31), - /* r1 = 0 (NOT 2!) */ - BPF_ALU32_IMM(BPF_RSH, BPF_REG_1, 31), - /* r1 = 0xffff'fffe (NOT 0!) */ - BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 2), - /* error on computing OOB pointer */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - /* exit */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "math between map_value pointer and 4294967294 is not allowed", - .result = REJECT, -}, -{ - "bounds check map access with off+size signed 32bit overflow. test1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), - BPF_JMP_A(0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "map_value pointer and 2147483646", - .result = REJECT -}, -{ - "bounds check map access with off+size signed 32bit overflow. 
test2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), - BPF_JMP_A(0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "pointer offset 1073741822", - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", - .result = REJECT -}, -{ - "bounds check map access with off+size signed 32bit overflow. test3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2), - BPF_JMP_A(0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "pointer offset -1073741822", - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", - .result = REJECT -}, -{ - "bounds check map access with off+size signed 32bit overflow. test4", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_1, 1000000), - BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 1000000), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2), - BPF_JMP_A(0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "map_value pointer and 1000000000000", - .result = REJECT -}, -{ - "bounds check mixed 32bit and 64bit arithmetic. test1", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - /* r1 = 0xffffFFFF00000001 */ - BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 1, 3), - /* check ALU64 op keeps 32bit bounds */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 2, 1), - BPF_JMP_A(1), - /* invalid ldx if bounds are lost above */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 invalid mem access 'scalar'", - .result_unpriv = REJECT, - .result = ACCEPT -}, -{ - "bounds check mixed 32bit and 64bit arithmetic. 
test2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - /* r1 = 0xffffFFFF00000001 */ - BPF_MOV64_IMM(BPF_REG_2, 3), - /* r1 = 0x2 */ - BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1), - /* check ALU32 op zero extends 64bit bounds */ - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 1), - BPF_JMP_A(1), - /* invalid ldx if bounds are lost above */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 invalid mem access 'scalar'", - .result_unpriv = REJECT, - .result = ACCEPT -}, -{ - "assigning 32bit bounds to 64bit for wA = 0, wB = wA", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_MOV32_IMM(BPF_REG_9, 0), - BPF_MOV32_REG(BPF_REG_2, BPF_REG_9), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_8, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_6, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "bounds check for reg = 0, reg xor 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 min value is outside of the allowed memory range", - .result_unpriv = REJECT, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, -}, -{ - "bounds check for reg32 = 0, reg32 xor 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 1), - BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 min value is outside of the allowed memory range", - .result_unpriv = REJECT, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, -}, -{ - "bounds check for reg = 2, reg xor 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_1, 2), - BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 min value is outside of the allowed memory range", - .result_unpriv = REJECT, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, -}, -{ - "bounds check for reg = any, reg xor 3", - .insns = { - 
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = REJECT, - .errstr = "invalid access to map value", - .errstr_unpriv = "invalid access to map value", -}, -{ - "bounds check for reg32 = any, reg32 xor 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 3), - BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = REJECT, - .errstr = "invalid access to map value", - .errstr_unpriv = "invalid access to map value", -}, -{ - "bounds check for reg > 0, reg xor 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JLE, BPF_REG_1, 0, 3), - BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 min value is outside of the allowed memory range", - .result_unpriv = REJECT, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, -}, -{ - "bounds check for reg32 > 0, reg32 xor 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP32_IMM(BPF_JLE, BPF_REG_1, 0, 3), - BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 3), - BPF_JMP32_IMM(BPF_JGE, BPF_REG_1, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 min value is outside of the allowed memory range", - .result_unpriv = REJECT, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, -}, -{ - "bounds checks after 32-bit truncation. 
test 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - /* This used to reduce the max bound to 0x7fffffff */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0x7fffffff, 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "bounds checks after 32-bit truncation. test 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_1, 1, 1), - BPF_JMP32_IMM(BPF_JSLT, BPF_REG_1, 0, 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "bound check with JMP_JLT for crossing 64-bit signed boundary", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 8), - - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), - BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - - BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - /* r1 unsigned range is [0x7fffffffffffff10, 0x800000000000000f] */ - BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "bound check with JMP_JSLT for crossing 64-bit signed boundary", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 13), - - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), - BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - - BPF_LD_IMM64(BPF_REG_2, 0x8000000000000fff), - BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_2, 3), - /* r1 signed range is [S64_MIN, S64_MAX] */ - BPF_JMP_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -3), - - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "bound check for loop upper bound greater than U32_MAX", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 8), - - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), - BPF_LD_IMM64(BPF_REG_0, 0x100000000), - BPF_ALU64_REG(BPF_ADD, 
BPF_REG_1, BPF_REG_0), - - BPF_LD_IMM64(BPF_REG_0, 0x100000000), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "bound check with JMP32_JLT for crossing 32-bit signed boundary", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 6), - - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), - BPF_MOV32_IMM(BPF_REG_0, 0x7fffff10), - BPF_ALU32_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - - BPF_MOV32_IMM(BPF_REG_0, 0x80000000), - BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1), - /* r1 unsigned range is [0, 0x8000000f] */ - BPF_JMP32_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "bound check with JMP32_JSLT for crossing 32-bit signed boundary", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 10), - - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), - BPF_MOV32_IMM(BPF_REG_0, 0x7fffff10), - BPF_ALU32_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - - BPF_MOV32_IMM(BPF_REG_2, 0x80000fff), - BPF_MOV32_IMM(BPF_REG_0, 0x80000000), - BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP32_REG(BPF_JSGT, BPF_REG_0, BPF_REG_2, 3), - /* r1 signed range is [S32_MIN, S32_MAX] */ - BPF_JMP32_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -3), - - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, From patchwork Fri Apr 21 17:42:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220510 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32616C7618E for ; Fri, 21 Apr 2023 17:43:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233250AbjDURnW (ORCPT ); Fri, 21 Apr 2023 13:43:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43126 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233278AbjDURnF (ORCPT ); Fri, 21 Apr 2023 13:43:05 -0400 Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com [IPv6:2a00:1450:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 031C45BA5 for ; Fri, 21 Apr 2023 10:42:51 -0700 (PDT) Received: by mail-wr1-x42b.google.com with SMTP id ffacd0b85a97d-2f3fe12de15so1274222f8f.3 for ; Fri, 21 Apr 2023 10:42:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098969; x=1684690969; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=7yDK7Ow/EQo3R038liwVGClD6X8zlhQtk+9SHaz2OOQ=; 
b=ajMn/F7KnTx5GKlBiHzwuvhMcqG2rcKSE+2RduddiwDUN0R5MWeU/mUJyOT0FjJiJk q4vt2pTz09/volbOnRq9t7H9hwkDarjgEx6VG74inVTGMm9ua+abvG1JDGa2MXorYRq9 xcGgeAZW2N/j6A/Yf+as8/4+t8/Up+jZny3Bs9K5UFY6fS/VTvuvywFfdVDMZ8R4QRhM x8iXK9tKv67kaabpiDX48UOpZeJIMOc23bND8xtStdAGQ5q/U4X6qAfX+ZVlS4AR7u8p lRJlUhEEEPyTzqCi6IFuCPgxsXmmgEDScQWgG9nqHQRmjI2ipG2YNWniBQHYlTs412zt sZ+w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098969; x=1684690969; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=7yDK7Ow/EQo3R038liwVGClD6X8zlhQtk+9SHaz2OOQ=; b=e3ClP5/mPFF4y0RK3wFRoSOYDEh5bVBIXwNAQ/kzESkOzICaxp+Zvc1PByK8jDqP+K lB8obrV7m1kWcPOS8N2AqqJlOgVUDHOrEBwVHtP6FJSfRVmvHL/Eh411PIpq2Xztrsco JDEtb5Irv9uX7L70PVxmE873hB8QioURoEEYphHkeXI0nrYmWa3x3OwgIlCM8R7sDfpA 5Y27VFIDOGd4tj8IhZp4LAFCzdM9eawTdWm9/R22I3vaud+gy32qGCTnJecGKHi90cj1 5Y6Y+dPi5veoal5hmx9+HNFXN58TsiZBbTefN7XLNtzBVumM1nVtav8P8RvRtenrej0c UCdQ== X-Gm-Message-State: AAQBX9f/H/6MPle40m9FtBvb3r0E4u/CI7nY/oleIPEeAjsxhssIJaOf Rn/AXJ3zY5m9DRwlRyPcD0T80NXn1QrnBg== X-Google-Smtp-Source: AKy350ZeFrN2Yu4e2tXbDmwRKWSxl0rfexYgG+37+3AGszM7d3ZxC86kdKNINrtg1xwRlPmvhjNong== X-Received: by 2002:a5d:4b91:0:b0:2f9:61d4:1183 with SMTP id b17-20020a5d4b91000000b002f961d41183mr4433399wrt.45.1682098968939; Fri, 21 Apr 2023 10:42:48 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:48 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 03/24] selftests/bpf: verifier/bpf_get_stack converted to inline assembly Date: Fri, 21 Apr 2023 20:42:13 +0300 Message-Id: <20230421174234.2391278-4-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/bpf_get_stack automatically converted to use inline assembly. 
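For readers who have not followed the earlier conversions, a minimal sketch of the general before/after pattern may help; the test below is made up purely for illustration and is not part of this patch. The old tools/testing/selftests/bpf/verifier/*.c entries are C initializer blocks consumed by test_verifier, while the converted form is a standalone BPF C object driven by the test_loader annotations from bpf_misc.h:

  /* old style: tools/testing/selftests/bpf/verifier/*.c (test_verifier table entry) */
  {
  	"return 42 is accepted",
  	.insns = {
  		BPF_MOV64_IMM(BPF_REG_0, 42),
  		BPF_EXIT_INSN(),
  	},
  	.result = ACCEPT,
  	.retval = 42,
  	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
  },

  /* new style: tools/testing/selftests/bpf/progs/verifier_*.c (test_loader) */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include "bpf_misc.h"

  SEC("tc")					/* was .prog_type = BPF_PROG_TYPE_SCHED_CLS */
  __description("return 42 is accepted")	/* was the test name string */
  __success __retval(42)			/* was .result = ACCEPT, .retval = 42 */
  __naked void return_42_is_accepted(void)
  {
  	asm volatile ("r0 = 42; exit;" ::: __clobber_all);
  }

  char _license[] SEC("license") = "GPL";

Map references, error strings and unprivileged expectations follow the same scheme (__imm_addr(), __msg(), __msg_unpriv()), as the larger conversions in this series show.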
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_bpf_get_stack.c | 124 ++++++++++++++++++ .../selftests/bpf/verifier/bpf_get_stack.c | 87 ------------ 3 files changed, 126 insertions(+), 87 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c delete mode 100644 tools/testing/selftests/bpf/verifier/bpf_get_stack.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index e61c9120e261..db55d125928c 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -10,6 +10,7 @@ #include "verifier_bounds_deduction.skel.h" #include "verifier_bounds_deduction_non_const.skel.h" #include "verifier_bounds_mix_sign_unsign.skel.h" +#include "verifier_bpf_get_stack.skel.h" #include "verifier_cfg.skel.h" #include "verifier_cgroup_inv_retcode.skel.h" #include "verifier_cgroup_skb.skel.h" @@ -85,6 +86,7 @@ void test_verifier_bounds(void) { RUN(verifier_bounds); } void test_verifier_bounds_deduction(void) { RUN(verifier_bounds_deduction); } void test_verifier_bounds_deduction_non_const(void) { RUN(verifier_bounds_deduction_non_const); } void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); } +void test_verifier_bpf_get_stack(void) { RUN(verifier_bpf_get_stack); } void test_verifier_cfg(void) { RUN(verifier_cfg); } void test_verifier_cgroup_inv_retcode(void) { RUN(verifier_cgroup_inv_retcode); } void test_verifier_cgroup_skb(void) { RUN(verifier_cgroup_skb); } diff --git a/tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c b/tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c new file mode 100644 index 000000000000..325a2bab4a71 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/bpf_get_stack.c */ + +#include +#include +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("tracepoint") +__description("bpf_get_stack return R0 within range") +__success +__naked void stack_return_r0_within_range(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + r9 = %[__imm_0]; \ + r1 = r6; \ + r2 = r7; \ + r3 = %[__imm_0]; \ + r4 = 256; \ + call %[bpf_get_stack]; \ + r1 = 0; \ + r8 = r0; \ + r8 <<= 32; \ + r8 s>>= 32; \ + if r1 s> r8 goto l0_%=; \ + r9 -= r8; \ + r2 = r7; \ + r2 += r8; \ + r1 = r9; \ + r1 <<= 32; \ + r1 s>>= 32; \ + r3 = r2; \ + r3 += r1; \ + r1 = r7; \ + r5 = %[__imm_0]; \ + r1 += r5; \ + if r3 >= r1 goto l0_%=; \ + r1 = r6; \ + r3 = r9; \ + r4 = 0; \ + call %[bpf_get_stack]; \ +l0_%=: exit; \ +" : + : __imm(bpf_get_stack), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) / 2) + : __clobber_all); +} + +SEC("iter/task") +__description("bpf_get_task_stack return R0 range is refined") +__success +__naked void 
return_r0_range_is_refined(void) +{ + asm volatile (" \ + r6 = *(u64*)(r1 + 0); \ + r6 = *(u64*)(r6 + 0); /* ctx->meta->seq */\ + r7 = *(u64*)(r1 + 8); /* ctx->task */\ + r1 = %[map_array_48b] ll; /* fixup_map_array_48b */\ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + r2 = r10; \ + r2 += -8; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: if r7 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r1 = r7; \ + r2 = r0; \ + r9 = r0; /* keep buf for seq_write */\ + r3 = 48; \ + r4 = 0; \ + call %[bpf_get_task_stack]; \ + if r0 s> 0 goto l2_%=; \ + r0 = 0; \ + exit; \ +l2_%=: r1 = r6; \ + r2 = r9; \ + r3 = r0; \ + call %[bpf_seq_write]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_task_stack), + __imm(bpf_map_lookup_elem), + __imm(bpf_seq_write), + __imm_addr(map_array_48b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c deleted file mode 100644 index 3e024c891178..000000000000 --- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c +++ /dev/null @@ -1,87 +0,0 @@ -{ - "bpf_get_stack return R0 within range", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)/2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_3, sizeof(struct test_val)/2), - BPF_MOV64_IMM(BPF_REG_4, 256), - BPF_EMIT_CALL(BPF_FUNC_get_stack), - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32), - BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16), - BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_9), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_1, 32), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_5, sizeof(struct test_val)/2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_5), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_9), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_EMIT_CALL(BPF_FUNC_get_stack), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bpf_get_task_stack return R0 range is refined", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_6, 0), // ctx->meta->seq - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_1, 8), // ctx->task - BPF_LD_MAP_FD(BPF_REG_1, 0), // fixup_map_array_48b - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - 
BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), // keep buf for seq_write - BPF_MOV64_IMM(BPF_REG_3, 48), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_EMIT_CALL(BPF_FUNC_get_task_stack), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_9), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_seq_write), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACING, - .expected_attach_type = BPF_TRACE_ITER, - .kfunc = "task", - .runs = -1, // Don't run, just load - .fixup_map_array_48b = { 3 }, -}, From patchwork Fri Apr 21 17:42:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220511 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76E23C77B76 for ; Fri, 21 Apr 2023 17:43:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232893AbjDURnY (ORCPT ); Fri, 21 Apr 2023 13:43:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43132 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233082AbjDURnF (ORCPT ); Fri, 21 Apr 2023 13:43:05 -0400 Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com [IPv6:2a00:1450:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3F9D10DB for ; Fri, 21 Apr 2023 10:42:51 -0700 (PDT) Received: by mail-wr1-x430.google.com with SMTP id ffacd0b85a97d-2f95231618aso1299168f8f.1 for ; Fri, 21 Apr 2023 10:42:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098970; x=1684690970; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oHTjUK1nlMMZ5K5TaRoj7e1KPdrrZbj1t7pD8k+4mIg=; b=JRcrwpo31YOT2UTdDN5WBMHH5mRBUs9KzUlOfztrNt17w6zv5hm8Yh+odXYkCAGvA5 EhomazDQjxXCFAf88EfSvVri47GFxNLeGyJGIuKhEXW0tkWZfRbZfqjL7sIfVxZJMZkR zr+OhBiLxqR8o7rJFYkx0k/Fk0fJ2LG07bZRh5w5xOxClnrPHLB2HlpsC3cc/hFgUmGo 1NpzwD+L1ebDnhtL4kI+t2LPXtLJa6B2rANqStPsnoIY9zcZsFuT3XtT3rtNCoKeVCuf ngn2GF3lBXWRZl16aeAJwTAa0Zj0+I37b6OFmj7ZB6sX5nOkHPRZMzb5FxEoAuYhxxtn iHMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098970; x=1684690970; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oHTjUK1nlMMZ5K5TaRoj7e1KPdrrZbj1t7pD8k+4mIg=; b=Zopm4KGzm3E07b7HirsIspYzSFqRUYfMPbJlkQEZpfvIZMDci4XSPB09AqFDm/rxU2 DOx7UnI4M/MMtAIpaXG+or9YJgg6TslIeC7/a1iDjUCuwB0XtFseifMW7N9oeHPryvmM k3TkvqaeXQJlzrjGJMKSjUx913Ks8+tNCgoZ2bYixlV2bZWEc4034KWSI/1S/uYpa4QY E9pUud7yijdAxlwFp1p2vQ/6oXIeJ5kY+N3ssChI4UNdT9prgyCYZxJy9BFeKv6meCGV Oi6XhDtQ5uAEmVU4vWVQC92vh/AQEbztr9hnIonOdA95Gel4kHF7yOJtJz0Du6WGnFgJ 3wqg== X-Gm-Message-State: AAQBX9dYTIaVSBo3O5gKYtGOpU2NOAoC2ImuQhvgHjrAdeXkpdWexUDB +Of3RSd5P/R3+T0ygpQyxzfWMRGcylhp/g== X-Google-Smtp-Source: AKy350b41cONiqyfsY2vXpZxMqgca+Y2fFnW0ofooj0wJxWGIt5i2xCvYWt8KHNm+jUvxqqxysQMfw== X-Received: by 2002:a5d:4d49:0:b0:2f6:4a39:9701 with SMTP id 
a9-20020a5d4d49000000b002f64a399701mr4099082wru.41.1682098970084; Fri, 21 Apr 2023 10:42:50 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:49 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 04/24] selftests/bpf: verifier/btf_ctx_access converted to inline assembly Date: Fri, 21 Apr 2023 20:42:14 +0300 Message-Id: <20230421174234.2391278-5-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/btf_ctx_access automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 ++ .../bpf/progs/verifier_btf_ctx_access.c | 32 +++++++++++++++++++ .../selftests/bpf/verifier/btf_ctx_access.c | 25 --------------- 3 files changed, 34 insertions(+), 25 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c delete mode 100644 tools/testing/selftests/bpf/verifier/btf_ctx_access.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index db55d125928c..b42601f7edcb 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -11,6 +11,7 @@ #include "verifier_bounds_deduction_non_const.skel.h" #include "verifier_bounds_mix_sign_unsign.skel.h" #include "verifier_bpf_get_stack.skel.h" +#include "verifier_btf_ctx_access.skel.h" #include "verifier_cfg.skel.h" #include "verifier_cgroup_inv_retcode.skel.h" #include "verifier_cgroup_skb.skel.h" @@ -87,6 +88,7 @@ void test_verifier_bounds_deduction(void) { RUN(verifier_bounds_deduction); void test_verifier_bounds_deduction_non_const(void) { RUN(verifier_bounds_deduction_non_const); } void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); } void test_verifier_bpf_get_stack(void) { RUN(verifier_bpf_get_stack); } +void test_verifier_btf_ctx_access(void) { RUN(verifier_btf_ctx_access); } void test_verifier_cfg(void) { RUN(verifier_cfg); } void test_verifier_cgroup_inv_retcode(void) { RUN(verifier_cgroup_inv_retcode); } void test_verifier_cgroup_skb(void) { RUN(verifier_cgroup_skb); } diff --git a/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c new file mode 100644 index 000000000000..a570e48b917a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/btf_ctx_access.c */ + +#include +#include +#include "bpf_misc.h" + +SEC("fentry/bpf_modify_return_test") +__description("btf_ctx_access accept") +__success __retval(0) +__naked void btf_ctx_access_accept(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + 8); /* load 2nd argument value (int pointer) */\ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("fentry/bpf_fentry_test9") +__description("btf_ctx_access u32 pointer accept") +__success 
__retval(0) +__naked void ctx_access_u32_pointer_accept(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + 0); /* load 1nd argument value (u32 pointer) */\ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/btf_ctx_access.c b/tools/testing/selftests/bpf/verifier/btf_ctx_access.c deleted file mode 100644 index 0484d3de040d..000000000000 --- a/tools/testing/selftests/bpf/verifier/btf_ctx_access.c +++ /dev/null @@ -1,25 +0,0 @@ -{ - "btf_ctx_access accept", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 8), /* load 2nd argument value (int pointer) */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACING, - .expected_attach_type = BPF_TRACE_FENTRY, - .kfunc = "bpf_modify_return_test", -}, - -{ - "btf_ctx_access u32 pointer accept", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), /* load 1nd argument value (u32 pointer) */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACING, - .expected_attach_type = BPF_TRACE_FENTRY, - .kfunc = "bpf_fentry_test9", -}, From patchwork Fri Apr 21 17:42:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220512 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3476FC77B78 for ; Fri, 21 Apr 2023 17:43:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233308AbjDURnZ (ORCPT ); Fri, 21 Apr 2023 13:43:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43190 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233305AbjDURnG (ORCPT ); Fri, 21 Apr 2023 13:43:06 -0400 Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com [IPv6:2a00:1450:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D6A7C49E0 for ; Fri, 21 Apr 2023 10:42:52 -0700 (PDT) Received: by mail-wr1-x42d.google.com with SMTP id ffacd0b85a97d-2f58125b957so1844145f8f.3 for ; Fri, 21 Apr 2023 10:42:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098971; x=1684690971; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FMALvQLQ9jCQgkNwTl1g4hr895uRYr2kc3tJMgZG3bU=; b=rwsblcMp+jd5gEX/m01VDx43QiAneG+PLPd9KiILLq4B2A7afXX00F8LzPRu/Lj+iy wMG8pw3mIFMi7fDHcC6RCTDwyXdNOEg3wnKzw8SKqJsMazlH8Qmy/M7Ud+fLVmGFiU+F XjzAVRUQG9iay5C9qDQ1hna123qcTRDGdfquo2RtwwXKr2616kJzJURkyqCcvhg6Muj6 9TC+z+qMrPtSA1FsM1lmuduakmnUrx+U75i/EZDTe2+/eWflqaZHmJXFwOXV5utuzWSF aOVDYLjMz9R5kKaPfyMrKCIb/6VCjsFF8ihvd9YFbyEOK9yOoYcQnIUGsoHwNCwqzgcr iwaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098971; x=1684690971; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FMALvQLQ9jCQgkNwTl1g4hr895uRYr2kc3tJMgZG3bU=; b=j7QfFCaZUrCZhU2wzr5Gq35KWTn+sFGZeXZyt14iYOcEnVNNiqnimcDhYY/qJOo8fo WvITZYPKRSyYfXrY5p5ulfLNOYh7BKSt07uB1uugEbGkSSdYxtr7hLdvzFSjwYjAKVcj 
hvELgiYHm8E7g4fl9B13hLKm+CHj5xVhL5GLT43o3tj/U+MIvnSU+NpSactez24Y8Wdy ir/5+ZOrFcsUFYC1z/ASVJOsAagt/c8Zbq9WoMnKC2m8faOOdErP39Xig2thNNfZ9Tt1 1Z91rshviF7LFeAAIWTh7kIH50bPyhlDmioex05GqLLuXHyYtbmW//YpJBdWIFXrHmP8 1OxA== X-Gm-Message-State: AAQBX9cTsbNLEl6KHcY4Ub4Go4NaLa95yLM3W4AV1+R+ExBa+jWleSAG 6k7y/YpluooX9BDKupELBqaYWdveG5SabQ== X-Google-Smtp-Source: AKy350Z8bzJ7seCWTD9nMlw3VYAYqa0VEllFBpaX36tZWAxu7a9AQBvB6/UsonFURYD3NKyeoUVK8Q== X-Received: by 2002:adf:e686:0:b0:303:e387:aabf with SMTP id r6-20020adfe686000000b00303e387aabfmr1855894wrm.71.1682098971135; Fri, 21 Apr 2023 10:42:51 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:50 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 05/24] selftests/bpf: verifier/ctx converted to inline assembly Date: Fri, 21 Apr 2023 20:42:15 +0300 Message-Id: <20230421174234.2391278-6-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/ctx automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_ctx.c | 221 ++++++++++++++++++ tools/testing/selftests/bpf/verifier/ctx.c | 186 --------------- 3 files changed, 223 insertions(+), 186 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx.c delete mode 100644 tools/testing/selftests/bpf/verifier/ctx.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index b42601f7edcb..f559bc3f7c2f 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -17,6 +17,7 @@ #include "verifier_cgroup_skb.skel.h" #include "verifier_cgroup_storage.skel.h" #include "verifier_const_or.skel.h" +#include "verifier_ctx.skel.h" #include "verifier_ctx_sk_msg.skel.h" #include "verifier_direct_stack_access_wraparound.skel.h" #include "verifier_div0.skel.h" @@ -94,6 +95,7 @@ void test_verifier_cgroup_inv_retcode(void) { RUN(verifier_cgroup_inv_retcode) void test_verifier_cgroup_skb(void) { RUN(verifier_cgroup_skb); } void test_verifier_cgroup_storage(void) { RUN(verifier_cgroup_storage); } void test_verifier_const_or(void) { RUN(verifier_const_or); } +void test_verifier_ctx(void) { RUN(verifier_ctx); } void test_verifier_ctx_sk_msg(void) { RUN(verifier_ctx_sk_msg); } void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); } void test_verifier_div0(void) { RUN(verifier_div0); } diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx.c b/tools/testing/selftests/bpf/progs/verifier_ctx.c new file mode 100644 index 000000000000..a83809a1dbbf --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_ctx.c @@ -0,0 +1,221 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/ctx.c */ + +#include +#include +#include "bpf_misc.h" + +SEC("tc") +__description("context stores via 
BPF_ATOMIC") +__failure __msg("BPF_ATOMIC stores into R1 ctx is not allowed") +__naked void context_stores_via_bpf_atomic(void) +{ + asm volatile (" \ + r0 = 0; \ + lock *(u32 *)(r1 + %[__sk_buff_mark]) += w0; \ + exit; \ +" : + : __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("arithmetic ops make PTR_TO_CTX unusable") +__failure __msg("dereference of modified ctx ptr") +__naked void make_ptr_to_ctx_unusable(void) +{ + asm volatile (" \ + r1 += %[__imm_0]; \ + r0 = *(u32*)(r1 + %[__sk_buff_mark]); \ + exit; \ +" : + : __imm_const(__imm_0, + offsetof(struct __sk_buff, data) - offsetof(struct __sk_buff, mark)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("pass unmodified ctx pointer to helper") +__success __retval(0) +__naked void unmodified_ctx_pointer_to_helper(void) +{ + asm volatile (" \ + r2 = 0; \ + call %[bpf_csum_update]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_update) + : __clobber_all); +} + +SEC("tc") +__description("pass modified ctx pointer to helper, 1") +__failure __msg("negative offset ctx ptr R1 off=-612 disallowed") +__naked void ctx_pointer_to_helper_1(void) +{ + asm volatile (" \ + r1 += -612; \ + r2 = 0; \ + call %[bpf_csum_update]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_update) + : __clobber_all); +} + +SEC("socket") +__description("pass modified ctx pointer to helper, 2") +__failure __msg("negative offset ctx ptr R1 off=-612 disallowed") +__failure_unpriv __msg_unpriv("negative offset ctx ptr R1 off=-612 disallowed") +__naked void ctx_pointer_to_helper_2(void) +{ + asm volatile (" \ + r1 += -612; \ + call %[bpf_get_socket_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_socket_cookie) + : __clobber_all); +} + +SEC("tc") +__description("pass modified ctx pointer to helper, 3") +__failure __msg("variable ctx access var_off=(0x0; 0x4)") +__naked void ctx_pointer_to_helper_3(void) +{ + asm volatile (" \ + r3 = *(u32*)(r1 + 0); \ + r3 &= 4; \ + r1 += r3; \ + r2 = 0; \ + call %[bpf_csum_update]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_update) + : __clobber_all); +} + +SEC("cgroup/sendmsg6") +__description("pass ctx or null check, 1: ctx") +__success +__naked void or_null_check_1_ctx(void) +{ + asm volatile (" \ + call %[bpf_get_netns_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_netns_cookie) + : __clobber_all); +} + +SEC("cgroup/sendmsg6") +__description("pass ctx or null check, 2: null") +__success +__naked void or_null_check_2_null(void) +{ + asm volatile (" \ + r1 = 0; \ + call %[bpf_get_netns_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_netns_cookie) + : __clobber_all); +} + +SEC("cgroup/sendmsg6") +__description("pass ctx or null check, 3: 1") +__failure __msg("R1 type=scalar expected=ctx") +__naked void or_null_check_3_1(void) +{ + asm volatile (" \ + r1 = 1; \ + call %[bpf_get_netns_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_netns_cookie) + : __clobber_all); +} + +SEC("cgroup/sendmsg6") +__description("pass ctx or null check, 4: ctx - const") +__failure __msg("negative offset ctx ptr R1 off=-612 disallowed") +__naked void null_check_4_ctx_const(void) +{ + asm volatile (" \ + r1 += -612; \ + call %[bpf_get_netns_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_netns_cookie) + : __clobber_all); +} + +SEC("cgroup/connect4") +__description("pass ctx or null check, 5: null (connect)") +__success +__naked void null_check_5_null_connect(void) +{ + asm volatile (" \ + 
r1 = 0; \ + call %[bpf_get_netns_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_netns_cookie) + : __clobber_all); +} + +SEC("cgroup/post_bind4") +__description("pass ctx or null check, 6: null (bind)") +__success +__naked void null_check_6_null_bind(void) +{ + asm volatile (" \ + r1 = 0; \ + call %[bpf_get_netns_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_netns_cookie) + : __clobber_all); +} + +SEC("cgroup/post_bind4") +__description("pass ctx or null check, 7: ctx (bind)") +__success +__naked void null_check_7_ctx_bind(void) +{ + asm volatile (" \ + call %[bpf_get_socket_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_socket_cookie) + : __clobber_all); +} + +SEC("cgroup/post_bind4") +__description("pass ctx or null check, 8: null (bind)") +__failure __msg("R1 type=scalar expected=ctx") +__naked void null_check_8_null_bind(void) +{ + asm volatile (" \ + r1 = 0; \ + call %[bpf_get_socket_cookie]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_socket_cookie) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c deleted file mode 100644 index 2fd31612c0b8..000000000000 --- a/tools/testing/selftests/bpf/verifier/ctx.c +++ /dev/null @@ -1,186 +0,0 @@ -{ - "context stores via BPF_ATOMIC", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "arithmetic ops make PTR_TO_CTX unusable", - .insns = { - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, - offsetof(struct __sk_buff, data) - - offsetof(struct __sk_buff, mark)), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .errstr = "dereference of modified ctx ptr", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "pass unmodified ctx pointer to helper", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_csum_update), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "pass modified ctx pointer to helper, 1", - .insns = { - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_csum_update), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "negative offset ctx ptr R1 off=-612 disallowed", -}, -{ - "pass modified ctx pointer to helper, 2", - .insns = { - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_socket_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result_unpriv = REJECT, - .result = REJECT, - .errstr_unpriv = "negative offset ctx ptr R1 off=-612 disallowed", - .errstr = "negative offset ctx ptr R1 off=-612 disallowed", -}, -{ - "pass modified ctx pointer to helper, 3", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_csum_update), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "variable ctx access var_off=(0x0; 0x4)", 
-}, -{ - "pass ctx or null check, 1: ctx", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_netns_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR, - .expected_attach_type = BPF_CGROUP_UDP6_SENDMSG, - .result = ACCEPT, -}, -{ - "pass ctx or null check, 2: null", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_netns_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR, - .expected_attach_type = BPF_CGROUP_UDP6_SENDMSG, - .result = ACCEPT, -}, -{ - "pass ctx or null check, 3: 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_netns_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR, - .expected_attach_type = BPF_CGROUP_UDP6_SENDMSG, - .result = REJECT, - .errstr = "R1 type=scalar expected=ctx", -}, -{ - "pass ctx or null check, 4: ctx - const", - .insns = { - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_netns_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR, - .expected_attach_type = BPF_CGROUP_UDP6_SENDMSG, - .result = REJECT, - .errstr = "negative offset ctx ptr R1 off=-612 disallowed", -}, -{ - "pass ctx or null check, 5: null (connect)", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_netns_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR, - .expected_attach_type = BPF_CGROUP_INET4_CONNECT, - .result = ACCEPT, -}, -{ - "pass ctx or null check, 6: null (bind)", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_netns_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, - .expected_attach_type = BPF_CGROUP_INET4_POST_BIND, - .result = ACCEPT, -}, -{ - "pass ctx or null check, 7: ctx (bind)", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_socket_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, - .expected_attach_type = BPF_CGROUP_INET4_POST_BIND, - .result = ACCEPT, -}, -{ - "pass ctx or null check, 8: null (bind)", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_get_socket_cookie), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, - .expected_attach_type = BPF_CGROUP_INET4_POST_BIND, - .result = REJECT, - .errstr = "R1 type=scalar expected=ctx", -}, From patchwork Fri Apr 21 17:42:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220513 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0EE33C7618E for ; Fri, 21 Apr 2023 17:43:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233376AbjDURn0 (ORCPT ); Fri, 21 Apr 2023 13:43:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43198 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233318AbjDURnH (ORCPT ); Fri, 21 Apr 2023 13:43:07 -0400 Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com [IPv6:2a00:1450:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5EA811BFC for ; Fri, 21 Apr 2023 10:42:54 -0700 (PDT) Received: by mail-wr1-x431.google.com with SMTP id ffacd0b85a97d-2fddb442d47so1845587f8f.2 for ; Fri, 21 Apr 2023 10:42:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098972; x=1684690972; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=vIY93kluNmo8IdAK/xBD53faZl2yu1KIRdFHLx14s00=; b=gTkzlvsSZz40oszOa472G1feYx4vcsFF3sWPXTbDI2FXqt68DVdtQIMu4mh7WHkNQy xWp5DJQw6u6kQy0YoNpkbJqQ0rge+AORyTGVaKMHeLbBt0f4izXeoJ0cmByvhfFj/KE5 nvO/ZdjqZi8iEWVxUI1BRbqvEXGeIwsOcdrGxz8TfYya1uGhkSMRSdjsCmppYN0I2nxH LCgRqqmJT6YFEjfymcyldHZpKCPJaUgQE28PoFGl71ibOzUrMvEbC3A3h14puRSSq1IJ 2NhA7YfEFnS+xn6akd6g6qd4tQLA293+akzia+N0PT/GWszh+ae9SISFVNY12r/uTxxY Q/3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098972; x=1684690972; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=vIY93kluNmo8IdAK/xBD53faZl2yu1KIRdFHLx14s00=; b=QlkQtFsFB4ySkQQw7/UtBog5RhmAbMRv74rYOUXA9YhVaiaI/ZU+1Es96flN4nRsAX OCnn7MlSpAj3bsRagN9wcNMx4CB54pkjdF1UIeGBl0WCGiVE2kbEG7dcBbF+DesiXnb/ iKEUomEF8n4H21j+nqYC9GkS3lBYTdSUd8cCfm7kBQmHZ8dC7GGLsHfUOyuoDr9ssVT+ 5oWJ4KT1rxoeU7gUpDNWC7bBVBDWvAAzmvDMHpvpAIIn+Mmp3QJff0hpGaqXDcszm62F hJRU5cBRRHTPIf7rXTd2qVM1HhxgR8z+oKdM/oDURDyou3oEt8rQL/4e9h5eI1YYf2yP b9hQ== X-Gm-Message-State: AAQBX9drAgoc96Wp/1gMV71dETwFtlud3I7QjuR7fj/PJwRJCiopepfZ tXRBmm6h30Dx5ETx8h7cqgPi/R8GWyJaNw== X-Google-Smtp-Source: AKy350bJ1UjRbl6fXFW9CVjENZGJRQ0xU0OT9UMfQgMeoOZzeVb1si54nL7/I+yDvm2U3/OJX23N8g== X-Received: by 2002:a5d:44cd:0:b0:2f2:cfc8:ab7c with SMTP id z13-20020a5d44cd000000b002f2cfc8ab7cmr4345892wrr.1.1682098972486; Fri, 21 Apr 2023 10:42:52 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:51 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 06/24] selftests/bpf: verifier/d_path converted to inline assembly Date: Fri, 21 Apr 2023 20:42:16 +0300 Message-Id: <20230421174234.2391278-7-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/d_path automatically converted to use inline assembly. 
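One detail worth calling out for fentry-based suites such as this one: attach information that used to be spread across struct fields (.prog_type, .expected_attach_type, .kfunc) is now carried entirely by the ELF section name, and an .errstr/REJECT pair becomes __failure __msg(), as the converted d_path reject test below shows. A minimal, made-up success-case sketch under those assumptions (not part of this patch):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include "bpf_misc.h"

  /* SEC("fentry/dentry_open") supplies what .prog_type = BPF_PROG_TYPE_TRACING,
   * .expected_attach_type = BPF_TRACE_FENTRY and .kfunc = "dentry_open"
   * expressed in the old table entry.
   */
  SEC("fentry/dentry_open")
  __description("fentry noop is accepted")	/* hypothetical test, illustration only */
  __success
  __naked void fentry_noop_is_accepted(void)
  {
  	asm volatile ("r0 = 0; exit;" ::: __clobber_all);
  }

  char _license[] SEC("license") = "GPL";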
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_d_path.c | 48 +++++++++++++++++++ tools/testing/selftests/bpf/verifier/d_path.c | 37 -------------- 3 files changed, 50 insertions(+), 37 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_d_path.c delete mode 100644 tools/testing/selftests/bpf/verifier/d_path.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index f559bc3f7c2f..ae4da5f7598c 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -19,6 +19,7 @@ #include "verifier_const_or.skel.h" #include "verifier_ctx.skel.h" #include "verifier_ctx_sk_msg.skel.h" +#include "verifier_d_path.skel.h" #include "verifier_direct_stack_access_wraparound.skel.h" #include "verifier_div0.skel.h" #include "verifier_div_overflow.skel.h" @@ -97,6 +98,7 @@ void test_verifier_cgroup_storage(void) { RUN(verifier_cgroup_storage); } void test_verifier_const_or(void) { RUN(verifier_const_or); } void test_verifier_ctx(void) { RUN(verifier_ctx); } void test_verifier_ctx_sk_msg(void) { RUN(verifier_ctx_sk_msg); } +void test_verifier_d_path(void) { RUN(verifier_d_path); } void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); } void test_verifier_div0(void) { RUN(verifier_div0); } void test_verifier_div_overflow(void) { RUN(verifier_div_overflow); } diff --git a/tools/testing/selftests/bpf/progs/verifier_d_path.c b/tools/testing/selftests/bpf/progs/verifier_d_path.c new file mode 100644 index 000000000000..ec79cbcfde91 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_d_path.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/d_path.c */ + +#include +#include +#include "bpf_misc.h" + +SEC("fentry/dentry_open") +__description("d_path accept") +__success __retval(0) +__naked void d_path_accept(void) +{ + asm volatile (" \ + r1 = *(u32*)(r1 + 0); \ + r2 = r10; \ + r2 += -8; \ + r6 = 0; \ + *(u64*)(r2 + 0) = r6; \ + r3 = 8 ll; \ + call %[bpf_d_path]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_d_path) + : __clobber_all); +} + +SEC("fentry/d_path") +__description("d_path reject") +__failure __msg("helper call is not allowed in probe") +__naked void d_path_reject(void) +{ + asm volatile (" \ + r1 = *(u32*)(r1 + 0); \ + r2 = r10; \ + r2 += -8; \ + r6 = 0; \ + *(u64*)(r2 + 0) = r6; \ + r3 = 8 ll; \ + call %[bpf_d_path]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_d_path) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/d_path.c b/tools/testing/selftests/bpf/verifier/d_path.c deleted file mode 100644 index b988396379a7..000000000000 --- a/tools/testing/selftests/bpf/verifier/d_path.c +++ /dev/null @@ -1,37 +0,0 @@ -{ - "d_path accept", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_6, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0), - BPF_LD_IMM64(BPF_REG_3, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACING, - .expected_attach_type = BPF_TRACE_FENTRY, - .kfunc = "dentry_open", -}, -{ - "d_path reject", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 0), - 
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_6, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0), - BPF_LD_IMM64(BPF_REG_3, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "helper call is not allowed in probe", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACING, - .expected_attach_type = BPF_TRACE_FENTRY, - .kfunc = "d_path", -}, From patchwork Fri Apr 21 17:42:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220518 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BC5BC77B7F for ; Fri, 21 Apr 2023 17:43:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233318AbjDURn1 (ORCPT ); Fri, 21 Apr 2023 13:43:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233175AbjDURnL (ORCPT ); Fri, 21 Apr 2023 13:43:11 -0400 Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com [IPv6:2a00:1450:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4DF3E55 for ; Fri, 21 Apr 2023 10:42:55 -0700 (PDT) Received: by mail-wr1-x430.google.com with SMTP id ffacd0b85a97d-3023a56048bso1532382f8f.3 for ; Fri, 21 Apr 2023 10:42:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098974; x=1684690974; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=86+vKaa0Y/wDRSeqvnAcwiFf5Ld26MElEI3W8iSZvGM=; b=Hk+rGLUMh4oDPxJyu6DNYBN0vCBWm+ejZrODxbkQamRB601K208QnrY3PyjzBjjTng XAs6n3EUhnXDdLLdW0qixXcZTnqlSEECmiAZ5dOgPhVdaTHT82hMu83v47A76y6vSbbZ OD2tkaGoUGcMC8RwQtPoNvP+0wK8FBJxKxa/326Uts+RtRP/8DPKQK8BvMx4u4l2ft9c RqMjJXlFeNt7CucNU8Iu/mEboGay5UA5I+tt1Bmp7ZiSWTJvm7a3fZuOnTt5V+VveVB5 21fGossS5fzAQMpklD/Sk/vtq6bUCLW0t4f3jRJm9h7zPYIyMiHIUsltCbLY282+5rUE DQbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098974; x=1684690974; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=86+vKaa0Y/wDRSeqvnAcwiFf5Ld26MElEI3W8iSZvGM=; b=D2dV5+JDl07ZDt7mh8Gt0xGu0jj27S4UEpeHi0oYkK20lygGAoDjbJQkj3YkrTEG5U DDxmR//HJ4w58PceSH8kZhdZG4B38e0uDOH6GdzlLVx+6j3EkdiP184Qz269XlWW6all JX17omLtx3N04PT3UPgLNfctHXgxaourz2vBbSwNq03vDCJjVcVU33qgRsfGG7w8P6l5 piSRaDx4paCtbJVc28nc2i0MDJkt5ELTDkVEJQoUOjdEPqt9w+555OerWExw/WLNbE1W B4vl2C4W3JvOfmJYZLaymQjW9EjK0DhLlZ5sKcWszGb56i8z+MUsrwwJnAuLEUM3lesq nuRA== X-Gm-Message-State: AAQBX9dc5hqfuZQ1ZnQgCMajWt2JyZyo123Ow3K561mHZA6m+wt4VsTi tA1/eHulITpyXfyzTc//uUOc7MnemDUDvA== X-Google-Smtp-Source: AKy350YBcyRQwyW5j4rJSn3zRgzwUR7zZ9Cd7lgYscLta/w1VuuiI3j4IoXUTXg6Kj/a1tTjddWYbQ== X-Received: by 2002:a05:6000:10c8:b0:2f5:b1aa:679c with SMTP id b8-20020a05600010c800b002f5b1aa679cmr4401299wrx.39.1682098973676; Fri, 21 Apr 2023 10:42:53 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. 
[93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:53 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 07/24] selftests/bpf: verifier/direct_packet_access converted to inline assembly Date: Fri, 21 Apr 2023 20:42:17 +0300 Message-Id: <20230421174234.2391278-8-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/direct_packet_access automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_direct_packet_access.c | 803 ++++++++++++++++++ .../bpf/verifier/direct_packet_access.c | 710 ---------------- 3 files changed, 805 insertions(+), 710 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c delete mode 100644 tools/testing/selftests/bpf/verifier/direct_packet_access.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index ae4da5f7598c..2c9e61b9a83e 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -20,6 +20,7 @@ #include "verifier_ctx.skel.h" #include "verifier_ctx_sk_msg.skel.h" #include "verifier_d_path.skel.h" +#include "verifier_direct_packet_access.skel.h" #include "verifier_direct_stack_access_wraparound.skel.h" #include "verifier_div0.skel.h" #include "verifier_div_overflow.skel.h" @@ -99,6 +100,7 @@ void test_verifier_const_or(void) { RUN(verifier_const_or); } void test_verifier_ctx(void) { RUN(verifier_ctx); } void test_verifier_ctx_sk_msg(void) { RUN(verifier_ctx_sk_msg); } void test_verifier_d_path(void) { RUN(verifier_d_path); } +void test_verifier_direct_packet_access(void) { RUN(verifier_direct_packet_access); } void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); } void test_verifier_div0(void) { RUN(verifier_div0); } void test_verifier_div_overflow(void) { RUN(verifier_div_overflow); } diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c new file mode 100644 index 000000000000..99a23dea8233 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c @@ -0,0 +1,803 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/direct_packet_access.c */ + +#include +#include +#include "bpf_misc.h" + +SEC("tc") +__description("pkt_end - pkt_start is allowed") +__success __retval(TEST_DATA_LEN) +__naked void end_pkt_start_is_allowed(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r0 -= r2; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test1") +__success __retval(0) +__naked void 
direct_packet_access_test1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test2") +__success __retval(0) +__naked void direct_packet_access_test2(void) +{ + asm volatile (" \ + r0 = 1; \ + r4 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data]); \ + r5 = r3; \ + r5 += 14; \ + if r5 > r4 goto l0_%=; \ + r0 = *(u8*)(r3 + 7); \ + r4 = *(u8*)(r3 + 12); \ + r4 *= 14; \ + r3 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 += r4; \ + r2 = *(u32*)(r1 + %[__sk_buff_len]); \ + r2 <<= 49; \ + r2 >>= 49; \ + r3 += r2; \ + r2 = r3; \ + r2 += 8; \ + r1 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + if r2 > r1 goto l1_%=; \ + r1 = *(u8*)(r3 + 4); \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("direct packet access: test3") +__failure __msg("invalid bpf_context access off=76") +__failure_unpriv +__naked void direct_packet_access_test3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test4 (write)") +__success __retval(0) +__naked void direct_packet_access_test4_write(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test5 (pkt_end >= reg, good access)") +__success __retval(0) +__naked void pkt_end_reg_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 >= r0 goto l0_%=; \ + r0 = 1; \ + exit; \ +l0_%=: r0 = *(u8*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test6 (pkt_end >= reg, bad access)") +__failure __msg("invalid access to packet") +__naked void pkt_end_reg_bad_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 >= r0 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test7 (pkt_end >= reg, both accesses)") +__failure __msg("invalid access to packet") +__naked void pkt_end_reg_both_accesses(void) +{ + asm volatile 
(" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 >= r0 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ + r0 = 1; \ + exit; \ +l0_%=: r0 = *(u8*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test8 (double test, variant 1)") +__success __retval(0) +__naked void test8_double_test_variant_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 >= r0 goto l0_%=; \ + if r0 > r3 goto l1_%=; \ + r0 = *(u8*)(r2 + 0); \ +l1_%=: r0 = 1; \ + exit; \ +l0_%=: r0 = *(u8*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test9 (double test, variant 2)") +__success __retval(0) +__naked void test9_double_test_variant_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 >= r0 goto l0_%=; \ + r0 = 1; \ + exit; \ +l0_%=: if r0 > r3 goto l1_%=; \ + r0 = *(u8*)(r2 + 0); \ +l1_%=: r0 = *(u8*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test10 (write invalid)") +__failure __msg("invalid access to packet") +__naked void packet_access_test10_write_invalid(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: *(u8*)(r2 + 0) = r2; \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test11 (shift, good access)") +__success __retval(1) +__naked void access_test11_shift_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 22; \ + if r0 > r3 goto l0_%=; \ + r3 = 144; \ + r5 = r3; \ + r5 += 23; \ + r5 >>= 3; \ + r6 = r2; \ + r6 += r5; \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test12 (and, good access)") +__success __retval(1) +__naked void access_test12_and_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 22; \ + if r0 > r3 goto l0_%=; \ + r3 = 144; \ + r5 = r3; \ + r5 += 23; \ + r5 &= 15; \ + r6 = r2; \ + r6 += r5; \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test13 (branches, good 
access)") +__success __retval(1) +__naked void access_test13_branches_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 22; \ + if r0 > r3 goto l0_%=; \ + r3 = *(u32*)(r1 + %[__sk_buff_mark]); \ + r4 = 1; \ + if r3 > r4 goto l1_%=; \ + r3 = 14; \ + goto l2_%=; \ +l1_%=: r3 = 24; \ +l2_%=: r5 = r3; \ + r5 += 23; \ + r5 &= 15; \ + r6 = r2; \ + r6 += r5; \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test14 (pkt_ptr += 0, CONST_IMM, good access)") +__success __retval(1) +__naked void _0_const_imm_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 22; \ + if r0 > r3 goto l0_%=; \ + r5 = 12; \ + r5 >>= 4; \ + r6 = r2; \ + r6 += r5; \ + r0 = *(u8*)(r6 + 0); \ + r0 = 1; \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test15 (spill with xadd)") +__failure __msg("R2 invalid mem access 'scalar'") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void access_test15_spill_with_xadd(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r5 = 4096; \ + r4 = r10; \ + r4 += -8; \ + *(u64*)(r4 + 0) = r2; \ + lock *(u64 *)(r4 + 0) += r5; \ + r2 = *(u64*)(r4 + 0); \ + *(u32*)(r2 + 0) = r5; \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test16 (arith on data_end)") +__failure __msg("R3 pointer arithmetic on pkt_end") +__naked void test16_arith_on_data_end(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + r3 += 16; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test17 (pruning, alignment)") +__failure __msg("misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4") +__flag(BPF_F_STRICT_ALIGNMENT) +__naked void packet_access_test17_pruning_alignment(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r7 = *(u32*)(r1 + %[__sk_buff_mark]); \ + r0 = r2; \ + r0 += 14; \ + if r7 > 1 goto l0_%=; \ +l2_%=: if r0 > r3 goto l1_%=; \ + *(u32*)(r0 - 4) = r0; \ +l1_%=: r0 = 0; \ + exit; \ +l0_%=: r0 += 1; \ + goto l2_%=; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test18 (imm += pkt_ptr, 1)") 
+__success __retval(0) +__naked void test18_imm_pkt_ptr_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = 8; \ + r0 += r2; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test19 (imm += pkt_ptr, 2)") +__success __retval(0) +__naked void test19_imm_pkt_ptr_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r4 = 4; \ + r4 += r2; \ + *(u8*)(r4 + 0) = r4; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test20 (x += pkt_ptr, 1)") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void test20_x_pkt_ptr_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = 0xffffffff; \ + *(u64*)(r10 - 8) = r0; \ + r0 = *(u64*)(r10 - 8); \ + r0 &= 0x7fff; \ + r4 = r0; \ + r4 += r2; \ + r5 = r4; \ + r4 += %[__imm_0]; \ + if r4 > r3 goto l0_%=; \ + *(u64*)(r5 + 0) = r4; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__imm_0, 0x7fff - 1), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test21 (x += pkt_ptr, 2)") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void test21_x_pkt_ptr_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r4 = 0xffffffff; \ + *(u64*)(r10 - 8) = r4; \ + r4 = *(u64*)(r10 - 8); \ + r4 &= 0x7fff; \ + r4 += r2; \ + r5 = r4; \ + r4 += %[__imm_0]; \ + if r4 > r3 goto l0_%=; \ + *(u64*)(r5 + 0) = r4; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__imm_0, 0x7fff - 1), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test22 (x += pkt_ptr, 3)") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void test22_x_pkt_ptr_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + *(u64*)(r10 - 8) = r2; \ + *(u64*)(r10 - 16) = r3; \ + r3 = *(u64*)(r10 - 16); \ + if r0 > r3 goto l0_%=; \ + r2 = *(u64*)(r10 - 8); \ + r4 = 0xffffffff; \ + lock *(u64 *)(r10 - 8) += r4; \ + r4 = *(u64*)(r10 - 8); \ + r4 >>= 49; \ + r4 += r2; \ + r0 = r4; \ + r0 += 2; \ + if r0 > r3 goto l0_%=; \ + r2 = 1; \ + *(u16*)(r4 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test23 (x += pkt_ptr, 4)") +__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void test23_x_pkt_ptr_4(void) +{ + asm 
volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = *(u32*)(r1 + %[__sk_buff_mark]); \ + *(u64*)(r10 - 8) = r0; \ + r0 = *(u64*)(r10 - 8); \ + r0 &= 0xffff; \ + r4 = r0; \ + r0 = 31; \ + r0 += r4; \ + r0 += r2; \ + r5 = r0; \ + r0 += %[__imm_0]; \ + if r0 > r3 goto l0_%=; \ + *(u64*)(r5 + 0) = r0; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffff - 1), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test24 (x += pkt_ptr, 5)") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void test24_x_pkt_ptr_5(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = 0xffffffff; \ + *(u64*)(r10 - 8) = r0; \ + r0 = *(u64*)(r10 - 8); \ + r0 &= 0xff; \ + r4 = r0; \ + r0 = 64; \ + r0 += r4; \ + r0 += r2; \ + r5 = r0; \ + r0 += %[__imm_0]; \ + if r0 > r3 goto l0_%=; \ + *(u64*)(r5 + 0) = r0; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__imm_0, 0x7fff - 1), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test25 (marking on <, good access)") +__success __retval(0) +__naked void test25_marking_on_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 < r3 goto l0_%=; \ +l1_%=: r0 = 0; \ + exit; \ +l0_%=: r0 = *(u8*)(r2 + 0); \ + goto l1_%=; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test26 (marking on <, bad access)") +__failure __msg("invalid access to packet") +__naked void test26_marking_on_bad_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 < r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l1_%=: r0 = 0; \ + exit; \ +l0_%=: goto l1_%=; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test27 (marking on <=, good access)") +__success __retval(1) +__naked void test27_marking_on_good_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 <= r0 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test28 (marking on <=, bad access)") +__failure __msg("invalid access to packet") +__naked void test28_marking_on_bad_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r3 <= r0 goto l0_%=; \ +l1_%=: r0 = 1; \ + exit; \ +l0_%=: r0 = *(u8*)(r2 + 0); \ + goto l1_%=; \ +" : + : 
__imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("direct packet access: test29 (reg > pkt_end in subprog)") +__success __retval(0) +__naked void reg_pkt_end_in_subprog(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r2 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r3 = r6; \ + r3 += 8; \ + call reg_pkt_end_in_subprog__1; \ + if r0 == 0 goto l0_%=; \ + r0 = *(u8*)(r6 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void reg_pkt_end_in_subprog__1(void) +{ + asm volatile (" \ + r0 = 0; \ + if r3 > r2 goto l0_%=; \ + r0 = 1; \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("direct packet access: test30 (check_id() in regsafe(), bad access)") +__failure __msg("invalid access to packet, off=0 size=1, R2") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void id_in_regsafe_bad_access(void) +{ + asm volatile (" \ + /* r9 = ctx */ \ + r9 = r1; \ + /* r7 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r7 = r0; \ + /* r6 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r6 = r0; \ + /* r2 = ctx->data \ + * r3 = ctx->data \ + * r4 = ctx->data_end \ + */ \ + r2 = *(u32*)(r9 + %[__sk_buff_data]); \ + r3 = *(u32*)(r9 + %[__sk_buff_data]); \ + r4 = *(u32*)(r9 + %[__sk_buff_data_end]); \ + /* if r6 > 100 goto exit \ + * if r7 > 100 goto exit \ + */ \ + if r6 > 100 goto l0_%=; \ + if r7 > 100 goto l0_%=; \ + /* r2 += r6 ; this forces assignment of ID to r2\ + * r2 += 1 ; get some fixed off for r2\ + * r3 += r7 ; this forces assignment of ID to r3\ + * r3 += 1 ; get some fixed off for r3\ + */ \ + r2 += r6; \ + r2 += 1; \ + r3 += r7; \ + r3 += 1; \ + /* if r6 > r7 goto +1 ; no new information about the state is derived from\ + * ; this check, thus produced verifier states differ\ + * ; only in 'insn_idx' \ + * r2 = r3 ; optionally share ID between r2 and r3\ + */ \ + if r6 != r7 goto l1_%=; \ + r2 = r3; \ +l1_%=: /* if r3 > ctx->data_end goto exit */ \ + if r3 > r4 goto l0_%=; \ + /* r5 = *(u8 *) (r2 - 1) ; access packet memory using r2,\ + * ; this is not always safe\ + */ \ + r5 = *(u8*)(r2 - 1); \ +l0_%=: /* exit(0) */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/direct_packet_access.c b/tools/testing/selftests/bpf/verifier/direct_packet_access.c deleted file mode 100644 index dce2e28aeb43..000000000000 --- a/tools/testing/selftests/bpf/verifier/direct_packet_access.c +++ /dev/null @@ -1,710 +0,0 @@ -{ - "pkt_end - pkt_start is allowed", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = TEST_DATA_LEN, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct 
__sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14), - BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 7), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_3, 12), - BPF_ALU64_IMM(BPF_MUL, BPF_REG_4, 14), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 49), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 49), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_3, 4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test3", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid bpf_context access off=76", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SOCKET_FILTER, -}, -{ - "direct packet access: test4 (write)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test5 (pkt_end >= reg, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test6 (pkt_end >= reg, bad access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid access to packet", - .result = REJECT, - .prog_type = 
BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test7 (pkt_end >= reg, both accesses)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid access to packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test8 (double test, variant 1)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 4), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test9 (double test, variant 2)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test10 (write invalid)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid access to packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test11 (shift, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8), - BPF_MOV64_IMM(BPF_REG_3, 144), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 3), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 1, -}, -{ - "direct packet access: test12 (and, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, 
BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8), - BPF_MOV64_IMM(BPF_REG_3, 144), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23), - BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 1, -}, -{ - "direct packet access: test13 (branches, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 13), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_MOV64_IMM(BPF_REG_4, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 2), - BPF_MOV64_IMM(BPF_REG_3, 14), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_MOV64_IMM(BPF_REG_3, 24), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23), - BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 1, -}, -{ - "direct packet access: test14 (pkt_ptr += 0, CONST_IMM, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7), - BPF_MOV64_IMM(BPF_REG_5, 12), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 4), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 1, -}, -{ - "direct packet access: test15 (spill with xadd)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8), - BPF_MOV64_IMM(BPF_REG_5, 4096), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_4, BPF_REG_5, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_4, 0), - BPF_STX_MEM(BPF_W, BPF_REG_2, BPF_REG_5, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R2 invalid mem access 'scalar'", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet access: test16 (arith on data_end)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, 
BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 16), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R3 pointer arithmetic on pkt_end", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test17 (pruning, alignment)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 14), - BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 1, 4), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_A(-6), - }, - .errstr = "misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .flags = F_LOAD_WITH_STRICT_ALIGNMENT, -}, -{ - "direct packet access: test18 (imm += pkt_ptr, 1)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_IMM(BPF_REG_0, 8), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test19 (imm += pkt_ptr, 2)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3), - BPF_MOV64_IMM(BPF_REG_4, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), - BPF_STX_MEM(BPF_B, BPF_REG_4, BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test20 (x += pkt_ptr, 1)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_IMM(BPF_REG_0, 0xffffffff), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0x7fff), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet access: test21 (x += pkt_ptr, 2)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, 
BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9), - BPF_MOV64_IMM(BPF_REG_4, 0xffffffff), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), - BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 0x7fff), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet access: test22 (x += pkt_ptr, 3)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_3, -16), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -16), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 11), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8), - BPF_MOV64_IMM(BPF_REG_4, 0xffffffff), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_4, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_4, 49), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_2, 1), - BPF_STX_MEM(BPF_H, BPF_REG_4, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet access: test23 (x += pkt_ptr, 4)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, 31), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0xffff - 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet access: test24 (x += pkt_ptr, 5)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_IMM(BPF_REG_0, 0xffffffff), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xff), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, 64), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7fff - 1), - 
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet access: test25 (marking on <, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, -4), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test26 (marking on <, bad access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 3), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JA, 0, 0, -3), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test27 (marking on <=, good access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 1, -}, -{ - "direct packet access: test28 (marking on <=, bad access)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 2), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, -4), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test29 (reg > pkt_end in subprog)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 1), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "direct packet access: test30 (check_id() in regsafe(), bad access)", - .insns = { - /* r9 = ctx */ - BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), - /* r7 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), 
- /* r6 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* r2 = ctx->data - * r3 = ctx->data - * r4 = ctx->data_end - */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_9, offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_9, offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_9, offsetof(struct __sk_buff, data_end)), - /* if r6 > 100 goto exit - * if r7 > 100 goto exit - */ - BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 100, 9), - BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 100, 8), - /* r2 += r6 ; this forces assignment of ID to r2 - * r2 += 1 ; get some fixed off for r2 - * r3 += r7 ; this forces assignment of ID to r3 - * r3 += 1 ; get some fixed off for r3 - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1), - /* if r6 > r7 goto +1 ; no new information about the state is derived from - * ; this check, thus produced verifier states differ - * ; only in 'insn_idx' - * r2 = r3 ; optionally share ID between r2 and r3 - */ - BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_3), - /* if r3 > ctx->data_end goto exit */ - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 1), - /* r5 = *(u8 *) (r2 - 1) ; access packet memory using r2, - * ; this is not always safe - */ - BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_2, -1), - /* exit(0) */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .flags = BPF_F_TEST_STATE_FREQ, - .result = REJECT, - .errstr = "invalid access to packet, off=0 size=1, R2", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, From patchwork Fri Apr 21 17:42:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220515 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E9EEC77B76 for ; Fri, 21 Apr 2023 17:43:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229641AbjDURn1 (ORCPT ); Fri, 21 Apr 2023 13:43:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43392 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233338AbjDURnM (ORCPT ); Fri, 21 Apr 2023 13:43:12 -0400 Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com [IPv6:2a00:1450:4864:20::32a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EB1A355AF for ; Fri, 21 Apr 2023 10:42:56 -0700 (PDT) Received: by mail-wm1-x32a.google.com with SMTP id 5b1f17b1804b1-3f1957e80a2so18116835e9.1 for ; Fri, 21 Apr 2023 10:42:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098975; x=1684690975; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NeJvSxjx784sBWolWdaqQIutCvZLOHAA7uZsJzJtHT8=; b=g8IjyA/9M3Qvk5z1avhsPIyn3bKYnXiDY+e8d2LUgCGQz+B8D16NUrrpEFJ74JZrBy DkFRskPnr8s5v4U0NTjZqDtY2CBd3Pgt1hXz94VdkgkZqh4JsCM/SqAq0C0clpGHXqpx b5X81acwArS21uLC6Pe7qyN9Twf5IJdxeGCcVqwZnCdg4jh5BUzkIXPg8YtJDP/fUFdl ItX9woF99fgqn0srvAvOBTiBBh+wywG9MeTUnqd9WZnQfsx4D19tpILplnenx+8alJCZ 1N+tcjRl855hLKIDCJYOirkV/EA1KG9WnGcc4sBNte9ph8KTC/TE0dLZZ50jLR/NNwQB fceA== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098975; x=1684690975; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NeJvSxjx784sBWolWdaqQIutCvZLOHAA7uZsJzJtHT8=; b=i0z16YCXFMH/GzXaCfhPesWCb9hIfhl6jXpwnQvNpjr/nId+RsJh7VeZwKTrEOQJPn 2QwTC7iTm9lW63BdfH7gBr4rro4aHAwWrGV8MUJbOa58LQVGrIxrZ6HcMtaromYhyp03 J4A57r32BSTY2TQwF2qMkIIf+1XNdn34DCsEoDSGKx/oztVFa5/U360vFS+4MqTxuTTB PV4HORxSdcFSnqaUSv4y0yJLngxHtONpK3on2tv8a+/qVkuYcaIKstD7a7ZBVxv4a8PF 6x12EChUl5cvUVbC7+JbM3vh8MiddJM+hM4kbDPsZYsXIcvHLe5Sif9rXHLM8ayU9UAj A2bw== X-Gm-Message-State: AAQBX9emKFaI2H4fPs7pMQ4hqjs1ZtA3fVbKVkixVP/ftZaWaYPMEv+x aYVZ3wk4U9LK+R/v6ImpsWdH584N2OwPBQ== X-Google-Smtp-Source: AKy350YE0MuzpAgNqI4On5gAEvrr6Ay4EZqy0s8WiR4RBPKxyfMQZygw5WQSCBECUVZyxKP9g/TGYQ== X-Received: by 2002:a5d:4645:0:b0:2cf:efa5:5322 with SMTP id j5-20020a5d4645000000b002cfefa55322mr8421264wrs.14.1682098974771; Fri, 21 Apr 2023 10:42:54 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:54 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 08/24] selftests/bpf: verifier/jeq_infer_not_null converted to inline assembly Date: Fri, 21 Apr 2023 20:42:18 +0300 Message-Id: <20230421174234.2391278-9-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/jeq_infer_not_null automatically converted to use inline assembly. 
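As a reading aid for the first converted case: the __naked assembly encodes roughly the C-level program sketched below. This restatement is an editorial illustration only (the function name is invented; skb->sk, bpf_sk_fullsock() and struct bpf_sock are the regular in-tree definitions), and the tests themselves remain __naked inline assembly, which keeps the exact instruction sequence the verifier sees under control.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

SEC("cgroup/skb")
int jne_infer_not_null_sketch(struct __sk_buff *skb)
{
	struct bpf_sock *sk = skb->sk;
	struct bpf_sock *r7, *r0;

	if (!sk)
		return 0;
	r7 = bpf_sk_fullsock(sk);   /* may be NULL                                */
	r0 = bpf_sk_fullsock(sk);   /* may be NULL                                */
	if (!r0)
		return 0;           /* (a) r0 known non-NULL past this point      */
	if (r0 != r7)
		return 0;           /* (b) JNE false branch: r7 must equal r0     */
	return r7->type ? 1 : 0;    /* (c) safe only because of (a) and (b)       */
}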
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_jeq_infer_not_null.c | 213 ++++++++++++++++++ .../bpf/verifier/jeq_infer_not_null.c | 174 -------------- 3 files changed, 215 insertions(+), 174 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c delete mode 100644 tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 2c9e61b9a83e..de5db0de98a1 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -29,6 +29,7 @@ #include "verifier_helper_restricted.skel.h" #include "verifier_helper_value_access.skel.h" #include "verifier_int_ptr.skel.h" +#include "verifier_jeq_infer_not_null.skel.h" #include "verifier_ld_ind.skel.h" #include "verifier_leak_ptr.skel.h" #include "verifier_map_ptr.skel.h" @@ -109,6 +110,7 @@ void test_verifier_helper_packet_access(void) { RUN(verifier_helper_packet_acces void test_verifier_helper_restricted(void) { RUN(verifier_helper_restricted); } void test_verifier_helper_value_access(void) { RUN(verifier_helper_value_access); } void test_verifier_int_ptr(void) { RUN(verifier_int_ptr); } +void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not_null); } void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } void test_verifier_map_ptr(void) { RUN(verifier_map_ptr); } diff --git a/tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c b/tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c new file mode 100644 index 000000000000..bf16b00502f2 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c @@ -0,0 +1,213 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c */ + +#include +#include +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_XSKMAP); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); +} map_xskmap SEC(".maps"); + +/* This is equivalent to the following program: + * + * r6 = skb->sk; + * r7 = sk_fullsock(r6); + * r0 = sk_fullsock(r6); + * if (r0 == 0) return 0; (a) + * if (r0 != r7) return 0; (b) + * *r7->type; (c) + * return 0; + * + * It is safe to dereference r7 at point (c), because of (a) and (b). + * The test verifies that relation r0 == r7 is propagated from (b) to (c). + */ +SEC("cgroup/skb") +__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JNE false branch") +__success __failure_unpriv __msg_unpriv("R7 pointer comparison") +__retval(0) +__naked void socket_for_jne_false_branch(void) +{ + asm volatile (" \ + /* r6 = skb->sk; */ \ + r6 = *(u64*)(r1 + %[__sk_buff_sk]); \ + /* if (r6 == 0) return 0; */ \ + if r6 == 0 goto l0_%=; \ + /* r7 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + r7 = r0; \ + /* r0 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + /* if (r0 == null) return 0; */ \ + if r0 == 0 goto l0_%=; \ + /* if (r0 == r7) r0 = *(r7->type); */ \ + if r0 != r7 goto l0_%=; /* Use ! JNE ! 
*/\ + r0 = *(u32*)(r7 + %[bpf_sock_type]); \ +l0_%=: /* return 0 */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +/* Same as above, but verify that another branch of JNE still + * prohibits access to PTR_MAYBE_NULL. + */ +SEC("cgroup/skb") +__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JNE true branch") +__failure __msg("R7 invalid mem access 'sock_or_null'") +__failure_unpriv __msg_unpriv("R7 pointer comparison") +__naked void unchanged_for_jne_true_branch(void) +{ + asm volatile (" \ + /* r6 = skb->sk */ \ + r6 = *(u64*)(r1 + %[__sk_buff_sk]); \ + /* if (r6 == 0) return 0; */ \ + if r6 == 0 goto l0_%=; \ + /* r7 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + r7 = r0; \ + /* r0 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + /* if (r0 == null) return 0; */ \ + if r0 != 0 goto l0_%=; \ + /* if (r0 == r7) return 0; */ \ + if r0 != r7 goto l1_%=; /* Use ! JNE ! */\ + goto l0_%=; \ +l1_%=: /* r0 = *(r7->type); */ \ + r0 = *(u32*)(r7 + %[bpf_sock_type]); \ +l0_%=: /* return 0 */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +/* Same as a first test, but not null should be inferred for JEQ branch */ +SEC("cgroup/skb") +__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JEQ true branch") +__success __failure_unpriv __msg_unpriv("R7 pointer comparison") +__retval(0) +__naked void socket_for_jeq_true_branch(void) +{ + asm volatile (" \ + /* r6 = skb->sk; */ \ + r6 = *(u64*)(r1 + %[__sk_buff_sk]); \ + /* if (r6 == null) return 0; */ \ + if r6 == 0 goto l0_%=; \ + /* r7 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + r7 = r0; \ + /* r0 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + /* if (r0 == null) return 0; */ \ + if r0 == 0 goto l0_%=; \ + /* if (r0 != r7) return 0; */ \ + if r0 == r7 goto l1_%=; /* Use ! JEQ ! */\ + goto l0_%=; \ +l1_%=: /* r0 = *(r7->type); */ \ + r0 = *(u32*)(r7 + %[bpf_sock_type]); \ +l0_%=: /* return 0; */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +/* Same as above, but verify that another branch of JNE still + * prohibits access to PTR_MAYBE_NULL. + */ +SEC("cgroup/skb") +__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JEQ false branch") +__failure __msg("R7 invalid mem access 'sock_or_null'") +__failure_unpriv __msg_unpriv("R7 pointer comparison") +__naked void unchanged_for_jeq_false_branch(void) +{ + asm volatile (" \ + /* r6 = skb->sk; */ \ + r6 = *(u64*)(r1 + %[__sk_buff_sk]); \ + /* if (r6 == null) return 0; */ \ + if r6 == 0 goto l0_%=; \ + /* r7 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + r7 = r0; \ + /* r0 = sk_fullsock(skb); */ \ + r1 = r6; \ + call %[bpf_sk_fullsock]; \ + /* if (r0 == null) return 0; */ \ + if r0 == 0 goto l0_%=; \ + /* if (r0 != r7) r0 = *(r7->type); */ \ + if r0 == r7 goto l0_%=; /* Use ! JEQ ! 
*/\ + r0 = *(u32*)(r7 + %[bpf_sock_type]); \ +l0_%=: /* return 0; */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +/* Maps are treated in a different branch of `mark_ptr_not_null_reg`, + * so separate test for maps case. + */ +SEC("xdp") +__description("jne/jeq infer not null, PTR_TO_MAP_VALUE_OR_NULL -> PTR_TO_MAP_VALUE") +__success __retval(0) +__naked void null_ptr_to_map_value(void) +{ + asm volatile (" \ + /* r9 = &some stack to use as key */ \ + r1 = 0; \ + *(u32*)(r10 - 8) = r1; \ + r9 = r10; \ + r9 += -8; \ + /* r8 = process local map */ \ + r8 = %[map_xskmap] ll; \ + /* r6 = map_lookup_elem(r8, r9); */ \ + r1 = r8; \ + r2 = r9; \ + call %[bpf_map_lookup_elem]; \ + r6 = r0; \ + /* r7 = map_lookup_elem(r8, r9); */ \ + r1 = r8; \ + r2 = r9; \ + call %[bpf_map_lookup_elem]; \ + r7 = r0; \ + /* if (r6 == 0) return 0; */ \ + if r6 == 0 goto l0_%=; \ + /* if (r6 != r7) return 0; */ \ + if r6 != r7 goto l0_%=; \ + /* read *r7; */ \ + r0 = *(u32*)(r7 + %[bpf_xdp_sock_queue_id]); \ +l0_%=: /* return 0; */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_xskmap), + __imm_const(bpf_xdp_sock_queue_id, offsetof(struct bpf_xdp_sock, queue_id)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c b/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c deleted file mode 100644 index 67a1c07ead34..000000000000 --- a/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c +++ /dev/null @@ -1,174 +0,0 @@ -{ - /* This is equivalent to the following program: - * - * r6 = skb->sk; - * r7 = sk_fullsock(r6); - * r0 = sk_fullsock(r6); - * if (r0 == 0) return 0; (a) - * if (r0 != r7) return 0; (b) - * *r7->type; (c) - * return 0; - * - * It is safe to dereference r7 at point (c), because of (a) and (b). - * The test verifies that relation r0 == r7 is propagated from (b) to (c). - */ - "jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JNE false branch", - .insns = { - /* r6 = skb->sk; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)), - /* if (r6 == 0) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 8), - /* r7 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* r0 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - /* if (r0 == null) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - /* if (r0 == r7) r0 = *(r7->type); */ - BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)), - /* return 0 */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R7 pointer comparison", -}, -{ - /* Same as above, but verify that another branch of JNE still - * prohibits access to PTR_MAYBE_NULL. 
- */ - "jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JNE true branch", - .insns = { - /* r6 = skb->sk */ - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)), - /* if (r6 == 0) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 9), - /* r7 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* r0 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - /* if (r0 == null) return 0; */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - /* if (r0 == r7) return 0; */ - BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */ - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - /* r0 = *(r7->type); */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)), - /* return 0 */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "R7 invalid mem access 'sock_or_null'", - .result_unpriv = REJECT, - .errstr_unpriv = "R7 pointer comparison", -}, -{ - /* Same as a first test, but not null should be inferred for JEQ branch */ - "jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JEQ true branch", - .insns = { - /* r6 = skb->sk; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)), - /* if (r6 == null) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 9), - /* r7 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* r0 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - /* if (r0 == null) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - /* if (r0 != r7) return 0; */ - BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */ - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - /* r0 = *(r7->type); */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)), - /* return 0; */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R7 pointer comparison", -}, -{ - /* Same as above, but verify that another branch of JNE still - * prohibits access to PTR_MAYBE_NULL. - */ - "jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JEQ false branch", - .insns = { - /* r6 = skb->sk; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)), - /* if (r6 == null) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 8), - /* r7 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* r0 = sk_fullsock(skb); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - /* if (r0 == null) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - /* if (r0 != r7) r0 = *(r7->type); */ - BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)), - /* return 0; */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "R7 invalid mem access 'sock_or_null'", - .result_unpriv = REJECT, - .errstr_unpriv = "R7 pointer comparison", -}, -{ - /* Maps are treated in a different branch of `mark_ptr_not_null_reg`, - * so separate test for maps case. 
- */ - "jne/jeq infer not null, PTR_TO_MAP_VALUE_OR_NULL -> PTR_TO_MAP_VALUE", - .insns = { - /* r9 = &some stack to use as key */ - BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_9, -8), - /* r8 = process local map */ - BPF_LD_MAP_FD(BPF_REG_8, 0), - /* r6 = map_lookup_elem(r8, r9); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_9), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* r7 = map_lookup_elem(r8, r9); */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_9), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* if (r6 == 0) return 0; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2), - /* if (r6 != r7) return 0; */ - BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1), - /* read *r7; */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_xdp_sock, queue_id)), - /* return 0; */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_xskmap = { 3 }, - .prog_type = BPF_PROG_TYPE_XDP, - .result = ACCEPT, -}, From patchwork Fri Apr 21 17:42:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220516 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C135EC7EE20 for ; Fri, 21 Apr 2023 17:43:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229805AbjDURn2 (ORCPT ); Fri, 21 Apr 2023 13:43:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233355AbjDURnM (ORCPT ); Fri, 21 Apr 2023 13:43:12 -0400 Received: from mail-wm1-x335.google.com (mail-wm1-x335.google.com [IPv6:2a00:1450:4864:20::335]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD1D759C9 for ; Fri, 21 Apr 2023 10:42:57 -0700 (PDT) Received: by mail-wm1-x335.google.com with SMTP id 5b1f17b1804b1-3f195b164c4so3999745e9.1 for ; Fri, 21 Apr 2023 10:42:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098976; x=1684690976; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FtHegdPWnCoIIvj3+l4JBdWz9kJSGRAe8dNbroWWinI=; b=O06I/38hNs7T5SlnDyk9JfEPJ6txHOmp1g+wrro+zP6HiR8dUXSnsCuFPoLsdv1wcU N+xBb1PeZt0uMsMZrztqkocOybDvwTWB7bkTVAZSMqZUFoNBTRnnBiEG/nbnS9SSwtOA racdFnFmD3CiT1hbLb27eYLI7MS7+oRqIhNMs2fosRpI0O/aVOOfALNn8va7gLPIZn+O lP7duSnAcjXgemwOaFdNrXUuPO0VlDtwsjFP8b08rXvWJ3HKWL4VFXa4ZuddbsD1upoY RDmEvaKuey4uPZBfBZ43kzfnCt4Sldpjp2qmq4+6H3WaYSfnDpeonHfpw6N3cQ2H3/U0 hoFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098976; x=1684690976; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FtHegdPWnCoIIvj3+l4JBdWz9kJSGRAe8dNbroWWinI=; b=Byxw1jfcckmqylS+mzkRe5+DGLFHPs2ama2cGL+s/MkzhZRC0tceTfezBDek5gJYkd Hqr9SDlgoMkBqB/3TYaXTHxFBnOGJTSO+G5UZNubB8d1aHeh2vbjFIX34FKm78mtVXN4 LJ1yBTWgFYnzDM4UslXD9Lmpx44Tpqdm+dIL8d2rkxJAOpnDteuqHIAqt2SMwIpzkRpI 
wO9KgOJUyfUFAI15JWK+0WwYrTIN2z8fHxiTKWLTUMim/mNmHCORK/w+ufoMXdwLmtwR ymlKt22o19L4sh5YUbYyJPBefv3CoZo9NL7ujMweLG+Z7+cZCtof+bmPGOs/VGrM+aQs uAlg== X-Gm-Message-State: AAQBX9fgAOHRjqpAw8EyFF+FMZNdjMtCIoSaKfY9A39EBEQeggFdjUZ7 x+D/uDD4chhhjGyUzdipWX2YC7zLu5OVjw== X-Google-Smtp-Source: AKy350aaYkGYJJkEaBcJiC90JIhfcwPq2MlV8RRcE++BNT9V2XJP807fc1QBGefkqtuGK8q6OxJvLg== X-Received: by 2002:a7b:c38e:0:b0:3f1:718d:a21c with SMTP id s14-20020a7bc38e000000b003f1718da21cmr2609436wmj.31.1682098975917; Fri, 21 Apr 2023 10:42:55 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:55 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 09/24] selftests/bpf: verifier/loops1 converted to inline assembly Date: Fri, 21 Apr 2023 20:42:19 +0300 Message-Id: <20230421174234.2391278-10-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/loops1 automatically converted to use inline assembly. There are a few modifications for the converted tests. "tracepoint" programs do not support test execution, so the program type is changed to "xdp" (which supports test execution) for the following tests that have __retval tags: - bounded loop, count to 4 - bounded loop containing a forward jump Also, the __retval tag is removed for the test: - bounded loop, count from positive unknown to 4 as its return value is a random number. 
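For example, after conversion the "bounded loop, count to 4" test carries its program type and expected return value as annotations (excerpt from the new progs/verifier_loops1.c added below):

  SEC("xdp")
  __description("bounded loop, count to 4")
  __success __retval(4)
  __naked void bounded_loop_count_to_4(void)
  {
      asm volatile ("             \
      r0 = 0;                     \
  l0_%=: r0 += 1;                 \
      if r0 < 4 goto l0_%=;       \
      exit;                       \
  " ::: __clobber_all);
  }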
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_loops1.c | 259 ++++++++++++++++++ tools/testing/selftests/bpf/verifier/loops1.c | 206 -------------- 3 files changed, 261 insertions(+), 206 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_loops1.c delete mode 100644 tools/testing/selftests/bpf/verifier/loops1.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index de5db0de98a1..33a50dbc2321 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -32,6 +32,7 @@ #include "verifier_jeq_infer_not_null.skel.h" #include "verifier_ld_ind.skel.h" #include "verifier_leak_ptr.skel.h" +#include "verifier_loops1.skel.h" #include "verifier_map_ptr.skel.h" #include "verifier_map_ret_val.skel.h" #include "verifier_masking.skel.h" @@ -113,6 +114,7 @@ void test_verifier_int_ptr(void) { RUN(verifier_int_ptr); } void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not_null); } void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } +void test_verifier_loops1(void) { RUN(verifier_loops1); } void test_verifier_map_ptr(void) { RUN(verifier_map_ptr); } void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } void test_verifier_masking(void) { RUN(verifier_masking); } diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c new file mode 100644 index 000000000000..5bc86af80a9a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c @@ -0,0 +1,259 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/loops1.c */ + +#include +#include +#include "bpf_misc.h" + +SEC("xdp") +__description("bounded loop, count to 4") +__success __retval(4) +__naked void bounded_loop_count_to_4(void) +{ + asm volatile (" \ + r0 = 0; \ +l0_%=: r0 += 1; \ + if r0 < 4 goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("bounded loop, count to 20") +__success +__naked void bounded_loop_count_to_20(void) +{ + asm volatile (" \ + r0 = 0; \ +l0_%=: r0 += 3; \ + if r0 < 20 goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("bounded loop, count from positive unknown to 4") +__success +__naked void from_positive_unknown_to_4(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + if r0 s< 0 goto l0_%=; \ +l1_%=: r0 += 1; \ + if r0 < 4 goto l1_%=; \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("tracepoint") +__description("bounded loop, count from totally unknown to 4") +__success +__naked void from_totally_unknown_to_4(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ +l0_%=: r0 += 1; \ + if r0 < 4 goto l0_%=; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("tracepoint") +__description("bounded loop, count to 4 with equality") +__success +__naked void count_to_4_with_equality(void) +{ + asm volatile (" \ + r0 = 0; \ +l0_%=: r0 += 1; \ + if r0 != 4 goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("bounded loop, start in the middle") +__failure __msg("back-edge") +__naked void loop_start_in_the_middle(void) +{ + asm volatile (" \ + r0 = 0; \ + goto l0_%=; \ +l1_%=: r0 += 1; \ +l0_%=: if r0 < 4 goto l1_%=; \ + exit; \ +" ::: 
__clobber_all); +} + +SEC("xdp") +__description("bounded loop containing a forward jump") +__success __retval(4) +__naked void loop_containing_a_forward_jump(void) +{ + asm volatile (" \ + r0 = 0; \ +l1_%=: r0 += 1; \ + if r0 == r0 goto l0_%=; \ +l0_%=: if r0 < 4 goto l1_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("bounded loop that jumps out rather than in") +__success +__naked void jumps_out_rather_than_in(void) +{ + asm volatile (" \ + r6 = 0; \ +l1_%=: r6 += 1; \ + if r6 > 10000 goto l0_%=; \ + call %[bpf_get_prandom_u32]; \ + goto l1_%=; \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("tracepoint") +__description("infinite loop after a conditional jump") +__failure __msg("program is too large") +__naked void loop_after_a_conditional_jump(void) +{ + asm volatile (" \ + r0 = 5; \ + if r0 < 4 goto l0_%=; \ +l1_%=: r0 += 1; \ + goto l1_%=; \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("bounded recursion") +__failure __msg("back-edge") +__naked void bounded_recursion(void) +{ + asm volatile (" \ + r1 = 0; \ + call bounded_recursion__1; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void bounded_recursion__1(void) +{ + asm volatile (" \ + r1 += 1; \ + r0 = r1; \ + if r1 < 4 goto l0_%=; \ + exit; \ +l0_%=: call bounded_recursion__1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("infinite loop in two jumps") +__failure __msg("loop detected") +__naked void infinite_loop_in_two_jumps(void) +{ + asm volatile (" \ + r0 = 0; \ +l1_%=: goto l0_%=; \ +l0_%=: if r0 < 4 goto l1_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("infinite loop: three-jump trick") +__failure __msg("loop detected") +__naked void infinite_loop_three_jump_trick(void) +{ + asm volatile (" \ + r0 = 0; \ +l2_%=: r0 += 1; \ + r0 &= 1; \ + if r0 < 2 goto l0_%=; \ + exit; \ +l0_%=: r0 += 1; \ + r0 &= 1; \ + if r0 < 2 goto l1_%=; \ + exit; \ +l1_%=: r0 += 1; \ + r0 &= 1; \ + if r0 < 2 goto l2_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("xdp") +__description("not-taken loop with back jump to 1st insn") +__success __retval(123) +__naked void back_jump_to_1st_insn_1(void) +{ + asm volatile (" \ +l0_%=: r0 = 123; \ + if r0 == 4 goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("xdp") +__description("taken loop with back jump to 1st insn") +__success __retval(55) +__naked void back_jump_to_1st_insn_2(void) +{ + asm volatile (" \ + r1 = 10; \ + r2 = 0; \ + call back_jump_to_1st_insn_2__1; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void back_jump_to_1st_insn_2__1(void) +{ + asm volatile (" \ +l0_%=: r2 += r1; \ + r1 -= 1; \ + if r1 != 0 goto l0_%=; \ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("xdp") +__description("taken loop with back jump to 1st insn, 2") +__success __retval(55) +__naked void jump_to_1st_insn_2(void) +{ + asm volatile (" \ + r1 = 10; \ + r2 = 0; \ + call jump_to_1st_insn_2__1; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void jump_to_1st_insn_2__1(void) +{ + asm volatile (" \ +l0_%=: r2 += r1; \ + r1 -= 1; \ + if w1 != 0 goto l0_%=; \ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/loops1.c b/tools/testing/selftests/bpf/verifier/loops1.c deleted file mode 100644 index 1af37187dc12..000000000000 --- 
a/tools/testing/selftests/bpf/verifier/loops1.c +++ /dev/null @@ -1,206 +0,0 @@ -{ - "bounded loop, count to 4", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .retval = 4, -}, -{ - "bounded loop, count to 20", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 20, -2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bounded loop, count from positive unknown to 4", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_0, 0, 2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .retval = 4, -}, -{ - "bounded loop, count from totally unknown to 4", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bounded loop, count to 4 with equality", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bounded loop, start in the middle", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_A(1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "back-edge", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .retval = 4, -}, -{ - "bounded loop containing a forward jump", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -3), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .retval = 4, -}, -{ - "bounded loop that jumps out rather than in", - .insns = { - BPF_MOV64_IMM(BPF_REG_6, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 10000, 2), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_JMP_A(-4), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "infinite loop after a conditional jump", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 5), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, 2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_JMP_A(-2), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "program is too large", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bounded recursion", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 4, 1), - BPF_EXIT_INSN(), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -5), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "back-edge", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "infinite loop in two jumps", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_A(0), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "loop detected", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "infinite 
loop: three-jump trick", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1), - BPF_EXIT_INSN(), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, -11), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "loop detected", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "not-taken loop with back jump to 1st insn", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 123), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, -2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .retval = 123, -}, -{ - "taken loop with back jump to 1st insn", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -3), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .retval = 55, -}, -{ - "taken loop with back jump to 1st insn, 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1), - BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, -3), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .retval = 55, -}, From patchwork Fri Apr 21 17:42:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220517 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6263DC7618E for ; Fri, 21 Apr 2023 17:43:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231368AbjDURn3 (ORCPT ); Fri, 21 Apr 2023 13:43:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41810 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233387AbjDURnO (ORCPT ); Fri, 21 Apr 2023 13:43:14 -0400 Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com [IPv6:2a00:1450:4864:20::32a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 106523C0D for ; Fri, 21 Apr 2023 10:42:59 -0700 (PDT) Received: by mail-wm1-x32a.google.com with SMTP id 5b1f17b1804b1-3f1958d3a85so6074115e9.1 for ; Fri, 21 Apr 2023 10:42:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098977; x=1684690977; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=zdSJ7QNaxCCf8E/q7TTuV1MRzT2kE3PDVLFsNdsykUE=; b=J5qwdrtXlmuQKu+vSSMc4uI+8xiJbAZKPJ1YNkrw6Hgmih0fM7ESCOGBRgmPIEmHKh Xf3gErxPIzslbxISRacI74e2oOhuaZcGGbnBDDO7dv7b5W9r+4Gcd2LJi79HgzqvN4O+ r2T2HtwlbMpkdlOUd9CYeqBrPk7gmlpRpiesWIU2/zbmSzbNbMLGg2k4qTNiypiCM07E /fB6sOGO/W9zvj765oNoVIhxguyeFLlbdnj5oxC1g6ld62pMlglNy7XpGXE4wgMzB/1a 
v4Hu75BndHj3hVH0Jl733jgIejTryBf98RY6MBtbNXQDB0ikr0SrKKDhekqWtnWn4Vcm Popw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098977; x=1684690977; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=zdSJ7QNaxCCf8E/q7TTuV1MRzT2kE3PDVLFsNdsykUE=; b=SoxOV/+ZGtP2dhogEagQYmDN49tyAIB0CWbYOosZN+G+Uevtgl/MRB7UlTSPbTsZOq 1OkkOWwxwl5cdKHyh8YpkAN56qiZTEUJmC1Qfvxg9FBnA3q5+yBqXrA0nmQfVhuep+yE X4HuQ07TnL5k3lZ61XLbkEraqeS7VUfAkKICuH95ce838YgxMIzgQNAHXJL25ovzgKM2 5bmpFEu8ZQ3M2HTwCabJ/vrFD+gzq22fmJZy/blQe4H49veCgM00SZJR9ODT4nV+9IsB Xal9u+Y64lKOnJqksGFWZU57U934FYgb+tCOqZN+x9zQjKxjaXuGJh94e5r/PJddI3BU +S7A== X-Gm-Message-State: AAQBX9cneq50Cv4z8PeUysssZdk8W1KnQlm3qXlYhf82Oeb27rQr7wTD Sz7PaHBPRIYnRgVYu40hNPkEVpThlii7KA== X-Google-Smtp-Source: AKy350YOjfG+/3B0zlmjr6QJ7TNqN7zByFm33BVIqKu/eQxwRTfGWsVGxdw68brROtffMzLPv6AMKQ== X-Received: by 2002:a1c:4c09:0:b0:3f0:49b5:f0ce with SMTP id z9-20020a1c4c09000000b003f049b5f0cemr2501931wmf.12.1682098977098; Fri, 21 Apr 2023 10:42:57 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:56 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 10/24] selftests/bpf: verifier/lwt converted to inline assembly Date: Fri, 21 Apr 2023 20:42:20 +0300 Message-Id: <20230421174234.2391278-11-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/lwt automatically converted to use inline assembly. 
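All direct packet access tests in this file share the same bounds-check shape; only the section (lwt_in/lwt_out/lwt_xmit) and the verdict annotations differ. A condensed excerpt from the converted file below:

  r2 = *(u32*)(r1 + %[__sk_buff_data]);
  r3 = *(u32*)(r1 + %[__sk_buff_data_end]);
  r0 = r2;
  r0 += 8;
  if r0 > r3 goto l0_%=;
  *(u8*)(r2 + 0) = r2;   /* write: rejected for lwt_in/lwt_out, allowed for lwt_xmit */
  l0_%=: r0 = 0;
  exit;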
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_lwt.c | 234 ++++++++++++++++++ tools/testing/selftests/bpf/verifier/lwt.c | 189 -------------- 3 files changed, 236 insertions(+), 189 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_lwt.c delete mode 100644 tools/testing/selftests/bpf/verifier/lwt.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 33a50dbc2321..54c30fe1b693 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -33,6 +33,7 @@ #include "verifier_ld_ind.skel.h" #include "verifier_leak_ptr.skel.h" #include "verifier_loops1.skel.h" +#include "verifier_lwt.skel.h" #include "verifier_map_ptr.skel.h" #include "verifier_map_ret_val.skel.h" #include "verifier_masking.skel.h" @@ -115,6 +116,7 @@ void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not_null) void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } void test_verifier_loops1(void) { RUN(verifier_loops1); } +void test_verifier_lwt(void) { RUN(verifier_lwt); } void test_verifier_map_ptr(void) { RUN(verifier_map_ptr); } void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } void test_verifier_masking(void) { RUN(verifier_masking); } diff --git a/tools/testing/selftests/bpf/progs/verifier_lwt.c b/tools/testing/selftests/bpf/progs/verifier_lwt.c new file mode 100644 index 000000000000..5ab746307309 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_lwt.c @@ -0,0 +1,234 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/lwt.c */ + +#include +#include +#include "bpf_misc.h" + +SEC("lwt_in") +__description("invalid direct packet write for LWT_IN") +__failure __msg("cannot write into packet") +__naked void packet_write_for_lwt_in(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_out") +__description("invalid direct packet write for LWT_OUT") +__failure __msg("cannot write into packet") +__naked void packet_write_for_lwt_out(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_xmit") +__description("direct packet write for LWT_XMIT") +__success __retval(0) +__naked void packet_write_for_lwt_xmit(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_in") +__description("direct packet read for LWT_IN") +__success __retval(0) 
+__naked void packet_read_for_lwt_in(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_out") +__description("direct packet read for LWT_OUT") +__success __retval(0) +__naked void packet_read_for_lwt_out(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_xmit") +__description("direct packet read for LWT_XMIT") +__success __retval(0) +__naked void packet_read_for_lwt_xmit(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_xmit") +__description("overlapping checks for direct packet access") +__success __retval(0) +__naked void checks_for_direct_packet_access(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r1 = r2; \ + r1 += 6; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u16*)(r2 + 6); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("lwt_xmit") +__description("make headroom for LWT_XMIT") +__success __retval(0) +__naked void make_headroom_for_lwt_xmit(void) +{ + asm volatile (" \ + r6 = r1; \ + r2 = 34; \ + r3 = 0; \ + call %[bpf_skb_change_head]; \ + /* split for s390 to succeed */ \ + r1 = r6; \ + r2 = 42; \ + r3 = 0; \ + call %[bpf_skb_change_head]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_skb_change_head) + : __clobber_all); +} + +SEC("socket") +__description("invalid access of tc_classid for LWT_IN") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void tc_classid_for_lwt_in(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]); \ + exit; \ +" : + : __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid)) + : __clobber_all); +} + +SEC("socket") +__description("invalid access of tc_classid for LWT_OUT") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void tc_classid_for_lwt_out(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]); \ + exit; \ +" : + : __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid)) + : __clobber_all); +} + +SEC("socket") +__description("invalid access of tc_classid for LWT_XMIT") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void tc_classid_for_lwt_xmit(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]); \ + exit; \ +" : + : __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, 
tc_classid)) + : __clobber_all); +} + +SEC("lwt_in") +__description("check skb->tc_classid half load not permitted for lwt prog") +__failure __msg("invalid bpf_context access") +__naked void not_permitted_for_lwt_prog(void) +{ + asm volatile ( + "r0 = 0;" +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + "r0 = *(u16*)(r1 + %[__sk_buff_tc_classid]);" +#else + "r0 = *(u16*)(r1 + %[__imm_0]);" +#endif + "exit;" + : + : __imm_const(__imm_0, offsetof(struct __sk_buff, tc_classid) + 2), + __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/lwt.c b/tools/testing/selftests/bpf/verifier/lwt.c deleted file mode 100644 index 5c8944d0b091..000000000000 --- a/tools/testing/selftests/bpf/verifier/lwt.c +++ /dev/null @@ -1,189 +0,0 @@ -{ - "invalid direct packet write for LWT_IN", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "cannot write into packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "invalid direct packet write for LWT_OUT", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "cannot write into packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_OUT, -}, -{ - "direct packet write for LWT_XMIT", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_XMIT, -}, -{ - "direct packet read for LWT_IN", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "direct packet read for LWT_OUT", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_OUT, -}, -{ - "direct packet read for LWT_XMIT", - .insns = 
{ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_XMIT, -}, -{ - "overlapping checks for direct packet access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_XMIT, -}, -{ - "make headroom for LWT_XMIT", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_2, 34), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_skb_change_head), - /* split for s390 to succeed */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 42), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_skb_change_head), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_XMIT, -}, -{ - "invalid access of tc_classid for LWT_IN", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tc_classid)), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", -}, -{ - "invalid access of tc_classid for LWT_OUT", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tc_classid)), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", -}, -{ - "invalid access of tc_classid for LWT_XMIT", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tc_classid)), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", -}, -{ - "check skb->tc_classid half load not permitted for lwt prog", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), -#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tc_classid)), -#else - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tc_classid) + 2), -#endif - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, From patchwork Fri Apr 21 17:42:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220519 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63FD1C77B78 for ; Fri, 21 Apr 2023 17:43:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232526AbjDURn3 (ORCPT ); Fri, 21 Apr 2023 13:43:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40762 "EHLO lindbergh.monkeyblade.net" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233393AbjDURnP (ORCPT ); Fri, 21 Apr 2023 13:43:15 -0400 Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com [IPv6:2a00:1450:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3F5066189 for ; Fri, 21 Apr 2023 10:43:00 -0700 (PDT) Received: by mail-wr1-x436.google.com with SMTP id ffacd0b85a97d-2fa47de5b04so1948743f8f.1 for ; Fri, 21 Apr 2023 10:43:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098978; x=1684690978; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+Vgv5RzTIEnETx4FU/RyC2oSCRdoeqOGp2zsM7CpEYc=; b=cwT15TGd1pbH+rwr1oqmCBvwvRHcQRuhQWeEOc7afNYIoi7MWOyvcuzPkzj00VVHPD +LbgsNlVhlp4oE1CizXdkSTh3vN8SpDQhVK7qKOXMso1gXueTGS5YXpeqEByn+XL5z98 FhfkI8U7y0zy6VIC0nGrgfGEdPjgVW0aEZ8BtHQc3zoG8UYuAb64D1EDNEsYuOBlhqcO zWAIwDxPcO5zbi7Fe1mzNKfNrDRgMcWmldONoMV4ijNLF8H2+I8vzTuGvkv2dss8ItXS Uc4IKrEOUjpM5nrYllYf1tjeG7QWcKqseWHhJJ9FC2XEvXbavzOECOykZU9xa7bIUFNf vJug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098978; x=1684690978; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+Vgv5RzTIEnETx4FU/RyC2oSCRdoeqOGp2zsM7CpEYc=; b=lEP7WvohBFyv3dQITje9oGm1ObWyZMzDqKCiiYyDFyDAr3pipMPRFBEvlfd7vQ9K+n CM1bOE/EjvZBOFSx0pwyoPz3pyG8TaT3BiXXvz+Y2gbuJ97Eskfrw1Q17i4+fWXdflcg nY3J6X1zWJI8v87FUsUlQr7RlFNSQo3uKROMyf7Nxn+O271n2dTbBTCCedEbhnNuqB// S2lg/zyUHYLE2Yoc8I9w4gXY+9Kx6cx6vtk0TtxOG7dFI5n2ug3ip/ua9Z4yTxnZQIiy jLqHbgB0F8KIht840I89q7BXNZH/enQG9x2+LH+gH8a5DwGTFBOtak1VMbW18knvgYlW SwwA== X-Gm-Message-State: AAQBX9f4CQXkHydpSHvtEKbe+zvhHwKub2Car+ctdyXhgtmW3JQ0xk29 4KsX65ts/FBPXDUdd8XGVBPqpsw54P1R6g== X-Google-Smtp-Source: AKy350YXfJRlWXsga29Qo9Mh5gJBidg2zM6d54DvUH5Jemsu5guIUaymzeyPBaGcqf9rLw+MbjwH/A== X-Received: by 2002:a5d:65c5:0:b0:2f0:2d92:e6fd with SMTP id e5-20020a5d65c5000000b002f02d92e6fdmr4701010wrw.50.1682098978373; Fri, 21 Apr 2023 10:42:58 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:57 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 11/24] selftests/bpf: verifier/map_in_map converted to inline assembly Date: Fri, 21 Apr 2023 20:42:21 +0300 Message-Id: <20230421174234.2391278-12-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/map_in_map automatically converted to use inline assembly. 
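The inner/outer map pair used by these tests is now declared as a BTF-defined map in the converted file (excerpt from the diff below), replacing the fixup_map_in_map offsets used by the macro-based tests:

  struct {
      __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
      __uint(max_entries, 1);
      __type(key, int);
      __type(value, int);
      __array(values, struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, int);
      });
  } map_in_map SEC(".maps");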
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_map_in_map.c | 142 ++++++++++++++++++ .../selftests/bpf/verifier/map_in_map.c | 96 ------------ 3 files changed, 144 insertions(+), 96 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_map_in_map.c delete mode 100644 tools/testing/selftests/bpf/verifier/map_in_map.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 54c30fe1b693..95fc9cb231ad 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -34,6 +34,7 @@ #include "verifier_leak_ptr.skel.h" #include "verifier_loops1.skel.h" #include "verifier_lwt.skel.h" +#include "verifier_map_in_map.skel.h" #include "verifier_map_ptr.skel.h" #include "verifier_map_ret_val.skel.h" #include "verifier_masking.skel.h" @@ -117,6 +118,7 @@ void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } void test_verifier_loops1(void) { RUN(verifier_loops1); } void test_verifier_lwt(void) { RUN(verifier_lwt); } +void test_verifier_map_in_map(void) { RUN(verifier_map_in_map); } void test_verifier_map_ptr(void) { RUN(verifier_map_ptr); } void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } void test_verifier_masking(void) { RUN(verifier_masking); } diff --git a/tools/testing/selftests/bpf/progs/verifier_map_in_map.c b/tools/testing/selftests/bpf/progs/verifier_map_in_map.c new file mode 100644 index 000000000000..4eaab1468eb7 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_map_in_map.c @@ -0,0 +1,142 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/map_in_map.c */ + +#include +#include +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); + __array(values, struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); + }); +} map_in_map SEC(".maps"); + +SEC("socket") +__description("map in map access") +__success __success_unpriv __retval(0) +__naked void map_in_map_access(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_in_map] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = r0; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_in_map) + : __clobber_all); +} + +SEC("xdp") +__description("map in map state pruning") +__success __msg("processed 26 insns") +__log_level(2) __retval(0) __flag(BPF_F_TEST_STATE_FREQ) +__naked void map_in_map_state_pruning(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r6 = r10; \ + r6 += -4; \ + r2 = r6; \ + r1 = %[map_in_map] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r2 = r6; \ + r1 = r0; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l1_%=; \ + r2 = r6; \ + r1 = %[map_in_map] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l2_%=; \ + exit; \ +l2_%=: r2 = r6; \ + r1 = r0; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r0 = *(u32*)(r0 + 0); \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_in_map) + : __clobber_all); +} + +SEC("socket") 
+__description("invalid inner map pointer") +__failure __msg("R1 pointer arithmetic on map_ptr prohibited") +__failure_unpriv +__naked void invalid_inner_map_pointer(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_in_map] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = r0; \ + r1 += 8; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_in_map) + : __clobber_all); +} + +SEC("socket") +__description("forgot null checking on the inner map pointer") +__failure __msg("R1 type=map_value_or_null expected=map_ptr") +__failure_unpriv +__naked void on_the_inner_map_pointer(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_in_map] ll; \ + call %[bpf_map_lookup_elem]; \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = r0; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_in_map) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/map_in_map.c b/tools/testing/selftests/bpf/verifier/map_in_map.c deleted file mode 100644 index 128a348b762d..000000000000 --- a/tools/testing/selftests/bpf/verifier/map_in_map.c +++ /dev/null @@ -1,96 +0,0 @@ -{ - "map in map access", - .insns = { - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_in_map = { 3 }, - .result = ACCEPT, -}, -{ - "map in map state pruning", - .insns = { - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_6), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 11), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_6), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_in_map = { 4, 14 }, - .flags = BPF_F_TEST_STATE_FREQ, - .result = VERBOSE_ACCEPT, - .errstr = "processed 25 insns", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "invalid inner map pointer", - .insns = { - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 
BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_in_map = { 3 }, - .errstr = "R1 pointer arithmetic on map_ptr prohibited", - .result = REJECT, -}, -{ - "forgot null checking on the inner map pointer", - .insns = { - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_ST_MEM(0, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_in_map = { 3 }, - .errstr = "R1 type=map_value_or_null expected=map_ptr", - .result = REJECT, -}, From patchwork Fri Apr 21 17:42:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220520 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05C86C77B7E for ; Fri, 21 Apr 2023 17:43:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232578AbjDURna (ORCPT ); Fri, 21 Apr 2023 13:43:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41834 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229534AbjDURnP (ORCPT ); Fri, 21 Apr 2023 13:43:15 -0400 Received: from mail-wr1-x42f.google.com (mail-wr1-x42f.google.com [IPv6:2a00:1450:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5FA9B2696 for ; Fri, 21 Apr 2023 10:43:01 -0700 (PDT) Received: by mail-wr1-x42f.google.com with SMTP id ffacd0b85a97d-2f9b9aa9d75so1301622f8f.0 for ; Fri, 21 Apr 2023 10:43:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098979; x=1684690979; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JRBL3rV1B0pyVlX06vjzxhZo31DWIZom/BS3dYyDGt8=; b=MgEE1L43JP9EDZyxUJmro1fpl5nhYxlCkD3TiTYVCa6kAq1iySymaP50xR3rO8euSc gMOtvbMOdV3mR5Emk1O/xbVBo/BqQt9dc/SUg+fEdyYPIJkdCiiNJCvtUTrnDyMTuF4v EdtcJRTPXW7/n5988U+ew420kHNXtqnmxMqyHT/+q6zfGen5dBl1Rj2bs2nLDhzPyH4I 37sAlRfo3msDTME4QFgFGAFijMnxpQ0BP3+VSdwXSx5tRYe2E6A6IhzB39WJtmIprlmT AjxXw1mz0pHg2lN8DgLrWLbOlD+OkJA+FJILH1+AVCf8EbDAOME0VPMSxp6evLoQ1zJF gmHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098979; x=1684690979; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JRBL3rV1B0pyVlX06vjzxhZo31DWIZom/BS3dYyDGt8=; b=jWKK8SFm+XMKUo9L5FNvQLHLik64KrlY8fiAbmsdja729BYdciqS/u/FTp4b959bNu V/e74ecBs+aofvTwQCzxpb2cM/bVLkRXpGVPRIrwZ6RhlwXpgfPVvEwoAj9J/NNrRRJU 
WAvhmbWJpLGHc62kHuKWTLHWMglH6/7ppKZyDoICsfymUOchzvsmoxPGDpqrmNQho37O t54g0bOo4Ql9w4qkpkNRnV0Xi3Myc3ZIMV9KN10biiuzoK1bVDkDlw7tFwKE+lt/ph3U /PEhBLxMkxYuhTMfP1fn39B9xPX9qFA5Y3jF1wGQW9xTYec/03jDTi1PnjNN2wxlqx8n X91w== X-Gm-Message-State: AAQBX9esZm5vrsim3A4Pr6YUVsKEn4WXlg7LI59+d0jiyULoNnkAh3DB X4yLRURZ2luACUhW4hpazNKZ1Fv8FvPHlA== X-Google-Smtp-Source: AKy350ZrtifyocN9Dok9rJxyqxA2kG3AVxd3JXPgQaDV8Dr4a4tl8FJcYSB57yBVh5mq67E2gcnkAw== X-Received: by 2002:adf:e946:0:b0:2fe:d540:8c8e with SMTP id m6-20020adfe946000000b002fed5408c8emr4389389wrn.19.1682098979474; Fri, 21 Apr 2023 10:42:59 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:42:59 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 12/24] selftests/bpf: verifier/map_ptr_mixing converted to inline assembly Date: Fri, 21 Apr 2023 20:42:22 +0300 Message-Id: <20230421174234.2391278-13-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/map_ptr_mixing automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_map_ptr_mixing.c | 265 ++++++++++++++++++ .../selftests/bpf/verifier/map_ptr_mixing.c | 100 ------- 3 files changed, 267 insertions(+), 100 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c delete mode 100644 tools/testing/selftests/bpf/verifier/map_ptr_mixing.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 95fc9cb231ad..261567230dd0 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -36,6 +36,7 @@ #include "verifier_lwt.skel.h" #include "verifier_map_in_map.skel.h" #include "verifier_map_ptr.skel.h" +#include "verifier_map_ptr_mixing.skel.h" #include "verifier_map_ret_val.skel.h" #include "verifier_masking.skel.h" #include "verifier_meta_access.skel.h" @@ -120,6 +121,7 @@ void test_verifier_loops1(void) { RUN(verifier_loops1); } void test_verifier_lwt(void) { RUN(verifier_lwt); } void test_verifier_map_in_map(void) { RUN(verifier_map_in_map); } void test_verifier_map_ptr(void) { RUN(verifier_map_ptr); } +void test_verifier_map_ptr_mixing(void) { RUN(verifier_map_ptr_mixing); } void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } void test_verifier_masking(void) { RUN(verifier_masking); } void test_verifier_meta_access(void) { RUN(verifier_meta_access); } diff --git a/tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c b/tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c new file mode 100644 index 000000000000..c5a7c1ddc562 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c @@ -0,0 +1,265 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/map_ptr_mixing.c */ + +#include +#include +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct 
test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); + __array(values, struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); + }); +} map_in_map SEC(".maps"); + +void dummy_prog_42_socket(void); +void dummy_prog_24_socket(void); +void dummy_prog_loop1_socket(void); +void dummy_prog_loop2_socket(void); + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 4); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog1_socket SEC(".maps") = { + .values = { + [0] = (void *)&dummy_prog_42_socket, + [1] = (void *)&dummy_prog_loop1_socket, + [2] = (void *)&dummy_prog_24_socket, + }, +}; + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 8); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog2_socket SEC(".maps") = { + .values = { + [1] = (void *)&dummy_prog_loop2_socket, + [2] = (void *)&dummy_prog_24_socket, + [7] = (void *)&dummy_prog_42_socket, + }, +}; + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_42_socket(void) +{ + asm volatile ("r0 = 42; exit;"); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_24_socket(void) +{ + asm volatile ("r0 = 24; exit;"); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_loop1_socket(void) +{ + asm volatile (" \ + r3 = 1; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 41; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_loop2_socket(void) +{ + asm volatile (" \ + r3 = 1; \ + r2 = %[map_prog2_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 41; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog2_socket) + : __clobber_all); +} + +SEC("tc") +__description("calls: two calls returning different map pointers for lookup (hash, array)") +__success __retval(1) +__naked void pointers_for_lookup_hash_array(void) +{ + asm volatile (" \ + /* main prog */ \ + if r1 != 0 goto l0_%=; \ + call pointers_for_lookup_hash_array__1; \ + goto l1_%=; \ +l0_%=: call pointers_for_lookup_hash_array__2; \ +l1_%=: r1 = r0; \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + r2 = r10; \ + r2 += -8; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ + r0 = 1; \ +l2_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void pointers_for_lookup_hash_array__1(void) +{ + asm volatile (" \ + r0 = %[map_hash_48b] ll; \ + exit; \ +" : + : __imm_addr(map_hash_48b) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void pointers_for_lookup_hash_array__2(void) +{ + asm volatile (" \ + r0 = %[map_array_48b] ll; \ + exit; \ +" : + : __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("tc") +__description("calls: two calls returning different map pointers for lookup (hash, 
map in map)") +__failure __msg("only read from bpf_array is supported") +__naked void lookup_hash_map_in_map(void) +{ + asm volatile (" \ + /* main prog */ \ + if r1 != 0 goto l0_%=; \ + call lookup_hash_map_in_map__1; \ + goto l1_%=; \ +l0_%=: call lookup_hash_map_in_map__2; \ +l1_%=: r1 = r0; \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + r2 = r10; \ + r2 += -8; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ + r0 = 1; \ +l2_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void lookup_hash_map_in_map__1(void) +{ + asm volatile (" \ + r0 = %[map_array_48b] ll; \ + exit; \ +" : + : __imm_addr(map_array_48b) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void lookup_hash_map_in_map__2(void) +{ + asm volatile (" \ + r0 = %[map_in_map] ll; \ + exit; \ +" : + : __imm_addr(map_in_map) + : __clobber_all); +} + +SEC("socket") +__description("cond: two branches returning different map pointers for lookup (tail, tail)") +__success __failure_unpriv __msg_unpriv("tail_call abusing map_ptr") +__retval(42) +__naked void pointers_for_lookup_tail_tail_1(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_mark]); \ + if r6 != 0 goto l0_%=; \ + r2 = %[map_prog2_socket] ll; \ + goto l1_%=; \ +l0_%=: r2 = %[map_prog1_socket] ll; \ +l1_%=: r3 = 7; \ + call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_addr(map_prog2_socket), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("socket") +__description("cond: two branches returning same map pointers for lookup (tail, tail)") +__success __success_unpriv __retval(42) +__naked void pointers_for_lookup_tail_tail_2(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_mark]); \ + if r6 == 0 goto l0_%=; \ + r2 = %[map_prog2_socket] ll; \ + goto l1_%=; \ +l0_%=: r2 = %[map_prog2_socket] ll; \ +l1_%=: r3 = 7; \ + call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog2_socket), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c deleted file mode 100644 index 1f2b8c4cb26d..000000000000 --- a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c +++ /dev/null @@ -1,100 +0,0 @@ -{ - "calls: two calls returning different map pointers for lookup (hash, array)", - .insns = { - /* main prog */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_CALL_REL(11), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_CALL_REL(12), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - /* subprog 1 */ - BPF_LD_MAP_FD(BPF_REG_0, 0), - BPF_EXIT_INSN(), - /* subprog 2 */ - BPF_LD_MAP_FD(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .fixup_map_hash_48b = { 13 }, - .fixup_map_array_48b = { 16 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "calls: two calls 
returning different map pointers for lookup (hash, map in map)", - .insns = { - /* main prog */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_CALL_REL(11), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_CALL_REL(12), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - /* subprog 1 */ - BPF_LD_MAP_FD(BPF_REG_0, 0), - BPF_EXIT_INSN(), - /* subprog 2 */ - BPF_LD_MAP_FD(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .fixup_map_in_map = { 16 }, - .fixup_map_array_48b = { 13 }, - .result = REJECT, - .errstr = "only read from bpf_array is supported", -}, -{ - "cond: two branches returning different map pointers for lookup (tail, tail)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 3), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5 }, - .fixup_prog2 = { 2 }, - .result_unpriv = REJECT, - .errstr_unpriv = "tail_call abusing map_ptr", - .result = ACCEPT, - .retval = 42, -}, -{ - "cond: two branches returning same map pointers for lookup (tail, tail)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 3), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog2 = { 2, 5 }, - .result_unpriv = ACCEPT, - .result = ACCEPT, - .retval = 42, -}, From patchwork Fri Apr 21 17:42:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220522 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 008C2C77B76 for ; Fri, 21 Apr 2023 17:43:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232022AbjDURnb (ORCPT ); Fri, 21 Apr 2023 13:43:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231956AbjDURnQ (ORCPT ); Fri, 21 Apr 2023 13:43:16 -0400 Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com [IPv6:2a00:1450:4864:20::435]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A717265B5 for ; Fri, 21 Apr 2023 10:43:02 -0700 (PDT) Received: by mail-wr1-x435.google.com with SMTP id ffacd0b85a97d-2f917585b26so1860513f8f.0 for ; Fri, 21 Apr 2023 10:43:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098981; x=1684690981; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date 
:message-id:reply-to; bh=/a/QQWc/OiP7atEYTeyVdAbMeUylaoHT0DALzB1ZFXM=; b=oyK0KO7VCZoiDeTOZAQLHS7aPtTvO2EZRnv2XiMKL0LADjIp2PdpCzxSk1M9I1sSPK exaFrx7yFNW5jjzj9wtYmi+r3ZBBIDWSclEwOVHzRVxyqvUPqj+QFjIiQmqDG1M8LFtp H+Rn94ztTgkAI9lPYBKROMk5dXGLZMjHW24fBSJceGdYJw/yduvEc7xXerirwfnUtgZs Dtp3iAianRldzop/xhUsuqzIczZazSFZM4mT3xqsooio2BIgrOZasTGH1KBiKr7kqtkl 0C0B7f+PHOv0MdLbty54dASzt0VKyeJYYIaBEX2eiVc8kn0MRP+5FFwdghudUhHBgyj1 L07w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098981; x=1684690981; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/a/QQWc/OiP7atEYTeyVdAbMeUylaoHT0DALzB1ZFXM=; b=Z3Xd7VaeMnuh4K7pw1a//QFE/fzjbg8cgRDyj9vD8TkJDgqYmGCR/olR3s4cHmgBp1 nRmpfkUZ9Pm7dNg7OUvbD4OwEoHoA4E/v3uWSZbJYW3vp1ImPRmxXgalGPhcM60slTel rvE4vm0GzXUb0A8WoJyrU+TzEjcMF8M6cbe1r28UMPMnHuENoYwMqeYSKcuVzP7yRCDu IrD1nH0hZ48hqcK9hzYYr6UmLLFlOddkzOfu8t4HhiNX0y3WjAX4gLv2X7x6rGt/Wywl avnvQhvPhLfsLATzkqafswi/wyXrZygLDJTrwtJyQgnmwrGwZCVy4qzS757lTS4QrfD0 ljYA== X-Gm-Message-State: AAQBX9db/2xGMwyfo5dxRqD6aPPf7D4cJjvRPOalgLGQxdgLQoPKMN2m GsUdtmNmju2z0QpWasvSbxdiPLRqBxcBNw== X-Google-Smtp-Source: AKy350bt+3FnlGKfrmEVFWt50xiBC02Ul0futhVMXzF16Bi/pL+zKqOIQHdPhCNm1RduIV3K37VpOQ== X-Received: by 2002:a5d:526e:0:b0:2f7:8f62:1a45 with SMTP id l14-20020a5d526e000000b002f78f621a45mr4508177wrc.66.1682098980742; Fri, 21 Apr 2023 10:43:00 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.42.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:00 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 13/24] selftests/bpf: verifier/precise converted to inline assembly Date: Fri, 21 Apr 2023 20:42:23 +0300 Message-Id: <20230421174234.2391278-14-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/precise automatically converted to use inline assembly. 
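For readers comparing the two encodings, the sketch below shows what such an automatic conversion produces in general: an .insns table built from BPF_* macros becomes a __naked C function whose body is BPF inline assembly, with the old .prog_type/.result fields expressed through SEC() and the bpf_misc.h annotations. The test name here is illustrative, not one of the converted cases.

  /* Old style, tools/testing/selftests/bpf/verifier/<name>.c: */
  {
  	"precise: example",
  	.insns = {
  	BPF_MOV64_IMM(BPF_REG_0, 0),
  	BPF_EXIT_INSN(),
  	},
  	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
  	.result = ACCEPT,
  },

  /* New style, tools/testing/selftests/bpf/progs/verifier_<name>.c: */
  SEC("tracepoint")
  __description("precise: example")
  __success
  __naked void precise_example(void)
  {
  	asm volatile ("r0 = 0; exit;");
  }
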
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_precise.c | 269 ++++++++++++++++++ .../testing/selftests/bpf/verifier/precise.c | 219 -------------- 3 files changed, 271 insertions(+), 219 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_precise.c delete mode 100644 tools/testing/selftests/bpf/verifier/precise.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 261567230dd0..d5ec9054c025 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -40,6 +40,7 @@ #include "verifier_map_ret_val.skel.h" #include "verifier_masking.skel.h" #include "verifier_meta_access.skel.h" +#include "verifier_precise.skel.h" #include "verifier_raw_stack.skel.h" #include "verifier_raw_tp_writable.skel.h" #include "verifier_reg_equal.skel.h" @@ -125,6 +126,7 @@ void test_verifier_map_ptr_mixing(void) { RUN(verifier_map_ptr_mixing); } void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } void test_verifier_masking(void) { RUN(verifier_masking); } void test_verifier_meta_access(void) { RUN(verifier_meta_access); } +void test_verifier_precise(void) { RUN(verifier_precise); } void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); } void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); } void test_verifier_reg_equal(void) { RUN(verifier_reg_equal); } diff --git a/tools/testing/selftests/bpf/progs/verifier_precise.c b/tools/testing/selftests/bpf/progs/verifier_precise.c new file mode 100644 index 000000000000..81d58dc7d2d4 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_precise.c @@ -0,0 +1,269 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/precise.c */ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); + __uint(max_entries, 4096); +} map_ringbuf SEC(".maps"); + +SEC("tracepoint") +__description("precise: test 1") +__success +__msg("27: (85) call bpf_probe_read_kernel#113") +__msg("last_idx 27 first_idx 21") +__msg("regs=4 stack=0 before 26") +__msg("regs=4 stack=0 before 25") +__msg("regs=4 stack=0 before 24") +__msg("regs=4 stack=0 before 23") +__msg("regs=4 stack=0 before 21") +__msg("parent didn't have regs=4 stack=0 marks") +__msg("last_idx 20 first_idx 11") +__msg("regs=4 stack=0 before 20") +__msg("regs=200 stack=0 before 19") +__msg("regs=300 stack=0 before 18") +__msg("regs=201 stack=0 before 16") +__msg("regs=201 stack=0 before 15") +__msg("regs=200 stack=0 before 14") +__msg("regs=200 stack=0 before 13") +__msg("regs=200 stack=0 before 12") +__msg("regs=200 stack=0 before 11") +__msg("parent already had regs=0 stack=0 marks") +__log_level(2) +__naked void precise_test_1(void) +{ + asm volatile (" \ + r0 = 1; \ + r6 = %[map_array_48b] ll; \ + r1 = r6; \ + r2 = r10; \ + r2 += -8; \ + r7 = 0; \ + *(u64*)(r10 - 8) = r7; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r9 = r0; \ + r1 = r6; \ + r2 = r10; \ + r2 += -8; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r8 = r0; \ + r9 -= 
r8; /* map_value_ptr -= map_value_ptr */\ + r2 = r9; \ + if r2 < 8 goto l2_%=; \ + exit; \ +l2_%=: r2 += 1; /* R2=scalar(umin=1, umax=8) */\ + r1 = r10; \ + r1 += -8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("precise: test 2") +__success +__msg("27: (85) call bpf_probe_read_kernel#113") +__msg("last_idx 27 first_idx 23") +__msg("regs=4 stack=0 before 26") +__msg("regs=4 stack=0 before 25") +__msg("regs=4 stack=0 before 24") +__msg("regs=4 stack=0 before 25") +__msg("parent didn't have regs=4 stack=0 marks") +__msg("last_idx 21 first_idx 21") +__msg("regs=4 stack=0 before 21") +__msg("parent didn't have regs=4 stack=0 marks") +__msg("last_idx 20 first_idx 18") +__msg("regs=4 stack=0 before 20") +__msg("regs=200 stack=0 before 19") +__msg("regs=300 stack=0 before 18") +__msg("parent already had regs=0 stack=0 marks") +__log_level(2) __flag(BPF_F_TEST_STATE_FREQ) +__naked void precise_test_2(void) +{ + asm volatile (" \ + r0 = 1; \ + r6 = %[map_array_48b] ll; \ + r1 = r6; \ + r2 = r10; \ + r2 += -8; \ + r7 = 0; \ + *(u64*)(r10 - 8) = r7; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r9 = r0; \ + r1 = r6; \ + r2 = r10; \ + r2 += -8; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r8 = r0; \ + r9 -= r8; /* map_value_ptr -= map_value_ptr */\ + r2 = r9; \ + if r2 < 8 goto l2_%=; \ + exit; \ +l2_%=: r2 += 1; /* R2=scalar(umin=1, umax=8) */\ + r1 = r10; \ + r1 += -8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("xdp") +__description("precise: cross frame pruning") +__failure __msg("!read_ok") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void precise_cross_frame_pruning(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r8 = 0; \ + if r0 != 0 goto l0_%=; \ + r8 = 1; \ +l0_%=: call %[bpf_get_prandom_u32]; \ + r9 = 0; \ + if r0 != 0 goto l1_%=; \ + r9 = 1; \ +l1_%=: r1 = r0; \ + call precise_cross_frame_pruning__1; \ + if r8 == 1 goto l2_%=; \ + r1 = *(u8*)(r2 + 0); \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void precise_cross_frame_pruning__1(void) +{ + asm volatile (" \ + if r1 == 0 goto l0_%=; \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +SEC("xdp") +__description("precise: ST insn causing spi > allocated_stack") +__success +__msg("5: (2d) if r4 > r0 goto pc+0") +__msg("last_idx 5 first_idx 5") +__msg("parent didn't have regs=10 stack=0 marks") +__msg("last_idx 4 first_idx 2") +__msg("regs=10 stack=0 before 4") +__msg("regs=10 stack=0 before 3") +__msg("regs=0 stack=1 before 2") +__msg("last_idx 5 first_idx 5") +__msg("parent didn't have regs=1 stack=0 marks") +__log_level(2) __retval(-1) __flag(BPF_F_TEST_STATE_FREQ) +__naked void insn_causing_spi_allocated_stack_1(void) +{ + asm volatile (" \ + r3 = r10; \ + if r3 != 123 goto l0_%=; \ +l0_%=: .8byte %[st_mem]; \ + r4 = *(u64*)(r10 - 8); \ + r0 = -1; \ + if r4 > r0 goto l1_%=; \ +l1_%=: exit; \ +" : + : __imm_insn(st_mem, BPF_ST_MEM(BPF_DW, BPF_REG_3, -8, 0)) + : __clobber_all); +} + +SEC("xdp") +__description("precise: STX insn causing spi > allocated_stack") +__success +__msg("last_idx 6 first_idx 6") +__msg("parent didn't have regs=10 stack=0 marks") 
+__msg("last_idx 5 first_idx 3") +__msg("regs=10 stack=0 before 5") +__msg("regs=10 stack=0 before 4") +__msg("regs=0 stack=1 before 3") +__msg("last_idx 6 first_idx 6") +__msg("parent didn't have regs=1 stack=0 marks") +__msg("last_idx 5 first_idx 3") +__msg("regs=1 stack=0 before 5") +__log_level(2) __retval(-1) __flag(BPF_F_TEST_STATE_FREQ) +__naked void insn_causing_spi_allocated_stack_2(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r3 = r10; \ + if r3 != 123 goto l0_%=; \ +l0_%=: *(u64*)(r3 - 8) = r0; \ + r4 = *(u64*)(r10 - 8); \ + r0 = -1; \ + if r4 > r0 goto l1_%=; \ +l1_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("xdp") +__description("precise: mark_chain_precision for ARG_CONST_ALLOC_SIZE_OR_ZERO") +__failure __msg("invalid access to memory, mem_size=1 off=42 size=8") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void const_alloc_size_or_zero(void) +{ + asm volatile (" \ + r4 = *(u32*)(r1 + %[xdp_md_ingress_ifindex]); \ + r6 = %[map_ringbuf] ll; \ + r1 = r6; \ + r2 = 1; \ + r3 = 0; \ + if r4 == 0 goto l0_%=; \ + r2 = 0x1000; \ +l0_%=: call %[bpf_ringbuf_reserve]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r1 = r0; \ + r2 = *(u64*)(r0 + 42); \ + call %[bpf_ringbuf_submit]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ringbuf_reserve), + __imm(bpf_ringbuf_submit), + __imm_addr(map_ringbuf), + __imm_const(xdp_md_ingress_ifindex, offsetof(struct xdp_md, ingress_ifindex)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c deleted file mode 100644 index 6c03a7d805f9..000000000000 --- a/tools/testing/selftests/bpf/verifier/precise.c +++ /dev/null @@ -1,219 +0,0 @@ -{ - "precise: test 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LD_MAP_FD(BPF_REG_6, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - - BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), /* map_value_ptr -= map_value_ptr */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_9), - BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1), - BPF_EXIT_INSN(), - - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .fixup_map_array_48b = { 1 }, - .result = VERBOSE_ACCEPT, - .errstr = - "26: (85) call bpf_probe_read_kernel#113\ - last_idx 26 first_idx 20\ - regs=4 stack=0 before 25\ - regs=4 stack=0 before 24\ - regs=4 stack=0 before 23\ - regs=4 stack=0 before 22\ - regs=4 stack=0 before 20\ - parent didn't have regs=4 stack=0 marks\ - last_idx 19 first_idx 10\ - regs=4 stack=0 before 19\ - regs=200 stack=0 before 18\ - regs=300 stack=0 before 17\ - regs=201 stack=0 before 15\ - regs=201 stack=0 before 14\ - regs=200 stack=0 before 13\ - regs=200 stack=0 before 12\ - regs=200 stack=0 before 11\ - regs=200 stack=0 before 10\ - parent 
already had regs=0 stack=0 marks", -}, -{ - "precise: test 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LD_MAP_FD(BPF_REG_6, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - - BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), /* map_value_ptr -= map_value_ptr */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_9), - BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1), - BPF_EXIT_INSN(), - - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .fixup_map_array_48b = { 1 }, - .result = VERBOSE_ACCEPT, - .flags = BPF_F_TEST_STATE_FREQ, - .errstr = - "26: (85) call bpf_probe_read_kernel#113\ - last_idx 26 first_idx 22\ - regs=4 stack=0 before 25\ - regs=4 stack=0 before 24\ - regs=4 stack=0 before 23\ - regs=4 stack=0 before 22\ - parent didn't have regs=4 stack=0 marks\ - last_idx 20 first_idx 20\ - regs=4 stack=0 before 20\ - parent didn't have regs=4 stack=0 marks\ - last_idx 19 first_idx 17\ - regs=4 stack=0 before 19\ - regs=200 stack=0 before 18\ - regs=300 stack=0 before 17\ - parent already had regs=0 stack=0 marks", -}, -{ - "precise: cross frame pruning", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_IMM(BPF_REG_8, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_MOV64_IMM(BPF_REG_8, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_IMM(BPF_REG_9, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_MOV64_IMM(BPF_REG_9, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = BPF_F_TEST_STATE_FREQ, - .errstr = "!read_ok", - .result = REJECT, -}, -{ - "precise: ST insn causing spi > allocated_stack", - .insns = { - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_3, -8, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), - BPF_MOV64_IMM(BPF_REG_0, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = BPF_F_TEST_STATE_FREQ, - .errstr = "5: (2d) if r4 > r0 goto pc+0\ - last_idx 5 first_idx 5\ - parent didn't have regs=10 stack=0 marks\ - last_idx 4 first_idx 2\ - regs=10 stack=0 before 4\ - regs=10 stack=0 before 3\ - regs=0 stack=1 before 2\ - last_idx 5 first_idx 5\ - parent didn't have regs=1 stack=0 marks", - .result = VERBOSE_ACCEPT, - .retval = -1, -}, -{ - "precise: STX insn causing spi > allocated_stack", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_JMP_IMM(BPF_JNE, 
BPF_REG_3, 123, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), - BPF_MOV64_IMM(BPF_REG_0, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = BPF_F_TEST_STATE_FREQ, - .errstr = "last_idx 6 first_idx 6\ - parent didn't have regs=10 stack=0 marks\ - last_idx 5 first_idx 3\ - regs=10 stack=0 before 5\ - regs=10 stack=0 before 4\ - regs=0 stack=1 before 3\ - last_idx 6 first_idx 6\ - parent didn't have regs=1 stack=0 marks\ - last_idx 5 first_idx 3\ - regs=1 stack=0 before 5", - .result = VERBOSE_ACCEPT, - .retval = -1, -}, -{ - "precise: mark_chain_precision for ARG_CONST_ALLOC_SIZE_OR_ZERO", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, ingress_ifindex)), - BPF_LD_MAP_FD(BPF_REG_6, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0, 1), - BPF_MOV64_IMM(BPF_REG_2, 0x1000), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 42), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 1 }, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = BPF_F_TEST_STATE_FREQ, - .errstr = "invalid access to memory, mem_size=1 off=42 size=8", - .result = REJECT, -}, From patchwork Fri Apr 21 17:42:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220521 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4A7CC7EE21 for ; Fri, 21 Apr 2023 17:43:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232757AbjDURnb (ORCPT ); Fri, 21 Apr 2023 13:43:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41836 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229916AbjDURnQ (ORCPT ); Fri, 21 Apr 2023 13:43:16 -0400 Received: from mail-wm1-x335.google.com (mail-wm1-x335.google.com [IPv6:2a00:1450:4864:20::335]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A027C10DA for ; Fri, 21 Apr 2023 10:43:03 -0700 (PDT) Received: by mail-wm1-x335.google.com with SMTP id 5b1f17b1804b1-3f1957e80a2so18119085e9.1 for ; Fri, 21 Apr 2023 10:43:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098982; x=1684690982; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=0kgxF1UKrycTdLytpfs5pIYh7OKcvU+Rjp0qYx3dSYw=; b=dtL8di2C4bNzL7SAc5EZfsGZw6IxuNG8msmp5WLi+VXqt88g6YmTprUbo5vaurmHUe VZIxXjvF7NDEwpjavsCWj9DJXgdzOwny63jqASIPBcKDu+5d/mVtUsdAMpXVqUw+zaBL d4L77MmfKDRJ4MB2Mep4qk71LTLNly1JX4XHSLzMZ7UaPoY7PHPoegabSyu7ut9L8aYQ XjpuQEZTKqMpO6GjwJiQI82blkZp1DN3RYw9t3wpjafLmR5gl8EZGqqq6mfdoslPgvCq 4Hz0Z5yeEFzOfsVhqSsLWUcKIvsV9v+iiT40akT89/ukt8ITzIeyKx+op9Y2ws9QeOfr +I4A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098982; x=1684690982; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=0kgxF1UKrycTdLytpfs5pIYh7OKcvU+Rjp0qYx3dSYw=; b=P1dSINOQCts1q46/hKurtj+rofauHx/da0XlUncLi7KpflVdL8YSb2P9s/ZqBm2QpN +q53TxMPqop4RZYG5MhCC4+huOLwAZ/BHne2B2bO/kU2F67g5fGzNVItrrnwLvDHB+OH CM0XDDCoG9s9HeEbadu5eMmX7bJZLYzw8ITJdP43Bjx1hycbvpIsLF7cfOH6hSB62yLq 154Z27M8yjyOygwlv0+ztWcOXT4P1dCEnfo50zam/NRkosi7el38UsO1ax7kZh4PuJgw X4O3CGPHdVBNVe9nhDKT47ypuh4tdFjZacp3950fdNalkTjfiq1iWcULKaYcTDuzxTqz akvw== X-Gm-Message-State: AAQBX9cVa2JzAyWG/Z1oE7lvXzc1brdQAhjOcMOw94o0Xq/BrVfIrjvB GhujYGc38nWvKy4G06Prnkz6m90A5mqzJw== X-Google-Smtp-Source: AKy350ZFYkg74SvFzkaTnJSWTSM4OZmjfh7+wquyCTKiYTMMcc40IO4Z53ViD35aZNI1L/UO7pniHA== X-Received: by 2002:adf:e912:0:b0:2f6:aa71:d5b0 with SMTP id f18-20020adfe912000000b002f6aa71d5b0mr8091627wrm.15.1682098981831; Fri, 21 Apr 2023 10:43:01 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:01 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 14/24] selftests/bpf: verifier/prevent_map_lookup converted to inline assembly Date: Fri, 21 Apr 2023 20:42:24 +0300 Message-Id: <20230421174234.2391278-15-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/prevent_map_lookup automatically converted to use inline assembly. 
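The conversion also replaces the old fixup mechanism with named maps: a BPF_LD_MAP_FD(reg, 0) placeholder patched through a .fixup_map_* index becomes a 64-bit immediate load of a map declared in the program, resolved via __imm_addr, while .errstr/.result move into the __failure/__msg annotations. A condensed sketch using the stack-trace case from the diff below (elided lines marked with "..."):

  /* Old style (verifier/prevent_map_lookup.c): */
  BPF_LD_MAP_FD(BPF_REG_1, 0),   /* patched via .fixup_map_stacktrace = { 3 } */
  ...
  .result = REJECT,
  .errstr = "cannot pass map_type 7 into func bpf_map_lookup_elem",

  /* New style (progs/verifier_prevent_map_lookup.c): */
  __failure __msg("cannot pass map_type 7 into func bpf_map_lookup_elem")
  ...
  "r1 = %[map_stacktrace] ll;"   /* resolved via __imm_addr(map_stacktrace) */
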
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_prevent_map_lookup.c | 65 +++++++++++++++++++ .../bpf/verifier/prevent_map_lookup.c | 29 --------- 3 files changed, 67 insertions(+), 29 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c delete mode 100644 tools/testing/selftests/bpf/verifier/prevent_map_lookup.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index d5ec9054c025..7627893dd849 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -41,6 +41,7 @@ #include "verifier_masking.skel.h" #include "verifier_meta_access.skel.h" #include "verifier_precise.skel.h" +#include "verifier_prevent_map_lookup.skel.h" #include "verifier_raw_stack.skel.h" #include "verifier_raw_tp_writable.skel.h" #include "verifier_reg_equal.skel.h" @@ -127,6 +128,7 @@ void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } void test_verifier_masking(void) { RUN(verifier_masking); } void test_verifier_meta_access(void) { RUN(verifier_meta_access); } void test_verifier_precise(void) { RUN(verifier_precise); } +void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup); } void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); } void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); } void test_verifier_reg_equal(void) { RUN(verifier_reg_equal); } diff --git a/tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c b/tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c new file mode 100644 index 000000000000..e85f5b0d60d7 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c @@ -0,0 +1,65 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/prevent_map_lookup.c */ + +#include +#include +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_STACK_TRACE); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, __u64); +} map_stacktrace SEC(".maps"); + +void dummy_prog_42_socket(void); +void dummy_prog_24_socket(void); +void dummy_prog_loop2_socket(void); + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 8); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog2_socket SEC(".maps"); + +SEC("perf_event") +__description("prevent map lookup in stack trace") +__failure __msg("cannot pass map_type 7 into func bpf_map_lookup_elem") +__naked void map_lookup_in_stack_trace(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_stacktrace] ll; \ + call %[bpf_map_lookup_elem]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_stacktrace) + : __clobber_all); +} + +SEC("socket") +__description("prevent map lookup in prog array") +__failure __msg("cannot pass map_type 3 into func bpf_map_lookup_elem") +__failure_unpriv +__naked void map_lookup_in_prog_array(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_prog2_socket] ll; \ + call %[bpf_map_lookup_elem]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_prog2_socket) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c b/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c deleted file mode 100644 index 
fc4e301260f6..000000000000 --- a/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c +++ /dev/null @@ -1,29 +0,0 @@ -{ - "prevent map lookup in stack trace", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_stacktrace = { 3 }, - .result = REJECT, - .errstr = "cannot pass map_type 7 into func bpf_map_lookup_elem", - .prog_type = BPF_PROG_TYPE_PERF_EVENT, -}, -{ - "prevent map lookup in prog array", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_prog2 = { 3 }, - .result = REJECT, - .errstr = "cannot pass map_type 3 into func bpf_map_lookup_elem", -}, From patchwork Fri Apr 21 17:42:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220526 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B4D3C77B7E for ; Fri, 21 Apr 2023 17:43:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232777AbjDURne (ORCPT ); Fri, 21 Apr 2023 13:43:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43092 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233049AbjDURnV (ORCPT ); Fri, 21 Apr 2023 13:43:21 -0400 Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com [IPv6:2a00:1450:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A27E755A8 for ; Fri, 21 Apr 2023 10:43:05 -0700 (PDT) Received: by mail-wr1-x436.google.com with SMTP id ffacd0b85a97d-2f917585b26so1860548f8f.0 for ; Fri, 21 Apr 2023 10:43:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098984; x=1684690984; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SRHPP5NjIYXdfWG6cP3BDnYCeq0/V3aalZnC0tGQQAQ=; b=AaK182RYJ5UmwpJLKJf3bdoU0KgAq+gktRM5cTsWcReNR4lbxRZYWjbZrqyAS5+OTX jjknxYBbjoz18DiscmB6k9hfzuySEIiZvK8DPPZ4WOgfhjPUhzCTQushU/US2ZFgXbeX jDBGx5YrbNsOwQ/BPpJnEmoFPlTqt3cEzUgGRYzkr45aePcbb4Q/IEverE81Ru8x+yWF rF1tvV8MR4IwdA1G6YsJSIPHjdyLXT7O/V3Ghz1mGy9/OBzQR3Ze6GTFrDyrkHWcAMcy 8CIJ2ZKN5oJf+m5ZELdu4hIp3gq8ftPFVt7Vk9bud0B5hJb4/ebh4oXRaXHinoBaczWS n/Uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098984; x=1684690984; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SRHPP5NjIYXdfWG6cP3BDnYCeq0/V3aalZnC0tGQQAQ=; b=d83fp8wPT2AhmijlASVT5bEBD9DT5m263IbUWfWZ0TEuEGyg1uBPr6IcBqQruysxLA u9911XCzTVAdLCBNdcIDpI3L1e6r/uPUJuQwaLEfNzMo7XIKUUEhaCAUMf8js43Cmg23 J/HsTfpytMnNjAvcYRaodGpjTgey+Kq0pV8Gbs0JCh6ZNys5lAgvV9svGlkjM5YnNIoR mztq4FL8hkU7L4nx0tJLCcHmr2YlLzb1SGXR7am3L1QcDY83leBeqG1es2T1bXPHWpJ0 kc+I6HeRaeUCsRBPw8HM7arZjxQzvA1BBeK+5bikJ5rx6BOK2bxNXRSaU3Z1CwURZBw2 FncQ== 
X-Gm-Message-State: AAQBX9f/ysIzzKRb5SHnHyttBLZlmNhCfIsI1/a2oFV6XQ2sM0YMjoSz M2dfKyriAwgeZZ4xrs56pPDxv30SuBd6IQ== X-Google-Smtp-Source: AKy350ZOxCwhEBKWnOQDqY0K2rTiI+kTxdPdjepL9OpY4LxNKodPCYvWFrmtDri2xTUmOi1meB0CyA== X-Received: by 2002:a5d:684e:0:b0:2ce:a8d5:4a89 with SMTP id o14-20020a5d684e000000b002cea8d54a89mr4389484wrw.37.1682098983059; Fri, 21 Apr 2023 10:43:03 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:02 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 15/24] selftests/bpf: verifier/ref_tracking converted to inline assembly Date: Fri, 21 Apr 2023 20:42:25 +0300 Message-Id: <20230421174234.2391278-16-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/ref_tracking automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_ref_tracking.c | 1495 +++++++++++++++++ .../selftests/bpf/verifier/ref_tracking.c | 1082 ------------ 3 files changed, 1497 insertions(+), 1082 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_ref_tracking.c delete mode 100644 tools/testing/selftests/bpf/verifier/ref_tracking.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 7627893dd849..5941ef59ed76 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -45,6 +45,7 @@ #include "verifier_raw_stack.skel.h" #include "verifier_raw_tp_writable.skel.h" #include "verifier_reg_equal.skel.h" +#include "verifier_ref_tracking.skel.h" #include "verifier_ringbuf.skel.h" #include "verifier_spill_fill.skel.h" #include "verifier_stack_ptr.skel.h" @@ -132,6 +133,7 @@ void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup) void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); } void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); } void test_verifier_reg_equal(void) { RUN(verifier_reg_equal); } +void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); } void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } diff --git a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c new file mode 100644 index 000000000000..c4c6da21265e --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c @@ -0,0 +1,1495 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/ref_tracking.c */ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#define BPF_SK_LOOKUP(func) \ + /* struct bpf_sock_tuple tuple = {} */ \ + "r2 = 0;" \ + "*(u32*)(r10 - 8) = r2;" \ + "*(u64*)(r10 - 16) = r2;" \ + "*(u64*)(r10 - 24) = 
r2;" \ + "*(u64*)(r10 - 32) = r2;" \ + "*(u64*)(r10 - 40) = r2;" \ + "*(u64*)(r10 - 48) = r2;" \ + /* sk = func(ctx, &tuple, sizeof tuple, 0, 0) */ \ + "r2 = r10;" \ + "r2 += -48;" \ + "r3 = %[sizeof_bpf_sock_tuple];"\ + "r4 = 0;" \ + "r5 = 0;" \ + "call %[" #func "];" + +struct bpf_key {} __attribute__((preserve_access_index)); + +extern void bpf_key_put(struct bpf_key *key) __ksym; +extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym; +extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym; + +/* BTF FUNC records are not generated for kfuncs referenced + * from inline assembly. These records are necessary for + * libbpf to link the program. The function below is a hack + * to ensure that BTF FUNC records are generated. + */ +void __kfunc_btf_root(void) +{ + bpf_key_put(0); + bpf_lookup_system_key(0); + bpf_lookup_user_key(0, 0); +} + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); + __uint(max_entries, 4096); +} map_ringbuf SEC(".maps"); + +void dummy_prog_42_tc(void); +void dummy_prog_24_tc(void); +void dummy_prog_loop1_tc(void); + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 4); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog1_tc SEC(".maps") = { + .values = { + [0] = (void *)&dummy_prog_42_tc, + [1] = (void *)&dummy_prog_loop1_tc, + [2] = (void *)&dummy_prog_24_tc, + }, +}; + +SEC("tc") +__auxiliary +__naked void dummy_prog_42_tc(void) +{ + asm volatile ("r0 = 42; exit;"); +} + +SEC("tc") +__auxiliary +__naked void dummy_prog_24_tc(void) +{ + asm volatile ("r0 = 24; exit;"); +} + +SEC("tc") +__auxiliary +__naked void dummy_prog_loop1_tc(void) +{ + asm volatile (" \ + r3 = 1; \ + r2 = %[map_prog1_tc] ll; \ + call %[bpf_tail_call]; \ + r0 = 41; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_tc) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: leak potential reference") +__failure __msg("Unreleased reference") +__naked void reference_tracking_leak_potential_reference(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; /* leak reference */ \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: leak potential reference to sock_common") +__failure __msg("Unreleased reference") +__naked void potential_reference_to_sock_common_1(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_skc_lookup_tcp) +" r6 = r0; /* leak reference */ \ + exit; \ +" : + : __imm(bpf_skc_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: leak potential reference on stack") +__failure __msg("Unreleased reference") +__naked void leak_potential_reference_on_stack(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r4 = r10; \ + r4 += -8; \ + *(u64*)(r4 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: leak potential reference on stack 2") +__failure __msg("Unreleased reference") +__naked void potential_reference_on_stack_2(void) 
+{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r4 = r10; \ + r4 += -8; \ + *(u64*)(r4 + 0) = r0; \ + r0 = 0; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: zero potential reference") +__failure __msg("Unreleased reference") +__naked void reference_tracking_zero_potential_reference(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r0 = 0; /* leak reference */ \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: zero potential reference to sock_common") +__failure __msg("Unreleased reference") +__naked void potential_reference_to_sock_common_2(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_skc_lookup_tcp) +" r0 = 0; /* leak reference */ \ + exit; \ +" : + : __imm(bpf_skc_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: copy and zero potential references") +__failure __msg("Unreleased reference") +__naked void copy_and_zero_potential_references(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r7 = r0; \ + r0 = 0; \ + r7 = 0; /* leak reference */ \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: acquire/release user key reference") +__success +__naked void acquire_release_user_key_reference(void) +{ + asm volatile (" \ + r1 = -3; \ + r2 = 0; \ + call %[bpf_lookup_user_key]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + call %[bpf_key_put]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_key_put), + __imm(bpf_lookup_user_key) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: acquire/release system key reference") +__success +__naked void acquire_release_system_key_reference(void) +{ + asm volatile (" \ + r1 = 1; \ + call %[bpf_lookup_system_key]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + call %[bpf_key_put]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_key_put), + __imm(bpf_lookup_system_key) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: release user key reference without check") +__failure __msg("Possibly NULL pointer passed to trusted arg0") +__naked void user_key_reference_without_check(void) +{ + asm volatile (" \ + r1 = -3; \ + r2 = 0; \ + call %[bpf_lookup_user_key]; \ + r1 = r0; \ + call %[bpf_key_put]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_key_put), + __imm(bpf_lookup_user_key) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: release system key reference without check") +__failure __msg("Possibly NULL pointer passed to trusted arg0") +__naked void system_key_reference_without_check(void) +{ + asm volatile (" \ + r1 = 1; \ + call %[bpf_lookup_system_key]; \ + r1 = r0; \ + call %[bpf_key_put]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_key_put), + __imm(bpf_lookup_system_key) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: release with NULL key pointer") +__failure __msg("Possibly NULL pointer passed to trusted arg0") +__naked void release_with_null_key_pointer(void) +{ + asm volatile (" \ + r1 = 0; \ + call %[bpf_key_put]; \ + r0 = 0; \ + exit; \ +" : + : 
__imm(bpf_key_put) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: leak potential reference to user key") +__failure __msg("Unreleased reference") +__naked void potential_reference_to_user_key(void) +{ + asm volatile (" \ + r1 = -3; \ + r2 = 0; \ + call %[bpf_lookup_user_key]; \ + exit; \ +" : + : __imm(bpf_lookup_user_key) + : __clobber_all); +} + +SEC("lsm.s/bpf") +__description("reference tracking: leak potential reference to system key") +__failure __msg("Unreleased reference") +__naked void potential_reference_to_system_key(void) +{ + asm volatile (" \ + r1 = 1; \ + call %[bpf_lookup_system_key]; \ + exit; \ +" : + : __imm(bpf_lookup_system_key) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference without check") +__failure __msg("type=sock_or_null expected=sock") +__naked void tracking_release_reference_without_check(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" /* reference in r0 may be NULL */ \ + r1 = r0; \ + r2 = 0; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference to sock_common without check") +__failure __msg("type=sock_common_or_null expected=sock") +__naked void to_sock_common_without_check(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_skc_lookup_tcp) +" /* reference in r0 may be NULL */ \ + r1 = r0; \ + r2 = 0; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_release), + __imm(bpf_skc_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference") +__success __retval(0) +__naked void reference_tracking_release_reference(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference to sock_common") +__success __retval(0) +__naked void release_reference_to_sock_common(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_skc_lookup_tcp) +" r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_release), + __imm(bpf_skc_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference 2") +__success __retval(0) +__naked void reference_tracking_release_reference_2(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference twice") +__failure __msg("type=scalar expected=sock") +__naked void reference_tracking_release_reference_twice(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + r6 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, 
sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference twice inside branch") +__failure __msg("type=scalar expected=sock") +__naked void release_reference_twice_inside_branch(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + r6 = r0; \ + if r0 == 0 goto l0_%=; /* goto end */ \ + call %[bpf_sk_release]; \ + r1 = r6; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: alloc, check, free in one subbranch") +__failure __msg("Unreleased reference") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void check_free_in_one_subbranch(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 16; \ + /* if (offsetof(skb, mark) > data_len) exit; */ \ + if r0 <= r3 goto l0_%=; \ + exit; \ +l0_%=: r6 = *(u32*)(r2 + %[__sk_buff_mark]); \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r6 == 0 goto l1_%=; /* mark == 0? */\ + /* Leak reference in R0 */ \ + exit; \ +l1_%=: if r0 == 0 goto l2_%=; /* sk NULL? */ \ + r1 = r0; \ + call %[bpf_sk_release]; \ +l2_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: alloc, check, free in both subbranches") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void check_free_in_both_subbranches(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 16; \ + /* if (offsetof(skb, mark) > data_len) exit; */ \ + if r0 <= r3 goto l0_%=; \ + exit; \ +l0_%=: r6 = *(u32*)(r2 + %[__sk_buff_mark]); \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r6 == 0 goto l1_%=; /* mark == 0? */\ + if r0 == 0 goto l2_%=; /* sk NULL? */ \ + r1 = r0; \ + call %[bpf_sk_release]; \ +l2_%=: exit; \ +l1_%=: if r0 == 0 goto l3_%=; /* sk NULL? 
*/ \ + r1 = r0; \ + call %[bpf_sk_release]; \ +l3_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking in call: free reference in subprog") +__success __retval(0) +__naked void call_free_reference_in_subprog(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; /* unchecked reference */ \ + call call_free_reference_in_subprog__1; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void call_free_reference_in_subprog__1(void) +{ + asm volatile (" \ + /* subprog 1 */ \ + r2 = r1; \ + if r2 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_release) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking in call: free reference in subprog and outside") +__failure __msg("type=scalar expected=sock") +__naked void reference_in_subprog_and_outside(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; /* unchecked reference */ \ + r6 = r0; \ + call reference_in_subprog_and_outside__1; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void reference_in_subprog_and_outside__1(void) +{ + asm volatile (" \ + /* subprog 1 */ \ + r2 = r1; \ + if r2 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_release) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking in call: alloc & leak reference in subprog") +__failure __msg("Unreleased reference") +__naked void alloc_leak_reference_in_subprog(void) +{ + asm volatile (" \ + r4 = r10; \ + r4 += -8; \ + call alloc_leak_reference_in_subprog__1; \ + r1 = r0; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void alloc_leak_reference_in_subprog__1(void) +{ + asm volatile (" \ + /* subprog 1 */ \ + r6 = r4; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" /* spill unchecked sk_ptr into stack of caller */\ + *(u64*)(r6 + 0) = r0; \ + r1 = r0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking in call: alloc in subprog, release outside") +__success __retval(POINTER_VALUE) +__naked void alloc_in_subprog_release_outside(void) +{ + asm volatile (" \ + r4 = r10; \ + call alloc_in_subprog_release_outside__1; \ + r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_release) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void alloc_in_subprog_release_outside__1(void) +{ + asm volatile (" \ + /* subprog 1 */ \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" exit; /* return sk */ \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking in call: sk_ptr leak into caller 
stack") +__failure __msg("Unreleased reference") +__naked void ptr_leak_into_caller_stack(void) +{ + asm volatile (" \ + r4 = r10; \ + r4 += -8; \ + call ptr_leak_into_caller_stack__1; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void ptr_leak_into_caller_stack__1(void) +{ + asm volatile (" \ + /* subprog 1 */ \ + r5 = r10; \ + r5 += -8; \ + *(u64*)(r5 + 0) = r4; \ + call ptr_leak_into_caller_stack__2; \ + /* spill unchecked sk_ptr into stack of caller */\ + r5 = r10; \ + r5 += -8; \ + r4 = *(u64*)(r5 + 0); \ + *(u64*)(r4 + 0) = r0; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void ptr_leak_into_caller_stack__2(void) +{ + asm volatile (" \ + /* subprog 2 */ \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking in call: sk_ptr spill into caller stack") +__success __retval(0) +__naked void ptr_spill_into_caller_stack(void) +{ + asm volatile (" \ + r4 = r10; \ + r4 += -8; \ + call ptr_spill_into_caller_stack__1; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void ptr_spill_into_caller_stack__1(void) +{ + asm volatile (" \ + /* subprog 1 */ \ + r5 = r10; \ + r5 += -8; \ + *(u64*)(r5 + 0) = r4; \ + call ptr_spill_into_caller_stack__2; \ + /* spill unchecked sk_ptr into stack of caller */\ + r5 = r10; \ + r5 += -8; \ + r4 = *(u64*)(r5 + 0); \ + *(u64*)(r4 + 0) = r0; \ + if r0 == 0 goto l0_%=; \ + /* now the sk_ptr is verified, free the reference */\ + r1 = *(u64*)(r4 + 0); \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_release) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void ptr_spill_into_caller_stack__2(void) +{ + asm volatile (" \ + /* subprog 2 */ \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: allow LD_ABS") +__success __retval(0) +__naked void reference_tracking_allow_ld_abs(void) +{ + asm volatile (" \ + r6 = r1; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: r0 = *(u8*)skb[0]; \ + r0 = *(u16*)skb[0]; \ + r0 = *(u32*)skb[0]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: forbid LD_ABS while holding reference") +__failure __msg("BPF_LD_[ABS|IND] cannot be mixed with socket references") +__naked void ld_abs_while_holding_reference(void) +{ + asm volatile (" \ + r6 = r1; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r0 = *(u8*)skb[0]; \ + r0 = *(u16*)skb[0]; \ + r0 = *(u32*)skb[0]; \ + r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: allow LD_IND") +__success __retval(1) +__naked void reference_tracking_allow_ld_ind(void) +{ + asm volatile (" \ + r6 = r1; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: r7 = 1; \ + .8byte 
%[ld_ind]; \ + r0 = r7; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)), + __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: forbid LD_IND while holding reference") +__failure __msg("BPF_LD_[ABS|IND] cannot be mixed with socket references") +__naked void ld_ind_while_holding_reference(void) +{ + asm volatile (" \ + r6 = r1; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r4 = r0; \ + r7 = 1; \ + .8byte %[ld_ind]; \ + r0 = r7; \ + r1 = r4; \ + if r1 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)), + __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: check reference or tail call") +__success __retval(0) +__naked void check_reference_or_tail_call(void) +{ + asm volatile (" \ + r7 = r1; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" /* if (sk) bpf_sk_release() */ \ + r1 = r0; \ + if r1 != 0 goto l0_%=; \ + /* bpf_tail_call() */ \ + r3 = 3; \ + r2 = %[map_prog1_tc] ll; \ + r1 = r7; \ + call %[bpf_tail_call]; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tail_call), + __imm_addr(map_prog1_tc), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: release reference then tail call") +__success __retval(0) +__naked void release_reference_then_tail_call(void) +{ + asm volatile (" \ + r7 = r1; \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" /* if (sk) bpf_sk_release() */ \ + r1 = r0; \ + if r1 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: /* bpf_tail_call() */ \ + r3 = 3; \ + r2 = %[map_prog1_tc] ll; \ + r1 = r7; \ + call %[bpf_tail_call]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tail_call), + __imm_addr(map_prog1_tc), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: leak possible reference over tail call") +__failure __msg("tail_call would lead to reference leak") +__naked void possible_reference_over_tail_call(void) +{ + asm volatile (" \ + r7 = r1; \ + /* Look up socket and store in REG_6 */ \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" /* bpf_tail_call() */ \ + r6 = r0; \ + r3 = 3; \ + r2 = %[map_prog1_tc] ll; \ + r1 = r7; \ + call %[bpf_tail_call]; \ + r0 = 0; \ + /* if (sk) bpf_sk_release() */ \ + r1 = r6; \ + if r1 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tail_call), + __imm_addr(map_prog1_tc), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: leak checked reference over tail call") +__failure __msg("tail_call would lead to reference leak") +__naked void checked_reference_over_tail_call(void) +{ + asm volatile (" \ + r7 = r1; \ + /* Look up socket and store in REG_6 */ \ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; \ + /* if (!sk) goto end */ \ + if r0 == 0 goto l0_%=; \ + /* bpf_tail_call() */ \ + r3 = 0; \ + r2 = %[map_prog1_tc] ll; \ + r1 = r7; \ + call %[bpf_tail_call]; \ + r0 = 0; \ + r1 = r6; \ +l0_%=: call 
%[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tail_call), + __imm_addr(map_prog1_tc), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: mangle and release sock_or_null") +__failure __msg("R1 pointer arithmetic on sock_or_null prohibited") +__naked void and_release_sock_or_null(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + r1 += 5; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: mangle and release sock") +__failure __msg("R1 pointer arithmetic on sock prohibited") +__naked void tracking_mangle_and_release_sock(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + if r0 == 0 goto l0_%=; \ + r1 += 5; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: access member") +__success __retval(0) +__naked void reference_tracking_access_member(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; \ + if r0 == 0 goto l0_%=; \ + r2 = *(u32*)(r0 + 4); \ + r1 = r6; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: write to member") +__failure __msg("cannot write into sock") +__naked void reference_tracking_write_to_member(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; \ + if r0 == 0 goto l0_%=; \ + r1 = r6; \ + r2 = 42 ll; \ + *(u32*)(r1 + %[bpf_sock_mark]) = r2; \ + r1 = r6; \ +l0_%=: call %[bpf_sk_release]; \ + r0 = 0 ll; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(bpf_sock_mark, offsetof(struct bpf_sock, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: invalid 64-bit access of member") +__failure __msg("invalid sock access off=0 size=8") +__naked void _64_bit_access_of_member(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; \ + if r0 == 0 goto l0_%=; \ + r2 = *(u64*)(r0 + 0); \ + r1 = r6; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: access after release") +__failure __msg("!read_ok") +__naked void reference_tracking_access_after_release(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r1 = r0; \ + if r0 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ + r2 = *(u32*)(r1 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: direct access for lookup") +__success __retval(0) +__naked void tracking_direct_access_for_lookup(void) +{ + asm volatile (" \ + /* Check that the packet is at least 64B long */\ + r2 = *(u32*)(r1 + 
%[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r2; \ + r0 += 64; \ + if r0 > r3 goto l0_%=; \ + /* sk = sk_lookup_tcp(ctx, skb->data, ...) */ \ + r3 = %[sizeof_bpf_sock_tuple]; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_sk_lookup_tcp]; \ + r6 = r0; \ + if r0 == 0 goto l0_%=; \ + r2 = *(u32*)(r0 + 4); \ + r1 = r6; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: use ptr from bpf_tcp_sock() after release") +__failure __msg("invalid mem access") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void bpf_tcp_sock_after_release(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r7 = r0; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + r0 = *(u32*)(r7 + %[bpf_tcp_sock_snd_cwnd]); \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tcp_sock), + __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: use ptr from bpf_sk_fullsock() after release") +__failure __msg("invalid mem access") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void bpf_sk_fullsock_after_release(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r7 = r0; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + r0 = *(u32*)(r7 + %[bpf_sock_type]); \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: use ptr from bpf_sk_fullsock(tp) after release") +__failure __msg("invalid mem access") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void sk_fullsock_tp_after_release(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_sk_fullsock]; \ + r1 = r6; \ + r6 = r0; \ + call %[bpf_sk_release]; \ + if r6 != 0 goto l2_%=; \ + exit; \ +l2_%=: r0 = *(u32*)(r6 + %[bpf_sock_type]); \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tcp_sock), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: use sk after bpf_sk_release(tp)") +__failure __msg("invalid mem access") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void after_bpf_sk_release_tp(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; 
\ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_sk_release]; \ + r0 = *(u32*)(r6 + %[bpf_sock_type]); \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tcp_sock), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: use ptr from bpf_get_listener_sock() after bpf_sk_release(sk)") +__success __retval(0) +__naked void after_bpf_sk_release_sk(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_get_listener_sock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r1 = r6; \ + r6 = r0; \ + call %[bpf_sk_release]; \ + r0 = *(u32*)(r6 + %[bpf_sock_src_port]); \ + exit; \ +" : + : __imm(bpf_get_listener_sock), + __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(bpf_sock_src_port, offsetof(struct bpf_sock, src_port)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: bpf_sk_release(listen_sk)") +__failure __msg("R1 must be referenced when passed to release function") +__naked void bpf_sk_release_listen_sk(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_get_listener_sock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_sk_release]; \ + r0 = *(u32*)(r6 + %[bpf_sock_type]); \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_get_listener_sock), + __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +/* !bpf_sk_fullsock(sk) is checked but !bpf_tcp_sock(sk) is not checked */ +SEC("tc") +__description("reference tracking: tp->snd_cwnd after bpf_sk_fullsock(sk) and bpf_tcp_sock(sk)") +__failure __msg("invalid mem access") +__naked void and_bpf_tcp_sock_sk(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_sk_fullsock]; \ + r7 = r0; \ + r1 = r6; \ + call %[bpf_tcp_sock]; \ + r8 = r0; \ + if r7 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r0 = *(u32*)(r8 + %[bpf_tcp_sock_snd_cwnd]); \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_tcp_sock), + __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: branch tracking valid pointer null comparison") +__success __retval(0) +__naked void tracking_valid_pointer_null_comparison(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; \ + r3 = 1; \ + if r6 != 0 goto l0_%=; \ + r3 = 0; \ +l0_%=: if r6 == 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ +l1_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: branch tracking valid 
pointer value comparison") +__failure __msg("Unreleased reference") +__naked void tracking_valid_pointer_value_comparison(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r6 = r0; \ + r3 = 1; \ + if r6 == 0 goto l0_%=; \ + r3 = 0; \ + if r6 == 1234 goto l0_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ +l0_%=: exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: bpf_sk_release(btf_tcp_sock)") +__success +__retval(0) +__naked void sk_release_btf_tcp_sock(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_skc_to_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_skc_to_tcp_sock), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("reference tracking: use ptr from bpf_skc_to_tcp_sock() after release") +__failure __msg("invalid mem access") +__naked void to_tcp_sock_after_release(void) +{ + asm volatile ( + BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + call %[bpf_skc_to_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + exit; \ +l1_%=: r7 = r0; \ + r1 = r6; \ + call %[bpf_sk_release]; \ + r0 = *(u8*)(r7 + 0); \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm(bpf_skc_to_tcp_sock), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("socket") +__description("reference tracking: try to leak released ptr reg") +__success __failure_unpriv __msg_unpriv("R8 !read_ok") +__retval(0) +__naked void to_leak_released_ptr_reg(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u32*)(r10 - 4) = r0; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r9 = r0; \ + r0 = 0; \ + r1 = %[map_ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_ringbuf_reserve]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r8 = r0; \ + r1 = r8; \ + r2 = 0; \ + call %[bpf_ringbuf_discard]; \ + r0 = 0; \ + *(u64*)(r9 + 0) = r8; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_ringbuf_discard), + __imm(bpf_ringbuf_reserve), + __imm_addr(map_array_48b), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c deleted file mode 100644 index 5a2e154dd1e0..000000000000 --- a/tools/testing/selftests/bpf/verifier/ref_tracking.c +++ /dev/null @@ -1,1082 +0,0 @@ -{ - "reference tracking: leak potential reference", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), /* leak reference */ - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: leak potential reference to sock_common", - .insns = { - BPF_SK_LOOKUP(skc_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), /* leak reference */ - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference 
tracking: leak potential reference on stack", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: leak potential reference on stack 2", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: zero potential reference", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_IMM(BPF_REG_0, 0), /* leak reference */ - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: zero potential reference to sock_common", - .insns = { - BPF_SK_LOOKUP(skc_lookup_tcp), - BPF_MOV64_IMM(BPF_REG_0, 0), /* leak reference */ - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: copy and zero potential references", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_7, 0), /* leak reference */ - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: acquire/release user key reference", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .fixup_kfunc_btf_id = { - { "bpf_lookup_user_key", 2 }, - { "bpf_key_put", 5 }, - }, - .result = ACCEPT, -}, -{ - "reference tracking: acquire/release system key reference", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .fixup_kfunc_btf_id = { - { "bpf_lookup_system_key", 1 }, - { "bpf_key_put", 4 }, - }, - .result = ACCEPT, -}, -{ - "reference tracking: release user key reference without check", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .errstr = "Possibly NULL pointer passed to trusted arg0", - .fixup_kfunc_btf_id = 
{ - { "bpf_lookup_user_key", 2 }, - { "bpf_key_put", 4 }, - }, - .result = REJECT, -}, -{ - "reference tracking: release system key reference without check", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .errstr = "Possibly NULL pointer passed to trusted arg0", - .fixup_kfunc_btf_id = { - { "bpf_lookup_system_key", 1 }, - { "bpf_key_put", 3 }, - }, - .result = REJECT, -}, -{ - "reference tracking: release with NULL key pointer", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .errstr = "Possibly NULL pointer passed to trusted arg0", - .fixup_kfunc_btf_id = { - { "bpf_key_put", 1 }, - }, - .result = REJECT, -}, -{ - "reference tracking: leak potential reference to user key", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .errstr = "Unreleased reference", - .fixup_kfunc_btf_id = { - { "bpf_lookup_user_key", 2 }, - }, - .result = REJECT, -}, -{ - "reference tracking: leak potential reference to system key", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_LSM, - .kfunc = "bpf", - .expected_attach_type = BPF_LSM_MAC, - .flags = BPF_F_SLEEPABLE, - .errstr = "Unreleased reference", - .fixup_kfunc_btf_id = { - { "bpf_lookup_system_key", 1 }, - }, - .result = REJECT, -}, -{ - "reference tracking: release reference without check", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - /* reference in r0 may be NULL */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "type=sock_or_null expected=sock", - .result = REJECT, -}, -{ - "reference tracking: release reference to sock_common without check", - .insns = { - BPF_SK_LOOKUP(skc_lookup_tcp), - /* reference in r0 may be NULL */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "type=sock_common_or_null expected=sock", - .result = REJECT, -}, -{ - "reference tracking: release reference", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: release reference to sock_common", - .insns = { - BPF_SK_LOOKUP(skc_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference 
tracking: release reference 2", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: release reference twice", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "type=scalar expected=sock", - .result = REJECT, -}, -{ - "reference tracking: release reference twice inside branch", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), /* goto end */ - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "type=scalar expected=sock", - .result = REJECT, -}, -{ - "reference tracking: alloc, check, free in one subbranch", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16), - /* if (offsetof(skb, mark) > data_len) exit; */ - BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_2, - offsetof(struct __sk_buff, mark)), - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1), /* mark == 0? */ - /* Leak reference in R0 */ - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "reference tracking: alloc, check, free in both subbranches", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16), - /* if (offsetof(skb, mark) > data_len) exit; */ - BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_2, - offsetof(struct __sk_buff, mark)), - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 4), /* mark == 0? */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? 
*/ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "reference tracking in call: free reference in subprog", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), /* unchecked reference */ - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - - /* subprog 1 */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_1), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking in call: free reference in subprog and outside", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), /* unchecked reference */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - - /* subprog 1 */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_1), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "type=scalar expected=sock", - .result = REJECT, -}, -{ - "reference tracking in call: alloc & leak reference in subprog", - .insns = { - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - - /* subprog 1 */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_4), - BPF_SK_LOOKUP(sk_lookup_tcp), - /* spill unchecked sk_ptr into stack of caller */ - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking in call: alloc in subprog, release outside", - .insns = { - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - - /* subprog 1 */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_EXIT_INSN(), /* return sk */ - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = POINTER_VALUE, - .result = ACCEPT, -}, -{ - "reference tracking in call: sk_ptr leak into caller stack", - .insns = { - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - - /* subprog 1 */ - BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5), - /* spill unchecked sk_ptr into stack of caller */ - BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_5, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0), - BPF_EXIT_INSN(), - - /* subprog 2 */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking in call: sk_ptr spill into caller stack", - .insns = { - BPF_MOV64_REG(BPF_REG_4, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8), - BPF_RAW_INSN(BPF_JMP 
| BPF_CALL, 0, 1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - - /* subprog 1 */ - BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8), - /* spill unchecked sk_ptr into stack of caller */ - BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_5, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - /* now the sk_ptr is verified, free the reference */ - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_4, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - - /* subprog 2 */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: allow LD_ABS", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LD_ABS(BPF_B, 0), - BPF_LD_ABS(BPF_H, 0), - BPF_LD_ABS(BPF_W, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: forbid LD_ABS while holding reference", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_LD_ABS(BPF_B, 0), - BPF_LD_ABS(BPF_H, 0), - BPF_LD_ABS(BPF_W, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "BPF_LD_[ABS|IND] cannot be mixed with socket references", - .result = REJECT, -}, -{ - "reference tracking: allow LD_IND", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_7, 1), - BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_7), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 1, -}, -{ - "reference tracking: forbid LD_IND while holding reference", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_7, 1), - BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_4), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "BPF_LD_[ABS|IND] cannot be mixed with socket references", - .result = REJECT, -}, -{ - "reference tracking: check reference or tail call", - .insns = { - BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), - BPF_SK_LOOKUP(sk_lookup_tcp), - /* if (sk) bpf_sk_release() */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 7), - /* bpf_tail_call() */ - BPF_MOV64_IMM(BPF_REG_3, 3), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 17 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: release reference then tail call", - .insns = { - BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), - 
BPF_SK_LOOKUP(sk_lookup_tcp), - /* if (sk) bpf_sk_release() */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - /* bpf_tail_call() */ - BPF_MOV64_IMM(BPF_REG_3, 3), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 18 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: leak possible reference over tail call", - .insns = { - BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), - /* Look up socket and store in REG_6 */ - BPF_SK_LOOKUP(sk_lookup_tcp), - /* bpf_tail_call() */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 3), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 0), - /* if (sk) bpf_sk_release() */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 16 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "tail_call would lead to reference leak", - .result = REJECT, -}, -{ - "reference tracking: leak checked reference over tail call", - .insns = { - BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), - /* Look up socket and store in REG_6 */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* if (!sk) goto end */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - /* bpf_tail_call() */ - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 17 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "tail_call would lead to reference leak", - .result = REJECT, -}, -{ - "reference tracking: mangle and release sock_or_null", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 5), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "R1 pointer arithmetic on sock_or_null prohibited", - .result = REJECT, -}, -{ - "reference tracking: mangle and release sock", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 5), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "R1 pointer arithmetic on sock prohibited", - .result = REJECT, -}, -{ - "reference tracking: access member", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: write to member", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_LD_IMM64(BPF_REG_2, 42), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_2, - offsetof(struct bpf_sock, 
mark)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LD_IMM64(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "cannot write into sock", - .result = REJECT, -}, -{ - "reference tracking: invalid 64-bit access of member", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "invalid sock access off=0 size=8", - .result = REJECT, -}, -{ - "reference tracking: access after release", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "!read_ok", - .result = REJECT, -}, -{ - "reference tracking: direct access for lookup", - .insns = { - /* Check that the packet is at least 64B long */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 64), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9), - /* sk = sk_lookup_tcp(ctx, skb->data, ...) */ - BPF_MOV64_IMM(BPF_REG_3, sizeof(struct bpf_sock_tuple)), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: use ptr from bpf_tcp_sock() after release", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_tcp_sock, snd_cwnd)), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "reference tracking: use ptr from bpf_sk_fullsock() after release", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", - .flags = 
F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "reference tracking: use ptr from bpf_sk_fullsock(tp) after release", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "reference tracking: use sk after bpf_sk_release(tp)", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "reference tracking: use ptr from bpf_get_listener_sock() after bpf_sk_release(sk)", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_listener_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, src_port)), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: bpf_sk_release(listen_sk)", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_listener_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "R1 must be referenced when passed to release function", -}, -{ - /* !bpf_sk_fullsock(sk) is checked but !bpf_tcp_sock(sk) is not checked */ - "reference tracking: tp->snd_cwnd after bpf_sk_fullsock(sk) and bpf_tcp_sock(sk)", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, 
BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_8, offsetof(struct bpf_tcp_sock, snd_cwnd)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", -}, -{ - "reference tracking: branch tracking valid pointer null comparison", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "reference tracking: branch tracking valid pointer value comparison", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 1234, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .errstr = "Unreleased reference", - .result = REJECT, -}, -{ - "reference tracking: bpf_sk_release(btf_tcp_sock)", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "unknown func", -}, -{ - "reference tracking: use ptr from bpf_skc_to_tcp_sock() after release", - .insns = { - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", - .result_unpriv = REJECT, - .errstr_unpriv = "unknown func", -}, -{ - "reference tracking: try to leak released ptr reg", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - 
BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_ringbuf_reserve), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_ringbuf_discard), - BPF_MOV64_IMM(BPF_REG_0, 0), - - BPF_STX_MEM(BPF_DW, BPF_REG_9, BPF_REG_8, 0), - BPF_EXIT_INSN() - }, - .fixup_map_array_48b = { 4 }, - .fixup_map_ringbuf = { 11 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R8 !read_ok" -},
From patchwork Fri Apr 21 17:42:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220523 X-Patchwork-Delegate: bpf@iogearbox.net Received: from bigfoot.. (boundsly.muster.volia.net.
[93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:03 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 16/24] selftests/bpf: verifier/regalloc converted to inline assembly Date: Fri, 21 Apr 2023 20:42:26 +0300 Message-Id: <20230421174234.2391278-17-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/regalloc automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_regalloc.c | 364 ++++++++++++++++++ .../testing/selftests/bpf/verifier/regalloc.c | 277 ------------- 3 files changed, 366 insertions(+), 277 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_regalloc.c delete mode 100644 tools/testing/selftests/bpf/verifier/regalloc.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 5941ef59ed76..f0b9b74c43d7 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -46,6 +46,7 @@ #include "verifier_raw_tp_writable.skel.h" #include "verifier_reg_equal.skel.h" #include "verifier_ref_tracking.skel.h" +#include "verifier_regalloc.skel.h" #include "verifier_ringbuf.skel.h" #include "verifier_spill_fill.skel.h" #include "verifier_stack_ptr.skel.h" @@ -134,6 +135,7 @@ void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); } void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); } void test_verifier_reg_equal(void) { RUN(verifier_reg_equal); } void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); } +void test_verifier_regalloc(void) { RUN(verifier_regalloc); } void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } diff --git a/tools/testing/selftests/bpf/progs/verifier_regalloc.c b/tools/testing/selftests/bpf/progs/verifier_regalloc.c new file mode 100644 index 000000000000..ee5ddea87c91 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_regalloc.c @@ -0,0 +1,364 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/regalloc.c */ + +#include +#include +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("tracepoint") +__description("regalloc basic") +__success __flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_basic(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r0 s> 20 goto l0_%=; \ + if r2 s< 0 goto l0_%=; \ + r7 
+= r0; \ + r7 += r2; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc negative") +__failure __msg("invalid access to map value, value_size=48 off=48 size=1") +__naked void regalloc_negative(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r0 s> 24 goto l0_%=; \ + if r2 s< 0 goto l0_%=; \ + r7 += r0; \ + r7 += r2; \ + r0 = *(u8*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc src_reg mark") +__success __flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_src_reg_mark(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r0 s> 20 goto l0_%=; \ + r3 = 0; \ + if r3 s>= r2 goto l0_%=; \ + r7 += r0; \ + r7 += r2; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc src_reg negative") +__failure __msg("invalid access to map value, value_size=48 off=44 size=8") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_src_reg_negative(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r0 s> 22 goto l0_%=; \ + r3 = 0; \ + if r3 s>= r2 goto l0_%=; \ + r7 += r0; \ + r7 += r2; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc and spill") +__success __flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_and_spill(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r0 s> 20 goto l0_%=; \ + /* r0 has upper bound that should propagate into r2 */\ + *(u64*)(r10 - 8) = r2; /* spill r2 */ \ + r0 = 0; \ + r2 = 0; /* clear r0 and r2 */\ + r3 = *(u64*)(r10 - 8); /* fill r3 */ \ + if r0 s>= r3 goto l0_%=; \ + /* r3 has lower and upper bounds */ \ + r7 += r3; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc and spill negative") +__failure __msg("invalid access to map value, value_size=48 off=48 size=8") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_and_spill_negative(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r0 s> 48 goto 
l0_%=; \ + /* r0 has upper bound that should propagate into r2 */\ + *(u64*)(r10 - 8) = r2; /* spill r2 */ \ + r0 = 0; \ + r2 = 0; /* clear r0 and r2 */\ + r3 = *(u64*)(r10 - 8); /* fill r3 */\ + if r0 s>= r3 goto l0_%=; \ + /* r3 has lower and upper bounds */ \ + r7 += r3; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc three regs") +__success __flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_three_regs(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + r4 = r2; \ + if r0 s> 12 goto l0_%=; \ + if r2 s< 0 goto l0_%=; \ + r7 += r0; \ + r7 += r2; \ + r7 += r4; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("regalloc after call") +__success __flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_after_call(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r8 = r0; \ + r9 = r0; \ + call regalloc_after_call__1; \ + if r8 s> 20 goto l0_%=; \ + if r9 s< 0 goto l0_%=; \ + r7 += r8; \ + r7 += r9; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void regalloc_after_call__1(void) +{ + asm volatile (" \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("regalloc in callee") +__success __flag(BPF_F_ANY_ALIGNMENT) +__naked void regalloc_in_callee(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r7 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r2 = r0; \ + r3 = r7; \ + call regalloc_in_callee__1; \ +l0_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void regalloc_in_callee__1(void) +{ + asm volatile (" \ + if r1 s> 20 goto l0_%=; \ + if r2 s< 0 goto l0_%=; \ + r3 += r1; \ + r3 += r2; \ + r0 = *(u64*)(r3 + 0); \ + exit; \ +l0_%=: r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("regalloc, spill, JEQ") +__success +__naked void regalloc_spill_jeq(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + *(u64*)(r10 - 8) = r0; /* spill r0 */ \ + if r0 == 0 goto l0_%=; \ +l0_%=: /* The verifier will walk the rest twice with r0 == 0 and r0 == map_value */\ + call %[bpf_get_prandom_u32]; \ + r2 = r0; \ + if r2 == 20 goto l1_%=; \ +l1_%=: /* The verifier will walk the rest two more times with r0 == 20 and r0 == unknown */\ + r3 = *(u64*)(r10 - 8); /* fill r3 with map_value */\ + if r3 == 0 goto l2_%=; /* skip ldx if map_value == NULL */\ + /* Buggy verifier will think that r3 == 20 here 
*/\ + r0 = *(u64*)(r3 + 0); /* read from map_value */\ +l2_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/regalloc.c b/tools/testing/selftests/bpf/verifier/regalloc.c deleted file mode 100644 index bb0dd89dd212..000000000000 --- a/tools/testing/selftests/bpf/verifier/regalloc.c +++ /dev/null @@ -1,277 +0,0 @@ -{ - "regalloc basic", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 4), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc negative", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 24, 4), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=48 off=48 size=1", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "regalloc src_reg mark", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 5), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc src_reg negative", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 22, 5), - 
BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=48 off=44 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc and spill", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 7), - /* r0 has upper bound that should propagate into r2 */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), /* spill r2 */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_2, 0), /* clear r0 and r2 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 */ - BPF_JMP_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2), - /* r3 has lower and upper bounds */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_3), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc and spill negative", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 48, 7), - /* r0 has upper bound that should propagate into r2 */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), /* spill r2 */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_2, 0), /* clear r0 and r2 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 */ - BPF_JMP_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2), - /* r3 has lower and upper bounds */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_3), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=48 off=48 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc three regs", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 12, 5), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_4), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 
}, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc after call", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 20, 4), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_9, 0, 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_8), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_9), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc in callee", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 20, 5), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "regalloc, spill, JEQ", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), /* spill r0 */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 0), - /* The verifier will walk the rest twice with r0 == 0 and r0 == map_value */ - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 20, 0), - /* The verifier will walk the rest two more times with r0 == 20 and r0 == unknown */ - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 with map_value */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 1), /* skip ldx if map_value == NULL */ - /* Buggy verifier will think that r3 == 20 here */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0), /* read from map_value */ - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, From patchwork Fri Apr 21 17:42:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220524 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: 
from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2DB8C7618E for ; Fri, 21 Apr 2023 17:43:37 +0000 (UTC)
From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 17/24] selftests/bpf: verifier/runtime_jit converted to inline assembly Date: Fri, 21 Apr 2023 20:42:27 +0300 Message-Id: <20230421174234.2391278-18-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net
Test verifier/runtime_jit automatically converted to use inline assembly.
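For example, the "tail_call within bounds, prog once" case, which was previously expressed as a macro-based entry whose fixup_prog1 slot patched the prog array fd into insn 1:

    {
        "runtime/jit: tail_call within bounds, prog once",
        .insns = {
            BPF_MOV64_IMM(BPF_REG_3, 0),
            BPF_LD_MAP_FD(BPF_REG_2, 0),
            BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
            BPF_MOV64_IMM(BPF_REG_0, 1),
            BPF_EXIT_INSN(),
        },
        .fixup_prog1 = { 1 },
        .result = ACCEPT,
        .retval = 42,
    },

is now a standalone naked function in verifier_runtime_jit.c, where the expected results are carried by bpf_misc.h annotations and map_prog1_socket is the BTF-defined BPF_MAP_TYPE_PROG_ARRAY declared in the same object:

    SEC("socket")
    __description("runtime/jit: tail_call within bounds, prog once")
    __success __success_unpriv __retval(42)
    __naked void call_within_bounds_prog_once(void)
    {
        asm volatile ("                                 \
            r3 = 0;                                     \
            r2 = %[map_prog1_socket] ll;                \
            call %[bpf_tail_call];                      \
            r0 = 1;                                     \
            exit;                                       \
        " :
          : __imm(bpf_tail_call),
            __imm_addr(map_prog1_socket)
          : __clobber_all);
    }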
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_runtime_jit.c | 360 ++++++++++++++++++ .../selftests/bpf/verifier/runtime_jit.c | 231 ----------- 3 files changed, 362 insertions(+), 231 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_runtime_jit.c delete mode 100644 tools/testing/selftests/bpf/verifier/runtime_jit.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index f0b9b74c43d7..072b0eb47391 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -48,6 +48,7 @@ #include "verifier_ref_tracking.skel.h" #include "verifier_regalloc.skel.h" #include "verifier_ringbuf.skel.h" +#include "verifier_runtime_jit.skel.h" #include "verifier_spill_fill.skel.h" #include "verifier_stack_ptr.skel.h" #include "verifier_uninit.skel.h" @@ -137,6 +138,7 @@ void test_verifier_reg_equal(void) { RUN(verifier_reg_equal); } void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); } void test_verifier_regalloc(void) { RUN(verifier_regalloc); } void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); } +void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } void test_verifier_uninit(void) { RUN(verifier_uninit); } diff --git a/tools/testing/selftests/bpf/progs/verifier_runtime_jit.c b/tools/testing/selftests/bpf/progs/verifier_runtime_jit.c new file mode 100644 index 000000000000..27ebfc1fd9ee --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_runtime_jit.c @@ -0,0 +1,360 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/runtime_jit.c */ + +#include +#include +#include "bpf_misc.h" + +void dummy_prog_42_socket(void); +void dummy_prog_24_socket(void); +void dummy_prog_loop1_socket(void); +void dummy_prog_loop2_socket(void); + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 4); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog1_socket SEC(".maps") = { + .values = { + [0] = (void *)&dummy_prog_42_socket, + [1] = (void *)&dummy_prog_loop1_socket, + [2] = (void *)&dummy_prog_24_socket, + }, +}; + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 8); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog2_socket SEC(".maps") = { + .values = { + [1] = (void *)&dummy_prog_loop2_socket, + [2] = (void *)&dummy_prog_24_socket, + [7] = (void *)&dummy_prog_42_socket, + }, +}; + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_42_socket(void) +{ + asm volatile ("r0 = 42; exit;"); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_24_socket(void) +{ + asm volatile ("r0 = 24; exit;"); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_loop1_socket(void) +{ + asm volatile (" \ + r3 = 1; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 41; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_loop2_socket(void) +{ + asm volatile (" \ + r3 = 1; \ + r2 = %[map_prog2_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 41; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog2_socket) + : 
__clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, prog once") +__success __success_unpriv __retval(42) +__naked void call_within_bounds_prog_once(void) +{ + asm volatile (" \ + r3 = 0; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, prog loop") +__success __success_unpriv __retval(41) +__naked void call_within_bounds_prog_loop(void) +{ + asm volatile (" \ + r3 = 1; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, no prog") +__success __success_unpriv __retval(1) +__naked void call_within_bounds_no_prog(void) +{ + asm volatile (" \ + r3 = 3; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, key 2") +__success __success_unpriv __retval(24) +__naked void call_within_bounds_key_2(void) +{ + asm volatile (" \ + r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, key 2 / key 2, first branch") +__success __success_unpriv __retval(24) +__naked void _2_key_2_first_branch(void) +{ + asm volatile (" \ + r0 = 13; \ + *(u8*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r0 = *(u8*)(r1 + %[__sk_buff_cb_0]); \ + if r0 == 13 goto l0_%=; \ + r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ + goto l1_%=; \ +l0_%=: r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ +l1_%=: call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, key 2 / key 2, second branch") +__success __success_unpriv __retval(24) +__naked void _2_key_2_second_branch(void) +{ + asm volatile (" \ + r0 = 14; \ + *(u8*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r0 = *(u8*)(r1 + %[__sk_buff_cb_0]); \ + if r0 == 13 goto l0_%=; \ + r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ + goto l1_%=; \ +l0_%=: r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ +l1_%=: call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, key 0 / key 2, first branch") +__success __success_unpriv __retval(24) +__naked void _0_key_2_first_branch(void) +{ + asm volatile (" \ + r0 = 13; \ + *(u8*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r0 = *(u8*)(r1 + %[__sk_buff_cb_0]); \ + if r0 == 13 goto l0_%=; \ + r3 = 0; \ + r2 = %[map_prog1_socket] ll; \ + goto l1_%=; \ +l0_%=: r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ +l1_%=: call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, key 0 / key 2, second branch") 
+__success __success_unpriv __retval(42) +__naked void _0_key_2_second_branch(void) +{ + asm volatile (" \ + r0 = 14; \ + *(u8*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r0 = *(u8*)(r1 + %[__sk_buff_cb_0]); \ + if r0 == 13 goto l0_%=; \ + r3 = 0; \ + r2 = %[map_prog1_socket] ll; \ + goto l1_%=; \ +l0_%=: r3 = 2; \ + r2 = %[map_prog1_socket] ll; \ +l1_%=: call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, different maps, first branch") +__success __failure_unpriv __msg_unpriv("tail_call abusing map_ptr") +__retval(1) +__naked void bounds_different_maps_first_branch(void) +{ + asm volatile (" \ + r0 = 13; \ + *(u8*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r0 = *(u8*)(r1 + %[__sk_buff_cb_0]); \ + if r0 == 13 goto l0_%=; \ + r3 = 0; \ + r2 = %[map_prog1_socket] ll; \ + goto l1_%=; \ +l0_%=: r3 = 0; \ + r2 = %[map_prog2_socket] ll; \ +l1_%=: call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_addr(map_prog2_socket), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call within bounds, different maps, second branch") +__success __failure_unpriv __msg_unpriv("tail_call abusing map_ptr") +__retval(42) +__naked void bounds_different_maps_second_branch(void) +{ + asm volatile (" \ + r0 = 14; \ + *(u8*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r0 = *(u8*)(r1 + %[__sk_buff_cb_0]); \ + if r0 == 13 goto l0_%=; \ + r3 = 0; \ + r2 = %[map_prog1_socket] ll; \ + goto l1_%=; \ +l0_%=: r3 = 0; \ + r2 = %[map_prog2_socket] ll; \ +l1_%=: call %[bpf_tail_call]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket), + __imm_addr(map_prog2_socket), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: tail_call out of bounds") +__success __success_unpriv __retval(2) +__naked void tail_call_out_of_bounds(void) +{ + asm volatile (" \ + r3 = 256; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 2; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: pass negative index to tail_call") +__success __success_unpriv __retval(2) +__naked void negative_index_to_tail_call(void) +{ + asm volatile (" \ + r3 = -1; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 2; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("runtime/jit: pass > 32bit index to tail_call") +__success __success_unpriv __retval(42) +/* Verifier rewrite for unpriv skips tail call here. 
*/ +__retval_unpriv(2) +__naked void _32bit_index_to_tail_call(void) +{ + asm volatile (" \ + r3 = 0x100000000 ll; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 2; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/runtime_jit.c b/tools/testing/selftests/bpf/verifier/runtime_jit.c deleted file mode 100644 index 94c399d1faca..000000000000 --- a/tools/testing/selftests/bpf/verifier/runtime_jit.c +++ /dev/null @@ -1,231 +0,0 @@ -{ - "runtime/jit: tail_call within bounds, prog once", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .result = ACCEPT, - .retval = 42, -}, -{ - "runtime/jit: tail_call within bounds, prog loop", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .result = ACCEPT, - .retval = 41, -}, -{ - "runtime/jit: tail_call within bounds, no prog", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, 3), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "runtime/jit: tail_call within bounds, key 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .result = ACCEPT, - .retval = 24, -}, -{ - "runtime/jit: tail_call within bounds, key 2 / key 2, first branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 13), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4), - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5, 9 }, - .result = ACCEPT, - .retval = 24, -}, -{ - "runtime/jit: tail_call within bounds, key 2 / key 2, second branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 14), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4), - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5, 9 }, - .result = ACCEPT, - .retval = 24, -}, -{ - "runtime/jit: tail_call within bounds, key 0 / key 2, first branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 13), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - 
BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5, 9 }, - .result = ACCEPT, - .retval = 24, -}, -{ - "runtime/jit: tail_call within bounds, key 0 / key 2, second branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 14), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 2), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5, 9 }, - .result = ACCEPT, - .retval = 42, -}, -{ - "runtime/jit: tail_call within bounds, different maps, first branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 13), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5 }, - .fixup_prog2 = { 9 }, - .result_unpriv = REJECT, - .errstr_unpriv = "tail_call abusing map_ptr", - .result = ACCEPT, - .retval = 1, -}, -{ - "runtime/jit: tail_call within bounds, different maps, second branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 14), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 5 }, - .fixup_prog2 = { 9 }, - .result_unpriv = REJECT, - .errstr_unpriv = "tail_call abusing map_ptr", - .result = ACCEPT, - .retval = 42, -}, -{ - "runtime/jit: tail_call out of bounds", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, 256), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 2), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .result = ACCEPT, - .retval = 2, -}, -{ - "runtime/jit: pass negative index to tail_call", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, -1), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 2), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .result = ACCEPT, - .retval = 2, -}, -{ - "runtime/jit: pass > 32bit index to tail_call", - .insns = { - BPF_LD_IMM64(BPF_REG_3, 0x100000000ULL), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 2), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 2 }, - .result = ACCEPT, - .retval = 42, - /* Verifier rewrite for unpriv skips tail call here. 
*/ - .retval_unpriv = 2, -},
From patchwork Fri Apr 21 17:42:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220525 X-Patchwork-Delegate: bpf@iogearbox.net Received: from bigfoot.. (boundsly.muster.volia.net.
[93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:06 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 18/24] selftests/bpf: verifier/search_pruning converted to inline assembly Date: Fri, 21 Apr 2023 20:42:28 +0300 Message-Id: <20230421174234.2391278-19-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/search_pruning automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_search_pruning.c | 339 ++++++++++++++++++ .../selftests/bpf/verifier/search_pruning.c | 266 -------------- 3 files changed, 341 insertions(+), 266 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_search_pruning.c delete mode 100644 tools/testing/selftests/bpf/verifier/search_pruning.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 072b0eb47391..1ef44e699e9c 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -49,6 +49,7 @@ #include "verifier_regalloc.skel.h" #include "verifier_ringbuf.skel.h" #include "verifier_runtime_jit.skel.h" +#include "verifier_search_pruning.skel.h" #include "verifier_spill_fill.skel.h" #include "verifier_stack_ptr.skel.h" #include "verifier_uninit.skel.h" @@ -139,6 +140,7 @@ void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); } void test_verifier_regalloc(void) { RUN(verifier_regalloc); } void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); } void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); } +void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } void test_verifier_uninit(void) { RUN(verifier_uninit); } diff --git a/tools/testing/selftests/bpf/progs/verifier_search_pruning.c b/tools/testing/selftests/bpf/progs/verifier_search_pruning.c new file mode 100644 index 000000000000..5a14498d352f --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_search_pruning.c @@ -0,0 +1,339 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/search_pruning.c */ + +#include +#include +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("pointer/scalar confusion in state equality check (way 1)") +__success __failure_unpriv __msg_unpriv("R0 leaks addr as return value") +__retval(POINTER_VALUE) +__naked void state_equality_check_way_1(void) +{ + asm 
volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 = *(u64*)(r0 + 0); \ + goto l1_%=; \ +l0_%=: r0 = r10; \ +l1_%=: goto l2_%=; \ +l2_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("pointer/scalar confusion in state equality check (way 2)") +__success __failure_unpriv __msg_unpriv("R0 leaks addr as return value") +__retval(POINTER_VALUE) +__naked void state_equality_check_way_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + r0 = r10; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r0 + 0); \ +l1_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("lwt_in") +__description("liveness pruning and write screening") +__failure __msg("R0 !read_ok") +__naked void liveness_pruning_and_write_screening(void) +{ + asm volatile (" \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* branch conditions teach us nothing about R2 */\ + if r2 >= 0 goto l0_%=; \ + r0 = 0; \ +l0_%=: if r2 >= 0 goto l1_%=; \ + r0 = 0; \ +l1_%=: exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("varlen_map_value_access pruning") +__failure __msg("R0 unbounded memory access") +__failure_unpriv __msg_unpriv("R0 leaks addr") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void varlen_map_value_access_pruning(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r0 + 0); \ + w2 = %[max_entries]; \ + if r2 s> r1 goto l1_%=; \ + w1 = 0; \ +l1_%=: w1 <<= 2; \ + r0 += r1; \ + goto l2_%=; \ +l2_%=: r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(max_entries, MAX_ENTRIES), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("search pruning: all branches should be verified (nop operation)") +__failure __msg("R6 invalid mem access 'scalar'") +__naked void should_be_verified_nop_operation(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r3 = *(u64*)(r0 + 0); \ + if r3 == 0xbeef goto l1_%=; \ + r4 = 0; \ + goto l2_%=; \ +l1_%=: r4 = 1; \ +l2_%=: *(u64*)(r10 - 16) = r4; \ + call %[bpf_ktime_get_ns]; \ + r5 = *(u64*)(r10 - 16); \ + if r5 == 0 goto l0_%=; \ + r6 = 0; \ + r1 = 0xdead; \ + *(u64*)(r6 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("search pruning: all branches should be verified (invalid stack access)") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid read from stack off -16+0 size 8") +__retval(0) +__naked void be_verified_invalid_stack_access(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r3 = *(u64*)(r0 + 0); \ + r4 = 0; \ + if r3 == 0xbeef goto 
l1_%=; \ + *(u64*)(r10 - 16) = r4; \ + goto l2_%=; \ +l1_%=: *(u64*)(r10 - 24) = r4; \ +l2_%=: call %[bpf_ktime_get_ns]; \ + r5 = *(u64*)(r10 - 16); \ +l0_%=: exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tracepoint") +__description("precision tracking for u32 spill/fill") +__failure __msg("R0 min value is outside of the allowed memory range") +__naked void tracking_for_u32_spill_fill(void) +{ + asm volatile (" \ + r7 = r1; \ + call %[bpf_get_prandom_u32]; \ + w6 = 32; \ + if r0 == 0 goto l0_%=; \ + w6 = 4; \ +l0_%=: /* Additional insns to introduce a pruning point. */\ + call %[bpf_get_prandom_u32]; \ + r3 = 0; \ + r3 = 0; \ + if r0 == 0 goto l1_%=; \ + r3 = 0; \ +l1_%=: /* u32 spill/fill */ \ + *(u32*)(r10 - 8) = r6; \ + r8 = *(u32*)(r10 - 8); \ + /* out-of-bound map value access for r6=32 */ \ + r1 = 0; \ + *(u64*)(r10 - 16) = r1; \ + r2 = r10; \ + r2 += -16; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r0 += r8; \ + r1 = *(u32*)(r0 + 0); \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tracepoint") +__description("precision tracking for u32 spills, u64 fill") +__failure __msg("div by zero") +__naked void for_u32_spills_u64_fill(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r6 = r0; \ + w7 = 0xffffffff; \ + /* Additional insns to introduce a pruning point. */\ + r3 = 1; \ + r3 = 1; \ + r3 = 1; \ + r3 = 1; \ + call %[bpf_get_prandom_u32]; \ + if r0 == 0 goto l0_%=; \ + r3 = 1; \ +l0_%=: w3 /= 0; \ + /* u32 spills, u64 fill */ \ + *(u32*)(r10 - 4) = r6; \ + *(u32*)(r10 - 8) = r7; \ + r8 = *(u64*)(r10 - 8); \ + /* if r8 != X goto pc+1 r8 known in fallthrough branch */\ + if r8 != 0xffffffff goto l1_%=; \ + r3 = 1; \ +l1_%=: /* if r8 == X goto pc+1 condition always true on first\ + * traversal, so starts backtracking to mark r8 as requiring\ + * precision. r7 marked as needing precision. r6 not marked\ + * since it's not tracked. \ + */ \ + if r8 == 0xffffffff goto l2_%=; \ + /* fails if r8 correctly marked unknown after fill. */\ + w3 /= 0; \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("allocated_stack") +__success __msg("processed 15 insns") +__success_unpriv __msg_unpriv("") __log_level(1) __retval(0) +__naked void allocated_stack(void) +{ + asm volatile (" \ + r6 = r1; \ + call %[bpf_get_prandom_u32]; \ + r7 = r0; \ + if r0 == 0 goto l0_%=; \ + r0 = 0; \ + *(u64*)(r10 - 8) = r6; \ + r6 = *(u64*)(r10 - 8); \ + *(u8*)(r10 - 9) = r7; \ + r7 = *(u8*)(r10 - 9); \ +l0_%=: if r0 != 0 goto l1_%=; \ +l1_%=: if r0 != 0 goto l2_%=; \ +l2_%=: if r0 != 0 goto l3_%=; \ +l3_%=: if r0 != 0 goto l4_%=; \ +l4_%=: exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +/* The test performs a conditional 64-bit write to a stack location + * fp[-8], this is followed by an unconditional 8-bit write to fp[-8], + * then data is read from fp[-8]. This sequence is unsafe. + * + * The test would be mistakenly marked as safe w/o dst register parent + * preservation in verifier.c:copy_register_state() function. + * + * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the + * checkpoint state after conditional 64-bit assignment. 
+ */ + +SEC("socket") +__description("write tracking and register parent chain bug") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid read from stack off -8+1 size 8") +__retval(0) __flag(BPF_F_TEST_STATE_FREQ) +__naked void and_register_parent_chain_bug(void) +{ + asm volatile (" \ + /* r6 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r6 = r0; \ + /* r0 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + /* if r0 > r6 goto +1 */ \ + if r0 > r6 goto l0_%=; \ + /* *(u64 *)(r10 - 8) = 0xdeadbeef */ \ + r0 = 0xdeadbeef; \ + *(u64*)(r10 - 8) = r0; \ +l0_%=: r1 = 42; \ + *(u8*)(r10 - 8) = r1; \ + r2 = *(u64*)(r10 - 8); \ + /* exit(0) */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c deleted file mode 100644 index 745d6b5842fd..000000000000 --- a/tools/testing/selftests/bpf/verifier/search_pruning.c +++ /dev/null @@ -1,266 +0,0 @@ -{ - "pointer/scalar confusion in state equality check (way 1)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), - BPF_JMP_A(1), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), - BPF_JMP_A(0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .retval = POINTER_VALUE, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 leaks addr as return value" -}, -{ - "pointer/scalar confusion in state equality check (way 2)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), - BPF_JMP_A(1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .retval = POINTER_VALUE, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 leaks addr as return value" -}, -{ - "liveness pruning and write screening", - .insns = { - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* branch conditions teach us nothing about R2 */ - BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R0 !read_ok", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "varlen_map_value_access pruning", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES), - BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_JMP_IMM(BPF_JA, 0, 0, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - 
.fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .errstr = "R0 unbounded memory access", - .result_unpriv = REJECT, - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "search pruning: all branches should be verified (nop operation)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_A(1), - BPF_MOV64_IMM(BPF_REG_4, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -16), - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -16), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_5, 0, 2), - BPF_MOV64_IMM(BPF_REG_6, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_6, 0, 0xdead), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R6 invalid mem access 'scalar'", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "search pruning: all branches should be verified (invalid stack access)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -16), - BPF_JMP_A(1), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -24), - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -16), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "invalid read from stack off -16+0 size 8", - .result_unpriv = REJECT, - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, -{ - "precision tracking for u32 spill/fill", - .insns = { - BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV32_IMM(BPF_REG_6, 32), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_MOV32_IMM(BPF_REG_6, 4), - /* Additional insns to introduce a pruning point. */ - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - /* u32 spill/fill */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8), - /* out-of-bound map value access for r6=32 */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 15 }, - .result = REJECT, - .errstr = "R0 min value is outside of the allowed memory range", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "precision tracking for u32 spills, u64 fill", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV32_IMM(BPF_REG_7, 0xffffffff), - /* Additional insns to introduce a pruning point. 
*/ - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0), - /* u32 spills, u64 fill */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4), - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, -8), - /* if r8 != X goto pc+1 r8 known in fallthrough branch */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1), - BPF_MOV64_IMM(BPF_REG_3, 1), - /* if r8 == X goto pc+1 condition always true on first - * traversal, so starts backtracking to mark r8 as requiring - * precision. r7 marked as needing precision. r6 not marked - * since it's not tracked. - */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1), - /* fails if r8 correctly marked unknown after fill. */ - BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "div by zero", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "allocated_stack", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, -8), - BPF_STX_MEM(BPF_B, BPF_REG_10, BPF_REG_7, -9), - BPF_LDX_MEM(BPF_B, BPF_REG_7, BPF_REG_10, -9), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = ACCEPT, - .insn_processed = 15, -}, -/* The test performs a conditional 64-bit write to a stack location - * fp[-8], this is followed by an unconditional 8-bit write to fp[-8], - * then data is read from fp[-8]. This sequence is unsafe. - * - * The test would be mistakenly marked as safe w/o dst register parent - * preservation in verifier.c:copy_register_state() function. - * - * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the - * checkpoint state after conditional 64-bit assignment. 
- */ -{ - "write tracking and register parent chain bug", - .insns = { - /* r6 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* r0 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - /* if r0 > r6 goto +1 */ - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_6, 1), - /* *(u64 *)(r10 - 8) = 0xdeadbeef */ - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0xdeadbeef), - /* r1 = 42 */ - BPF_MOV64_IMM(BPF_REG_1, 42), - /* *(u8 *)(r10 - 8) = r1 */ - BPF_STX_MEM(BPF_B, BPF_REG_FP, BPF_REG_1, -8), - /* r2 = *(u64 *)(r10 - 8) */ - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_FP, -8), - /* exit(0) */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .flags = BPF_F_TEST_STATE_FREQ, - .errstr_unpriv = "invalid read from stack off -8+1 size 8", - .result_unpriv = REJECT, - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, From patchwork Fri Apr 21 17:42:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220530 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F87CC77B7E for ; Fri, 21 Apr 2023 17:43:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233423AbjDURnt (ORCPT ); Fri, 21 Apr 2023 13:43:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42934 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232546AbjDURna (ORCPT ); Fri, 21 Apr 2023 13:43:30 -0400 Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com [IPv6:2a00:1450:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3568A976A for ; Fri, 21 Apr 2023 10:43:10 -0700 (PDT) Received: by mail-wr1-x42a.google.com with SMTP id ffacd0b85a97d-2f4c431f69cso1263496f8f.0 for ; Fri, 21 Apr 2023 10:43:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098988; x=1684690988; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ydckA1JcBTo2MUAJ0/P0uEz3gxR1TZN7874ZRgf9Fb4=; b=LJEbhdRxyZT7kXX546bFrJPIVLvBn7HS+WZuJedp35F3EFuwWmhLp+qfYBXvTG9pBY w3MmKIw4L7rqyDkTThFzzVqq5t+2+3OhTzQVIl9f+yvREzBKXuwCeCxFo2DgjdzAh+zb Xk9thAdHFgTHaYeCGTDf3xdhZfCRvgTGJEaIeRGMPddQ5/QrQbwwhGu7tUTlvZPEZvLP t3JsI5pFV4Uxk2LUr4w21eLLDHfAbpohpiOvrZ2oejV9LmJ+8DdRQVvbdfPIqJL4UPC4 DvOdr6iCcjaSu2iuQS5NhsqWdwi0z214HUpf8mtFIUWfLunrPQM5QBMI5v7k6Q607Xt4 dKAA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098988; x=1684690988; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ydckA1JcBTo2MUAJ0/P0uEz3gxR1TZN7874ZRgf9Fb4=; b=Tvkbr4Sw6pUorRdjv3QDMiOZshpq9vwV/9tym234Oe6dh5kqA/6ZsEr74fOo+xllC+ cOkC+gxmeJyriR2m50M3kJBWOWBrx6Zs98ryKWun/7KfpNtiCMU6cFIUvqAcjHc4c6OO keGvlBbYp1jyFedoynw9JGEtZLkpYGTAoNAFzOOdO/KBbzXjZ9hCWs66cTrbTzrhprRc 6kq0U5Getp+qjXUI+XWGHykV0zkmL8zaCPd1cmy2/vw+YnPU6Y1C1HOyv1/hcoDuiSd3 QQPvejVzDjiB+ZyFis+LhZC28kEUo8Jlk8WQMYYuLJXj8FgO01r0t8sakiHcB6GGdndO P3DQ== X-Gm-Message-State: AAQBX9eLjJ0NCBOwxus2om2Rd29jx7TqZRbUEvo07aYvh8gL1FiqZAL9 
ooJk2qCxMRfXjR7jyq+S8Gdpv0nMTtoVYw== X-Google-Smtp-Source: AKy350bJFeUTY25kfDhEJgXGC47UAS0GV1OoxMQFQRH8VvdCtVBK/Wm5xe2M9GECOo55dI92P4MTbg== X-Received: by 2002:a05:6000:1b85:b0:2fb:2a43:4a97 with SMTP id r5-20020a0560001b8500b002fb2a434a97mr4277808wru.39.1682098987815; Fri, 21 Apr 2023 10:43:07 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:07 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 19/24] selftests/bpf: verifier/sock converted to inline assembly Date: Fri, 21 Apr 2023 20:42:29 +0300 Message-Id: <20230421174234.2391278-20-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/sock automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_sock.c | 980 ++++++++++++++++++ tools/testing/selftests/bpf/verifier/sock.c | 706 ------------- 3 files changed, 982 insertions(+), 706 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_sock.c delete mode 100644 tools/testing/selftests/bpf/verifier/sock.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 1ef44e699e9c..60bcff62d968 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -50,6 +50,7 @@ #include "verifier_ringbuf.skel.h" #include "verifier_runtime_jit.skel.h" #include "verifier_search_pruning.skel.h" +#include "verifier_sock.skel.h" #include "verifier_spill_fill.skel.h" #include "verifier_stack_ptr.skel.h" #include "verifier_uninit.skel.h" @@ -141,6 +142,7 @@ void test_verifier_regalloc(void) { RUN(verifier_regalloc); } void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); } void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); } void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); } +void test_verifier_sock(void) { RUN(verifier_sock); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } void test_verifier_uninit(void) { RUN(verifier_uninit); } diff --git a/tools/testing/selftests/bpf/progs/verifier_sock.c b/tools/testing/selftests/bpf/progs/verifier_sock.c new file mode 100644 index 000000000000..ee76b51005ab --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_sock.c @@ -0,0 +1,980 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/sock.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER)) +#define offsetofend(TYPE, MEMBER) \ + (offsetof(TYPE, MEMBER) + sizeof_field(TYPE, MEMBER)) + +struct { + __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, __u64); +} map_reuseport_array SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKHASH); + __uint(max_entries,
1); + __type(key, int); + __type(value, int); +} map_sockhash SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); +} map_sockmap SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_XSKMAP); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); +} map_xskmap SEC(".maps"); + +struct val { + int cnt; + struct bpf_spin_lock l; +}; + +struct { + __uint(type, BPF_MAP_TYPE_SK_STORAGE); + __uint(max_entries, 0); + __type(key, int); + __type(value, struct val); + __uint(map_flags, BPF_F_NO_PREALLOC); +} sk_storage_map SEC(".maps"); + +SEC("cgroup/skb") +__description("skb->sk: no NULL check") +__failure __msg("invalid mem access 'sock_common_or_null'") +__failure_unpriv +__naked void skb_sk_no_null_check(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + r0 = *(u32*)(r1 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("skb->sk: sk->family [non fullsock field]") +__success __success_unpriv __retval(0) +__naked void sk_family_non_fullsock_field_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: r0 = *(u32*)(r1 + %[bpf_sock_family]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_family, offsetof(struct bpf_sock, family)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("skb->sk: sk->type [fullsock field]") +__failure __msg("invalid sock_common access") +__failure_unpriv +__naked void sk_sk_type_fullsock_field_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: r0 = *(u32*)(r1 + %[bpf_sock_type]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_sk_fullsock(skb->sk): no !skb->sk check") +__failure __msg("type=sock_common_or_null expected=sock_common") +__failure_unpriv +__naked void sk_no_skb_sk_check_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + call %[bpf_sk_fullsock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): no NULL check on ret") +__failure __msg("invalid mem access 'sock_or_null'") +__failure_unpriv +__naked void no_null_check_on_ret_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + r0 = *(u32*)(r0 + %[bpf_sock_type]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->type [fullsock field]") +__success __success_unpriv __retval(0) +__naked void sk_sk_type_fullsock_field_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u32*)(r0 + %[bpf_sock_type]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + 
__imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->family [non fullsock field]") +__success __success_unpriv __retval(0) +__naked void sk_family_non_fullsock_field_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r0 = *(u32*)(r0 + %[bpf_sock_family]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_family, offsetof(struct bpf_sock, family)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->state [narrow load]") +__success __success_unpriv __retval(0) +__naked void sk_sk_state_narrow_load(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u8*)(r0 + %[bpf_sock_state]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_state, offsetof(struct bpf_sock, state)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)") +__success __success_unpriv __retval(0) +__naked void port_word_load_backward_compatibility(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u32*)(r0 + %[bpf_sock_dst_port]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_dst_port, offsetof(struct bpf_sock, dst_port)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->dst_port [half load]") +__success __success_unpriv __retval(0) +__naked void sk_dst_port_half_load(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u16*)(r0 + %[bpf_sock_dst_port]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_dst_port, offsetof(struct bpf_sock, dst_port)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)") +__failure __msg("invalid sock access") +__failure_unpriv +__naked void dst_port_half_load_invalid_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u16*)(r0 + %[__imm_0]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__imm_0, offsetof(struct bpf_sock, dst_port) + 2), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->dst_port [byte load]") +__success __success_unpriv __retval(0) +__naked void sk_dst_port_byte_load(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 
0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r2 = *(u8*)(r0 + %[bpf_sock_dst_port]); \ + r2 = *(u8*)(r0 + %[__imm_0]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__imm_0, offsetof(struct bpf_sock, dst_port) + 1), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_dst_port, offsetof(struct bpf_sock, dst_port)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)") +__failure __msg("invalid sock access") +__failure_unpriv +__naked void dst_port_byte_load_invalid(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u8*)(r0 + %[__imm_0]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__imm_0, offsetof(struct bpf_sock, dst_port) + 2), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)") +__failure __msg("invalid sock access") +__failure_unpriv +__naked void dst_port_half_load_invalid_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u16*)(r0 + %[bpf_sock_dst_port__end]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_dst_port__end, offsetofend(struct bpf_sock, dst_port)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->dst_ip6 [load 2nd byte]") +__success __success_unpriv __retval(0) +__naked void dst_ip6_load_2nd_byte(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u8*)(r0 + %[__imm_0]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__imm_0, offsetof(struct bpf_sock, dst_ip6[0]) + 1), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->type [narrow load]") +__success __success_unpriv __retval(0) +__naked void sk_sk_type_narrow_load(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u8*)(r0 + %[bpf_sock_type]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): sk->protocol [narrow load]") +__success __success_unpriv __retval(0) +__naked void sk_sk_protocol_narrow_load(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u8*)(r0 + %[bpf_sock_protocol]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, 
sk)), + __imm_const(bpf_sock_protocol, offsetof(struct bpf_sock, protocol)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("sk_fullsock(skb->sk): beyond last field") +__failure __msg("invalid sock access") +__failure_unpriv +__naked void skb_sk_beyond_last_field_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u32*)(r0 + %[bpf_sock_rx_queue_mapping__end]);\ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_sock_rx_queue_mapping__end, offsetofend(struct bpf_sock, rx_queue_mapping)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_tcp_sock(skb->sk): no !skb->sk check") +__failure __msg("type=sock_common_or_null expected=sock_common") +__failure_unpriv +__naked void sk_no_skb_sk_check_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + call %[bpf_tcp_sock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_tcp_sock(skb->sk): no NULL check on ret") +__failure __msg("invalid mem access 'tcp_sock_or_null'") +__failure_unpriv +__naked void no_null_check_on_ret_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_tcp_sock]; \ + r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_tcp_sock(skb->sk): tp->snd_cwnd") +__success __success_unpriv __retval(0) +__naked void skb_sk_tp_snd_cwnd_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_tcp_sock(skb->sk): tp->bytes_acked") +__success __success_unpriv __retval(0) +__naked void skb_sk_tp_bytes_acked(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r0 = *(u64*)(r0 + %[bpf_tcp_sock_bytes_acked]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_tcp_sock_bytes_acked, offsetof(struct bpf_tcp_sock, bytes_acked)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_tcp_sock(skb->sk): beyond last field") +__failure __msg("invalid tcp_sock access") +__failure_unpriv +__naked void skb_sk_beyond_last_field_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r0 = *(u64*)(r0 + %[bpf_tcp_sock_bytes_acked__end]);\ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, 
offsetof(struct __sk_buff, sk)), + __imm_const(bpf_tcp_sock_bytes_acked__end, offsetofend(struct bpf_tcp_sock, bytes_acked)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("bpf_tcp_sock(bpf_sk_fullsock(skb->sk)): tp->snd_cwnd") +__success __success_unpriv __retval(0) +__naked void skb_sk_tp_snd_cwnd_2(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_tcp_sock]; \ + if r0 != 0 goto l2_%=; \ + exit; \ +l2_%=: r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)), + __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)) + : __clobber_all); +} + +SEC("tc") +__description("bpf_sk_release(skb->sk)") +__failure __msg("R1 must be referenced when passed to release function") +__naked void bpf_sk_release_skb_sk(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 == 0 goto l0_%=; \ + call %[bpf_sk_release]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_release), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("tc") +__description("bpf_sk_release(bpf_sk_fullsock(skb->sk))") +__failure __msg("R1 must be referenced when passed to release function") +__naked void bpf_sk_fullsock_skb_sk(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_sk_release]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_release), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("tc") +__description("bpf_sk_release(bpf_tcp_sock(skb->sk))") +__failure __msg("R1 must be referenced when passed to release function") +__naked void bpf_tcp_sock_skb_sk(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_tcp_sock]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r1 = r0; \ + call %[bpf_sk_release]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_sk_release), + __imm(bpf_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("tc") +__description("sk_storage_get(map, skb->sk, NULL, 0): value == NULL") +__success __retval(0) +__naked void sk_null_0_value_null(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r4 = 0; \ + r3 = 0; \ + r2 = r0; \ + r1 = %[sk_storage_map] ll; \ + call %[bpf_sk_storage_get]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_storage_get), + __imm_addr(sk_storage_map), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("tc") +__description("sk_storage_get(map, skb->sk, 1, 1): value == 1") +__failure __msg("R3 type=scalar expected=fp") +__naked void sk_1_1_value_1(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r4 = 1; \ + r3 = 1; \ + r2 = r0; \ + r1 = %[sk_storage_map] ll; \ + call 
%[bpf_sk_storage_get]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_storage_get), + __imm_addr(sk_storage_map), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("tc") +__description("sk_storage_get(map, skb->sk, &stack_value, 1): stack_value") +__success __retval(0) +__naked void stack_value_1_stack_value(void) +{ + asm volatile (" \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: call %[bpf_sk_fullsock]; \ + if r0 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r4 = 1; \ + r3 = r10; \ + r3 += -8; \ + r2 = r0; \ + r1 = %[sk_storage_map] ll; \ + call %[bpf_sk_storage_get]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_fullsock), + __imm(bpf_sk_storage_get), + __imm_addr(sk_storage_map), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +SEC("tc") +__description("bpf_map_lookup_elem(smap, &key)") +__failure __msg("cannot pass map_type 24 into func bpf_map_lookup_elem") +__naked void map_lookup_elem_smap_key(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[sk_storage_map] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(sk_storage_map) + : __clobber_all); +} + +SEC("xdp") +__description("bpf_map_lookup_elem(xskmap, &key); xs->queue_id") +__success __retval(0) +__naked void xskmap_key_xs_queue_id(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_xskmap] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r0 = *(u32*)(r0 + %[bpf_xdp_sock_queue_id]); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_xskmap), + __imm_const(bpf_xdp_sock_queue_id, offsetof(struct bpf_xdp_sock, queue_id)) + : __clobber_all); +} + +SEC("sk_skb") +__description("bpf_map_lookup_elem(sockmap, &key)") +__failure __msg("Unreleased reference id=2 alloc_insn=6") +__naked void map_lookup_elem_sockmap_key(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_sockmap] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_sockmap) + : __clobber_all); +} + +SEC("sk_skb") +__description("bpf_map_lookup_elem(sockhash, &key)") +__failure __msg("Unreleased reference id=2 alloc_insn=6") +__naked void map_lookup_elem_sockhash_key(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_sockhash] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_sockhash) + : __clobber_all); +} + +SEC("sk_skb") +__description("bpf_map_lookup_elem(sockmap, &key); sk->type [fullsock field]; bpf_sk_release(sk)") +__success +__naked void field_bpf_sk_release_sk_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_sockmap] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = r0; \ + r0 = *(u32*)(r0 + %[bpf_sock_type]); \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_sk_release), + __imm_addr(map_sockmap), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +SEC("sk_skb") +__description("bpf_map_lookup_elem(sockhash, &key); sk->type [fullsock 
field]; bpf_sk_release(sk)") +__success +__naked void field_bpf_sk_release_sk_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_sockhash] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = r0; \ + r0 = *(u32*)(r0 + %[bpf_sock_type]); \ + call %[bpf_sk_release]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_sk_release), + __imm_addr(map_sockhash), + __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("bpf_sk_select_reuseport(ctx, reuseport_array, &key, flags)") +__success +__naked void ctx_reuseport_array_key_flags(void) +{ + asm volatile (" \ + r4 = 0; \ + r2 = 0; \ + *(u32*)(r10 - 4) = r2; \ + r3 = r10; \ + r3 += -4; \ + r2 = %[map_reuseport_array] ll; \ + call %[bpf_sk_select_reuseport]; \ + exit; \ +" : + : __imm(bpf_sk_select_reuseport), + __imm_addr(map_reuseport_array) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("bpf_sk_select_reuseport(ctx, sockmap, &key, flags)") +__success +__naked void reuseport_ctx_sockmap_key_flags(void) +{ + asm volatile (" \ + r4 = 0; \ + r2 = 0; \ + *(u32*)(r10 - 4) = r2; \ + r3 = r10; \ + r3 += -4; \ + r2 = %[map_sockmap] ll; \ + call %[bpf_sk_select_reuseport]; \ + exit; \ +" : + : __imm(bpf_sk_select_reuseport), + __imm_addr(map_sockmap) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("bpf_sk_select_reuseport(ctx, sockhash, &key, flags)") +__success +__naked void reuseport_ctx_sockhash_key_flags(void) +{ + asm volatile (" \ + r4 = 0; \ + r2 = 0; \ + *(u32*)(r10 - 4) = r2; \ + r3 = r10; \ + r3 += -4; \ + r2 = %[map_sockmap] ll; \ + call %[bpf_sk_select_reuseport]; \ + exit; \ +" : + : __imm(bpf_sk_select_reuseport), + __imm_addr(map_sockmap) + : __clobber_all); +} + +SEC("tc") +__description("mark null check on return value of bpf_skc_to helpers") +__failure __msg("invalid mem access") +__naked void of_bpf_skc_to_helpers(void) +{ + asm volatile (" \ + r1 = *(u64*)(r1 + %[__sk_buff_sk]); \ + if r1 != 0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: r6 = r1; \ + call %[bpf_skc_to_tcp_sock]; \ + r7 = r0; \ + r1 = r6; \ + call %[bpf_skc_to_tcp_request_sock]; \ + r8 = r0; \ + if r8 != 0 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = *(u8*)(r7 + 0); \ + exit; \ +" : + : __imm(bpf_skc_to_tcp_request_sock), + __imm(bpf_skc_to_tcp_sock), + __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c deleted file mode 100644 index 108dd3ee1edd..000000000000 --- a/tools/testing/selftests/bpf/verifier/sock.c +++ /dev/null @@ -1,706 +0,0 @@ -{ - "skb->sk: no NULL check", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid mem access 'sock_common_or_null'", -}, -{ - "skb->sk: sk->family [non fullsock field]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, offsetof(struct bpf_sock, family)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, 
-{ - "skb->sk: sk->type [fullsock field]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, offsetof(struct bpf_sock, type)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid sock_common access", -}, -{ - "bpf_sk_fullsock(skb->sk): no !skb->sk check", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "type=sock_common_or_null expected=sock_common", -}, -{ - "sk_fullsock(skb->sk): no NULL check on ret", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid mem access 'sock_or_null'", -}, -{ - "sk_fullsock(skb->sk): sk->type [fullsock field]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->family [non fullsock field]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, family)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->state [narrow load]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, state)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = 
BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->dst_port [half load]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid sock access", -}, -{ - "sk_fullsock(skb->sk): sk->dst_port [byte load]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)), - BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid sock access", -}, -{ - "sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid sock access", -}, -{ - "sk_fullsock(skb->sk): sk->dst_ip6 [load 2nd byte]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct 
bpf_sock, dst_ip6[0]) + 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->type [narrow load]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): sk->protocol [narrow load]", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, protocol)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "sk_fullsock(skb->sk): beyond last field", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, rx_queue_mapping)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid sock access", -}, -{ - "bpf_tcp_sock(skb->sk): no !skb->sk check", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "type=sock_common_or_null expected=sock_common", -}, -{ - "bpf_tcp_sock(skb->sk): no NULL check on ret", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid mem access 'tcp_sock_or_null'", -}, -{ - "bpf_tcp_sock(skb->sk): tp->snd_cwnd", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "bpf_tcp_sock(skb->sk): tp->bytes_acked", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - 
BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, bytes_acked)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "bpf_tcp_sock(skb->sk): beyond last field", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_tcp_sock, bytes_acked)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = REJECT, - .errstr = "invalid tcp_sock access", -}, -{ - "bpf_tcp_sock(bpf_sk_fullsock(skb->sk)): tp->snd_cwnd", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .result = ACCEPT, -}, -{ - "bpf_sk_release(skb->sk)", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "R1 must be referenced when passed to release function", -}, -{ - "bpf_sk_release(bpf_sk_fullsock(skb->sk))", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "R1 must be referenced when passed to release function", -}, -{ - "bpf_sk_release(bpf_tcp_sock(skb->sk))", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_tcp_sock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "R1 must be referenced when passed to release function", -}, -{ - "sk_storage_get(map, skb->sk, NULL, 0): value == NULL", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_4, 0), - 
BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_storage_get), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_sk_storage_map = { 11 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "sk_storage_get(map, skb->sk, 1, 1): value == 1", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_4, 1), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_storage_get), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_sk_storage_map = { 11 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "R3 type=scalar expected=fp", -}, -{ - "sk_storage_get(map, skb->sk, &stack_value, 1): stack_value", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_4, 1), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_storage_get), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_sk_storage_map = { 14 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "bpf_map_lookup_elem(smap, &key)", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_sk_storage_map = { 3 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "cannot pass map_type 24 into func bpf_map_lookup_elem", -}, -{ - "bpf_map_lookup_elem(xskmap, &key); xs->queue_id", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_xdp_sock, queue_id)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_xskmap = { 3 }, - .prog_type = BPF_PROG_TYPE_XDP, - .result = ACCEPT, -}, -{ - "bpf_map_lookup_elem(sockmap, &key)", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_sockmap = { 3 }, - .prog_type = BPF_PROG_TYPE_SK_SKB, - .result = REJECT, - .errstr = "Unreleased reference id=2 alloc_insn=5", -}, -{ - "bpf_map_lookup_elem(sockhash, &key)", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - 
BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_sockhash = { 3 }, - .prog_type = BPF_PROG_TYPE_SK_SKB, - .result = REJECT, - .errstr = "Unreleased reference id=2 alloc_insn=5", -}, -{ - "bpf_map_lookup_elem(sockmap, &key); sk->type [fullsock field]; bpf_sk_release(sk)", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .fixup_map_sockmap = { 3 }, - .prog_type = BPF_PROG_TYPE_SK_SKB, - .result = ACCEPT, -}, -{ - "bpf_map_lookup_elem(sockhash, &key); sk->type [fullsock field]; bpf_sk_release(sk)", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_EXIT_INSN(), - }, - .fixup_map_sockhash = { 3 }, - .prog_type = BPF_PROG_TYPE_SK_SKB, - .result = ACCEPT, -}, -{ - "bpf_sk_select_reuseport(ctx, reuseport_array, &key, flags)", - .insns = { - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -4), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_select_reuseport), - BPF_EXIT_INSN(), - }, - .fixup_map_reuseport_array = { 4 }, - .prog_type = BPF_PROG_TYPE_SK_REUSEPORT, - .result = ACCEPT, -}, -{ - "bpf_sk_select_reuseport(ctx, sockmap, &key, flags)", - .insns = { - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -4), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_select_reuseport), - BPF_EXIT_INSN(), - }, - .fixup_map_sockmap = { 4 }, - .prog_type = BPF_PROG_TYPE_SK_REUSEPORT, - .result = ACCEPT, -}, -{ - "bpf_sk_select_reuseport(ctx, sockhash, &key, flags)", - .insns = { - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -4), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_sk_select_reuseport), - BPF_EXIT_INSN(), - }, - .fixup_map_sockmap = { 4 }, - .prog_type = BPF_PROG_TYPE_SK_REUSEPORT, - .result = ACCEPT, -}, -{ - "mark null check on return value of bpf_skc_to helpers", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_request_sock), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = REJECT, - .errstr = "invalid mem access", - .result_unpriv = 
REJECT, - .errstr_unpriv = "unknown func", -}, From patchwork Fri Apr 21 17:42:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220527 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B33D4C7618E for ; Fri, 21 Apr 2023 17:43:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233420AbjDURnt (ORCPT ); Fri, 21 Apr 2023 13:43:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232197AbjDURn3 (ORCPT ); Fri, 21 Apr 2023 13:43:29 -0400 Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com [IPv6:2a00:1450:4864:20::32b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0CA72C154 for ; Fri, 21 Apr 2023 10:43:11 -0700 (PDT) Received: by mail-wm1-x32b.google.com with SMTP id 5b1f17b1804b1-3f1957e80a2so18122165e9.1 for ; Fri, 21 Apr 2023 10:43:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098989; x=1684690989; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=w64twY6JmweTxsVIF9KCXcjlFOp00D9p7KpnzHlj5Jk=; b=CBRH/F7FlDbjCkwIh4R09CaROBk41eTGhQjumR/Qj/tRCebORz9fZl4KZ3iZKgKPR7 G4YjXLcGxuWcjbjebNaIvKFT93q8lJaPqN3HwIs6A/zJWEo40rtkn31Jn4IYEIvLw3iR QoPlngEe5JMDfsX+4cSysaeo494BwTFL5ISVNg+aEmnYwuHm+akJCnqjKqDvKxSm+lio uupsCcZcQAE+xrF76MOt9tLdj8mYM9jvdxkGGlO8FGlZFwuzYtzB/t0t34LoRjYZYi0M jYsJtJirsi5F7SnxJmgI/yvKqsHBLUt7RLK5J4cplsOPHrHDTynKxZMbowUHq4O0UrsR Rzxw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098989; x=1684690989; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=w64twY6JmweTxsVIF9KCXcjlFOp00D9p7KpnzHlj5Jk=; b=D2SfgOmJoKT661UoNcP1uAm+UijkapDRwt2vwEyS+7qsXKA5+hswg/WlgwBUiBTWOX mclhhKzxuJyYuuVvsiZxfj6+TxmnU4+ryHEFwc+ydksZBJn2Md3fsGGQD+wuJoQyECoN 8RUQQ86P88xUNlrABKWDLGZ8Aa9fp+sWa3zj67koLFx6BWzsxglOF4Koo6nTbke4IEGW 2sVjV0uX/PtddzXPL2G+UBvpT1URBSFll+IpKCmvvwM9poez+r+D2HSj0ay2iv8WRG3O CBbhb+WH+HW64nrGjNyDq931DyB7KG5aL0TzIFeNUE5mOCb3mcd1j7iVNrrWeestUqnc i5lA== X-Gm-Message-State: AAQBX9esXaeXllO4riE8T8V23sGortC7UrNApSnWkOKC9j6nDJCTC/0I JGSBMiySApNkANoHETYLzPc164bJ+Rom6Q== X-Google-Smtp-Source: AKy350ayp/QJc3h1+Kl0OJ4NM0fLCgXilZHbp/wK0su5xRpjBWtgN1hJEXIc/VAnsN66GyUXbM7ZFQ== X-Received: by 2002:adf:eb11:0:b0:2ff:70c0:9a05 with SMTP id s17-20020adfeb11000000b002ff70c09a05mr6319174wrn.4.1682098988913; Fri, 21 Apr 2023 10:43:08 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. 
[93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:08 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 20/24] selftests/bpf: verifier/spin_lock converted to inline assembly Date: Fri, 21 Apr 2023 20:42:30 +0300 Message-Id: <20230421174234.2391278-21-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/spin_lock automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_spin_lock.c | 533 ++++++++++++++++++ .../selftests/bpf/verifier/spin_lock.c | 447 --------------- 3 files changed, 535 insertions(+), 447 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_spin_lock.c delete mode 100644 tools/testing/selftests/bpf/verifier/spin_lock.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 60bcff62d968..0ea88282859d 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -52,6 +52,7 @@ #include "verifier_search_pruning.skel.h" #include "verifier_sock.skel.h" #include "verifier_spill_fill.skel.h" +#include "verifier_spin_lock.skel.h" #include "verifier_stack_ptr.skel.h" #include "verifier_uninit.skel.h" #include "verifier_value_adj_spill.skel.h" @@ -144,6 +145,7 @@ void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); } void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); } void test_verifier_sock(void) { RUN(verifier_sock); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } +void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } void test_verifier_uninit(void) { RUN(verifier_uninit); } void test_verifier_value_adj_spill(void) { RUN(verifier_value_adj_spill); } diff --git a/tools/testing/selftests/bpf/progs/verifier_spin_lock.c b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c new file mode 100644 index 000000000000..9c1aa69650f8 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c @@ -0,0 +1,533 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/spin_lock.c */ + +#include +#include +#include "bpf_misc.h" + +struct val { + int cnt; + struct bpf_spin_lock l; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct val); +} map_spin_lock SEC(".maps"); + +SEC("cgroup/skb") +__description("spin_lock: test1 success") +__success __failure_unpriv __msg_unpriv("") +__retval(0) +__naked void spin_lock_test1_success(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; \ + r1 += 4; \ + r0 = *(u32*)(r6 + 0); \ + call 
%[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test2 direct ld/st") +__failure __msg("cannot be accessed directly") +__failure_unpriv __msg_unpriv("") +__naked void lock_test2_direct_ld_st(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; \ + r1 += 4; \ + r0 = *(u32*)(r1 + 0); \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test3 direct ld/st") +__failure __msg("cannot be accessed directly") +__failure_unpriv __msg_unpriv("") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void lock_test3_direct_ld_st(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; \ + r1 += 4; \ + r0 = *(u32*)(r6 + 1); \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test4 direct ld/st") +__failure __msg("cannot be accessed directly") +__failure_unpriv __msg_unpriv("") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void lock_test4_direct_ld_st(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; \ + r1 += 4; \ + r0 = *(u16*)(r6 + 3); \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test5 call within a locked region") +__failure __msg("calls are not allowed") +__failure_unpriv __msg_unpriv("") +__naked void call_within_a_locked_region(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + call %[bpf_get_prandom_u32]; \ + r1 = r6; \ + r1 += 4; \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test6 missing unlock") +__failure __msg("unlock is missing") +__failure_unpriv __msg_unpriv("") +__naked void spin_lock_test6_missing_unlock(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; 
\ + r1 += 4; \ + r0 = *(u32*)(r6 + 0); \ + if r0 != 0 goto l1_%=; \ + call %[bpf_spin_unlock]; \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test7 unlock without lock") +__failure __msg("without taking a lock") +__failure_unpriv __msg_unpriv("") +__naked void lock_test7_unlock_without_lock(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + if r1 != 0 goto l1_%=; \ + call %[bpf_spin_lock]; \ +l1_%=: r1 = r6; \ + r1 += 4; \ + r0 = *(u32*)(r6 + 0); \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test8 double lock") +__failure __msg("calls are not allowed") +__failure_unpriv __msg_unpriv("") +__naked void spin_lock_test8_double_lock(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r6; \ + r1 += 4; \ + r0 = *(u32*)(r6 + 0); \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test9 different lock") +__failure __msg("unlock of different lock") +__failure_unpriv __msg_unpriv("") +__naked void spin_lock_test9_different_lock(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r7 = r0; \ + r1 = r6; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r1 = r7; \ + r1 += 4; \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("spin_lock: test10 lock in subprog without unlock") +__failure __msg("unlock is missing") +__failure_unpriv __msg_unpriv("") +__naked void lock_in_subprog_without_unlock(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r6 = r0; \ + r1 = r0; \ + r1 += 4; \ + call lock_in_subprog_without_unlock__1; \ + r1 = r6; \ + r1 += 4; \ + call %[bpf_spin_unlock]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +static __naked __noinline __attribute__((used)) +void lock_in_subprog_without_unlock__1(void) +{ + asm volatile (" \ + call %[bpf_spin_lock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_spin_lock) + : __clobber_all); +} + +SEC("tc") +__description("spin_lock: test11 ld_abs under lock") 
+__failure __msg("inside bpf_spin_lock") +__naked void test11_ld_abs_under_lock(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r7 = r0; \ + r1 = r0; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r0 = *(u8*)skb[0]; \ + r1 = r7; \ + r1 += 4; \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("tc") +__description("spin_lock: regsafe compare reg->id for map value") +__failure __msg("bpf_spin_unlock of different lock") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void reg_id_for_map_value(void) +{ + asm volatile (" \ + r6 = r1; \ + r6 = *(u32*)(r6 + %[__sk_buff_mark]); \ + r1 = %[map_spin_lock] ll; \ + r9 = r1; \ + r2 = 0; \ + *(u32*)(r10 - 4) = r2; \ + r2 = r10; \ + r2 += -4; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r7 = r0; \ + r1 = r9; \ + r2 = r10; \ + r2 += -4; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l1_%=; \ + exit; \ +l1_%=: r8 = r0; \ + r1 = r7; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + if r6 == 0 goto l2_%=; \ + goto l3_%=; \ +l2_%=: r7 = r8; \ +l3_%=: r1 = r7; \ + r1 += 4; \ + call %[bpf_spin_unlock]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +/* Make sure that regsafe() compares ids for spin lock records using + * check_ids(): + * 1: r9 = map_lookup_elem(...) ; r9.id == 1 + * 2: r8 = map_lookup_elem(...) ; r8.id == 2 + * 3: r7 = ktime_get_ns() + * 4: r6 = ktime_get_ns() + * 5: if r6 > r7 goto <9> + * 6: spin_lock(r8) + * 7: r9 = r8 + * 8: goto <10> + * 9: spin_lock(r9) + * 10: spin_unlock(r9) ; r9.id == 1 || r9.id == 2 and lock is active, + * ; second visit to (10) should be considered safe + * ; if check_ids() is used. + * 11: exit(0) + */ + +SEC("cgroup/skb") +__description("spin_lock: regsafe() check_ids() similar id mappings") +__success __msg("29: safe") +__failure_unpriv __msg_unpriv("") +__log_level(2) __retval(0) __flag(BPF_F_TEST_STATE_FREQ) +__naked void check_ids_similar_id_mappings(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u32*)(r10 - 4) = r1; \ + /* r9 = map_lookup_elem(...) */ \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r9 = r0; \ + /* r8 = map_lookup_elem(...) 
*/ \ + r2 = r10; \ + r2 += -4; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l1_%=; \ + r8 = r0; \ + /* r7 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r7 = r0; \ + /* r6 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r6 = r0; \ + /* if r6 > r7 goto +5 ; no new information about the state is derived from\ + * ; this check, thus produced verifier states differ\ + * ; only in 'insn_idx' \ + * spin_lock(r8) \ + * r9 = r8 \ + * goto unlock \ + */ \ + if r6 > r7 goto l2_%=; \ + r1 = r8; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ + r9 = r8; \ + goto l3_%=; \ +l2_%=: /* spin_lock(r9) */ \ + r1 = r9; \ + r1 += 4; \ + call %[bpf_spin_lock]; \ +l3_%=: /* spin_unlock(r9) */ \ + r1 = r9; \ + r1 += 4; \ + call %[bpf_spin_unlock]; \ +l0_%=: /* exit(0) */ \ + r0 = 0; \ +l1_%=: exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm(bpf_spin_unlock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/spin_lock.c b/tools/testing/selftests/bpf/verifier/spin_lock.c deleted file mode 100644 index eaf114f07e2e..000000000000 --- a/tools/testing/selftests/bpf/verifier/spin_lock.c +++ /dev/null @@ -1,447 +0,0 @@ -{ - "spin_lock: test1 success", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test2 direct ld/st", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "cannot be accessed directly", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test3 direct ld/st", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - 
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "cannot be accessed directly", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "spin_lock: test4 direct ld/st", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_6, 3), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "cannot be accessed directly", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "spin_lock: test5 call within a locked region", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "calls are not allowed", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test6 missing unlock", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "unlock is missing", - .result_unpriv = REJECT, - 
.errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test7 unlock without lock", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "without taking a lock", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test8 double lock", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "calls are not allowed", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test9 different lock", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3, 11 }, - .result = REJECT, - .errstr = "unlock of different lock", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test10 lock in subprog without unlock", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - 
BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .result = REJECT, - .errstr = "unlock is missing", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "spin_lock: test11 ld_abs under lock", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_LD_ABS(BPF_B, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 4 }, - .result = REJECT, - .errstr = "inside bpf_spin_lock", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "spin_lock: regsafe compare reg->id for map value", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_6, offsetof(struct __sk_buff, mark)), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_9), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 2 }, - .result = REJECT, - .errstr = "bpf_spin_unlock of different lock", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .flags = BPF_F_TEST_STATE_FREQ, -}, -/* Make sure that regsafe() compares ids for spin lock records using - * check_ids(): - * 1: r9 = map_lookup_elem(...) ; r9.id == 1 - * 2: r8 = map_lookup_elem(...) 
; r8.id == 2 - * 3: r7 = ktime_get_ns() - * 4: r6 = ktime_get_ns() - * 5: if r6 > r7 goto <9> - * 6: spin_lock(r8) - * 7: r9 = r8 - * 8: goto <10> - * 9: spin_lock(r9) - * 10: spin_unlock(r9) ; r9.id == 1 || r9.id == 2 and lock is active, - * ; second visit to (10) should be considered safe - * ; if check_ids() is used. - * 11: exit(0) - */ -{ - "spin_lock: regsafe() check_ids() similar id mappings", - .insns = { - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0), - /* r9 = map_lookup_elem(...) */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 24), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), - /* r8 = map_lookup_elem(...) */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 18), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - /* r7 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* r6 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* if r6 > r7 goto +5 ; no new information about the state is derived from - * ; this check, thus produced verifier states differ - * ; only in 'insn_idx' - * spin_lock(r8) - * r9 = r8 - * goto unlock - */ - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_EMIT_CALL(BPF_FUNC_spin_lock), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_8), - BPF_JMP_A(3), - /* spin_lock(r9) */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_9), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_EMIT_CALL(BPF_FUNC_spin_lock), - /* spin_unlock(r9) */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_9), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4), - BPF_EMIT_CALL(BPF_FUNC_spin_unlock), - /* exit(0) */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3, 10 }, - .result = VERBOSE_ACCEPT, - .errstr = "28: safe", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .flags = BPF_F_TEST_STATE_FREQ, -}, From patchwork Fri Apr 21 17:42:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220529 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5C6BC7EE22 for ; Fri, 21 Apr 2023 17:43:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233444AbjDURnv (ORCPT ); Fri, 21 Apr 2023 13:43:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43092 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232921AbjDURnc (ORCPT ); Fri, 21 Apr 2023 13:43:32 -0400 Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com [IPv6:2a00:1450:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0BB4F270E for ; Fri, 21 Apr 2023 10:43:12 -0700 (PDT) Received: by mail-wr1-x436.google.com with SMTP id ffacd0b85a97d-2f7a7f9667bso1276923f8f.1 for ; Fri, 21 Apr 2023 10:43:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098990; x=1684690990; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MyXRXmKqnvh14RGqw8EvB0M/Pe2v+ptpbOPsZY7xPVM=; b=aAS33BDT3ZtvarFhX4BFQBWRlGIAfJ7cRfe00jfniiDLafzCBSxt6RhZQdvTRjcxDj 9hqE2v+dr7C/5c29YEERKWKfUoFm+uKu4YupBgkpCc/UniwgL2FihGEKLxCFazpfB53u mnoxOcpxJZzG+vhUJm0o0z5dtor5uF1OboYz/Q7SdwfAAm8wQJiMW9qUnJO09XXpEClT chM9Hqm6WjALiik7eKuvdjBybggCgJhr8uE4PMPuvGC29p05FrJvPgkNq4Uw5vsSy/+r jcWoU5DvBzSzOKRMErHIXavTxEKSaZqqoxqY1zqgpQl3IMYFqzIpmb4UAiB8D6n/kYWr 4j7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098990; x=1684690990; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MyXRXmKqnvh14RGqw8EvB0M/Pe2v+ptpbOPsZY7xPVM=; b=EU3qL3ZNUA3l0Pz1goxKBNq9d2e90EcJoYeVbzpYhphojJyyDlD0kO2P+cwQzeZKdc LxVJAPpsCnpZRxS6s70jvhj6WiKbkSXUAiSSUU4AOJYE15AAterY9uMMFJ0ADP7+DjRv hUftSUT41RVwaALXmKL+bi53Ykc7hAGY+AC5cOibW2O/SxcLPiFp1hFD2AQdLW0dMvlg pqqWP0JF9HHTPapFTcuk5jGegxM78PNDgO3Q4a4neErtwFozWZ+N/6EG1UjmfXRoHzrL omsUsCqAZDMQlZPdwX+UfQEgJjm3sWkboOgPlVrzA/hSRCgPXCucEOEtURITD2UFim7N Tnzw== X-Gm-Message-State: AAQBX9d5DBK3LYncV3W9VY6GKyCD4HgS+dU08Ch3B8cRBQG2AX5rJ3at jUheJS+yxZqYte8T++pmW1BmYf0PvKJY9w== X-Google-Smtp-Source: AKy350bjKHvqyYrKAzeTzZgmaSOCGGdMiU+nvhuIx8vsBZ1/L4KF6OKk3ecEm0v2FiICzR0t+H01Ug== X-Received: by 2002:a5d:4c4d:0:b0:2cf:e747:b0d4 with SMTP id n13-20020a5d4c4d000000b002cfe747b0d4mr4802589wrt.40.1682098990038; Fri, 21 Apr 2023 10:43:10 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:09 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 21/24] selftests/bpf: verifier/subreg converted to inline assembly Date: Fri, 21 Apr 2023 20:42:31 +0300 Message-Id: <20230421174234.2391278-22-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/subreg automatically converted to use inline assembly. 
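To illustrate the conversion (the example below is lifted from the add32 imm check in this patch, not a new test): the macro-form sequence

	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0),
	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),

becomes, in the inline-assembly form,

	call %[bpf_get_prandom_u32];
	r1 = 0x1000000000 ll;
	r0 |= r1;
	w0 += 0;
	r0 >>= 32;

with __imm(bpf_get_prandom_u32) supplied as an assembly operand. The .result, .retval and .errstr fields of the old struct-based tests are expressed through the __success/__failure, __retval() and __msg() annotations from bpf_misc.h, and the program type is selected with SEC() (here SEC("socket"), matching the default program type of the removed verifier/subreg.c tests).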
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_subreg.c | 673 ++++++++++++++++++ tools/testing/selftests/bpf/verifier/subreg.c | 533 -------------- 3 files changed, 675 insertions(+), 533 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_subreg.c delete mode 100644 tools/testing/selftests/bpf/verifier/subreg.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 0ea88282859d..999b694850d3 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -54,6 +54,7 @@ #include "verifier_spill_fill.skel.h" #include "verifier_spin_lock.skel.h" #include "verifier_stack_ptr.skel.h" +#include "verifier_subreg.skel.h" #include "verifier_uninit.skel.h" #include "verifier_value_adj_spill.skel.h" #include "verifier_value.skel.h" @@ -147,6 +148,7 @@ void test_verifier_sock(void) { RUN(verifier_sock); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } +void test_verifier_subreg(void) { RUN(verifier_subreg); } void test_verifier_uninit(void) { RUN(verifier_uninit); } void test_verifier_value_adj_spill(void) { RUN(verifier_value_adj_spill); } void test_verifier_value(void) { RUN(verifier_value); } diff --git a/tools/testing/selftests/bpf/progs/verifier_subreg.c b/tools/testing/selftests/bpf/progs/verifier_subreg.c new file mode 100644 index 000000000000..8613ea160dcd --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_subreg.c @@ -0,0 +1,673 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/subreg.c */ + +#include +#include +#include "bpf_misc.h" + +/* This file contains sub-register zero extension checks for insns defining + * sub-registers, meaning: + * - All insns under BPF_ALU class. Their BPF_ALU32 variants or narrow width + * forms (BPF_END) could define sub-registers. + * - Narrow direct loads, BPF_B/H/W | BPF_LDX. + * - BPF_LD is not exposed to JIT back-ends, so no need for testing. + * + * "get_prandom_u32" is used to initialize low 32-bit of some registers to + * prevent potential optimizations done by verifier or JIT back-ends which could + * optimize register back into constant when range info shows one register is a + * constant. + */ + +SEC("socket") +__description("add32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void add32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = 0x100000000 ll; \ + w0 += w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("add32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void add32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + /* An insn could have no effect on the low 32-bit, for example:\ + * a = a + 0 \ + * a = a | 0 \ + * a = a & -1 \ + * But, they should still zero high 32-bit. 
\ + */ \ + w0 += 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 += -2; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("sub32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void sub32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = 0x1ffffffff ll; \ + w0 -= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("sub32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void sub32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 -= 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 -= 1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("mul32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void mul32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = 0x100000001 ll; \ + w0 *= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("mul32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void mul32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 *= 1; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 *= -1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("div32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void div32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = -1; \ + w0 /= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("div32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void div32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 /= 1; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 /= 2; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("or32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void or32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = 0x100000001 ll; \ + w0 |= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("or32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void or32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 |= 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 |= 1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("and32 reg zero extend check") +__success 
__success_unpriv __retval(0) +__naked void and32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x100000000 ll; \ + r1 |= r0; \ + r0 = 0x1ffffffff ll; \ + w0 &= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("and32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void and32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 &= -1; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 &= -2; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("lsh32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void lsh32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x100000000 ll; \ + r0 |= r1; \ + r1 = 1; \ + w0 <<= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("lsh32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void lsh32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 <<= 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 <<= 1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("rsh32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void rsh32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + r1 = 1; \ + w0 >>= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("rsh32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void rsh32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 >>= 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 >>= 1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("neg32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void neg32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 = -w0; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("mod32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void mod32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = -1; \ + w0 %%= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("mod32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void mod32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 %%= 1; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 %%= 2; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : 
__imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("xor32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void xor32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = r0; \ + r0 = 0x100000000 ll; \ + w0 ^= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("xor32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void xor32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 ^= 1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("mov32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void mov32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x100000000 ll; \ + r1 |= r0; \ + r0 = 0x100000000 ll; \ + w0 = w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("mov32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void mov32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 = 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 = 1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("arsh32 reg zero extend check") +__success __success_unpriv __retval(0) +__naked void arsh32_reg_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + r1 = 1; \ + w0 s>>= w1; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("arsh32 imm zero extend check") +__success __success_unpriv __retval(0) +__naked void arsh32_imm_zero_extend_check(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 s>>= 0; \ + r0 >>= 32; \ + r6 = r0; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + w0 s>>= 1; \ + r0 >>= 32; \ + r0 |= r6; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("end16 (to_le) reg zero extend check") +__success __success_unpriv __retval(0) +__naked void le_reg_zero_extend_check_1(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r6 = r0; \ + r6 <<= 32; \ + call %[bpf_get_prandom_u32]; \ + r0 |= r6; \ + r0 = le16 r0; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("end32 (to_le) reg zero extend check") +__success __success_unpriv __retval(0) +__naked void le_reg_zero_extend_check_2(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r6 = r0; \ + r6 <<= 32; \ + call %[bpf_get_prandom_u32]; \ + r0 |= r6; \ + r0 = le32 r0; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("end16 (to_be) reg zero extend check") +__success __success_unpriv __retval(0) +__naked void be_reg_zero_extend_check_1(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r6 = r0; \ + r6 <<= 32; \ + call %[bpf_get_prandom_u32]; \ + r0 |= r6; \ + r0 = be16 r0; \ + r0 >>= 32; \ + exit; \ +" : + : 
__imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("end32 (to_be) reg zero extend check") +__success __success_unpriv __retval(0) +__naked void be_reg_zero_extend_check_2(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r6 = r0; \ + r6 <<= 32; \ + call %[bpf_get_prandom_u32]; \ + r0 |= r6; \ + r0 = be32 r0; \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("ldx_b zero extend check") +__success __success_unpriv __retval(0) +__naked void ldx_b_zero_extend_check(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -4; \ + r7 = 0xfaceb00c; \ + *(u32*)(r6 + 0) = r7; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + r0 = *(u8*)(r6 + 0); \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("ldx_h zero extend check") +__success __success_unpriv __retval(0) +__naked void ldx_h_zero_extend_check(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -4; \ + r7 = 0xfaceb00c; \ + *(u32*)(r6 + 0) = r7; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + r0 = *(u16*)(r6 + 0); \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +SEC("socket") +__description("ldx_w zero extend check") +__success __success_unpriv __retval(0) +__naked void ldx_w_zero_extend_check(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -4; \ + r7 = 0xfaceb00c; \ + *(u32*)(r6 + 0) = r7; \ + call %[bpf_get_prandom_u32]; \ + r1 = 0x1000000000 ll; \ + r0 |= r1; \ + r0 = *(u32*)(r6 + 0); \ + r0 >>= 32; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/subreg.c b/tools/testing/selftests/bpf/verifier/subreg.c deleted file mode 100644 index 4c4133c80440..000000000000 --- a/tools/testing/selftests/bpf/verifier/subreg.c +++ /dev/null @@ -1,533 +0,0 @@ -/* This file contains sub-register zero extension checks for insns defining - * sub-registers, meaning: - * - All insns under BPF_ALU class. Their BPF_ALU32 variants or narrow width - * forms (BPF_END) could define sub-registers. - * - Narrow direct loads, BPF_B/H/W | BPF_LDX. - * - BPF_LD is not exposed to JIT back-ends, so no need for testing. - * - * "get_prandom_u32" is used to initialize low 32-bit of some registers to - * prevent potential optimizations done by verifier or JIT back-ends which could - * optimize register back into constant when range info shows one register is a - * constant. - */ -{ - "add32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), - BPF_ALU32_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "add32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - /* An insn could have no effect on the low 32-bit, for example: - * a = a + 0 - * a = a | 0 - * a = a & -1 - * But, they should still zero high 32-bit. 
- */ - BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, -2), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "sub32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x1ffffffffULL), - BPF_ALU32_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "sub32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_SUB, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_SUB, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "mul32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL), - BPF_ALU32_REG(BPF_MUL, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "mul32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, -1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "div32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, -1), - BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "div32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, 2), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result 
= ACCEPT, - .retval = 0, -}, -{ - "or32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL), - BPF_ALU32_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "or32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_OR, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_OR, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "and32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x1ffffffffULL), - BPF_ALU32_REG(BPF_AND, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "and32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_AND, BPF_REG_0, -1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_AND, BPF_REG_0, -2), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "lsh32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_ALU32_REG(BPF_LSH, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "lsh32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_LSH, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_LSH, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "rsh32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_ALU32_REG(BPF_RSH, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, 
BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "rsh32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_RSH, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_RSH, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "neg32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_NEG, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "mod32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, -1), - BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "mod32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 2), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "xor32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), - BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "xor32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_XOR, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "mov32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_1, BPF_REG_0), - BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), - BPF_MOV32_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "mov32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - 
BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "arsh32 reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_ALU32_REG(BPF_ARSH, BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "arsh32 imm zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "end16 (to_le) reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_ENDIAN(BPF_TO_LE, BPF_REG_0, 16), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "end32 (to_le) reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_ENDIAN(BPF_TO_LE, BPF_REG_0, 32), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "end16 (to_be) reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_ENDIAN(BPF_TO_BE, BPF_REG_0, 16), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "end32 (to_be) reg zero extend check", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6), - BPF_ENDIAN(BPF_TO_BE, BPF_REG_0, 32), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "ldx_b zero extend check", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4), - BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 
0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "ldx_h zero extend check", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4), - BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_6, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "ldx_w zero extend check", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4), - BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL), - BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0), - BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, From patchwork Fri Apr 21 17:42:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220531 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7FC7C77B76 for ; Fri, 21 Apr 2023 17:43:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232921AbjDURnv (ORCPT ); Fri, 21 Apr 2023 13:43:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43126 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232999AbjDURnd (ORCPT ); Fri, 21 Apr 2023 13:43:33 -0400 Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com [IPv6:2a00:1450:4864:20::435]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F5724234 for ; Fri, 21 Apr 2023 10:43:13 -0700 (PDT) Received: by mail-wr1-x435.google.com with SMTP id ffacd0b85a97d-2f3fe12de15so1274442f8f.3 for ; Fri, 21 Apr 2023 10:43:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098991; x=1684690991; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1iQdnDakyrB2TT8Z/oJI675XR5NS4HGYYftltcWFsX8=; b=PNNVSXxaB8eWZot0nRwSdGTtNsG/kC+G8pnlw+2fvZxbxDZhPTkkfXix6dqwn+JYqk LwEEjEEelk4DEVfykWoN80+99w6TykOGtzx4H8uqRXfaqIG4NWDKoLGYgKT9KJcXcVCc V3iGbp2bvkq4YFUxNng+9xnV940vSI+X8ixCXrK29ieFugZ52VsV21YFYOud7yV0cTiq 7L4CEKJaTyn1Xn+4Xj92YWuPrs3Ef4SNY/5PoXpMef9SV159djZof6GOnLBLsTO+B2C5 Kvo6MBflVteywEzdi/6wZeAu7VADzbYIoLn/T3j8meIZQfqp/dzd4LCT5CczUMqY96XK p7ww== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098991; x=1684690991; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1iQdnDakyrB2TT8Z/oJI675XR5NS4HGYYftltcWFsX8=; b=eQYtjBPduEXfl3IXwk8rDkbu3ddTvegKR4v5n5FMXeMs9QE4AsS/B6MUPwiEmWQt1H 
5o7GhKEbtM6sv+Tu97jKMhyTv5/OhmzhplN8nyTd/Vfbl8xG/Pmv5XmUXMBScZShz3FS xolssaQY7zSU6kI7K05CSgy7m4AGlB/3neFGPkVln76KJKu+O7HosQfU59VtrB7GfuxN xKSv4Upg7La+tozqrbOuoKj5xajd97alkCUWXlnYXSQEt4miSXrCcGkSZfKeLUWs9n9k rZ91NmFPQ21a7zs74qSNtAB/82RXFPy/+Zj+QNc8qMSHKFNT1s0DEtQ+OX/1iQz/orKU PBzA== X-Gm-Message-State: AAQBX9fWpyiZcmOfnqyIlT0Oor6KF9ogYPloqfK2qIHeOfGgOP+gtJ+M 1BugNCySIPe+cxSdpZoRgDH7JUJxo7XucQ== X-Google-Smtp-Source: AKy350aQWailbjrOMQmbfycF/K5icEJg8Z30s13mCQp3QGf5mVjcf9E0NTfWvdmmud79JlNKw/+0fQ== X-Received: by 2002:a5d:46c9:0:b0:2f0:2d92:9c81 with SMTP id g9-20020a5d46c9000000b002f02d929c81mr4772521wrs.19.1682098991190; Fri, 21 Apr 2023 10:43:11 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:10 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 22/24] selftests/bpf: verifier/unpriv converted to inline assembly Date: Fri, 21 Apr 2023 20:42:32 +0300 Message-Id: <20230421174234.2391278-23-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/unpriv semi-automatically converted to use inline assembly. The verifier/unpriv.c had to be split into two parts: - the bulk of the tests is in progs/verifier_unpriv.c; - the single test that needs the `struct bpf_perf_event_data` definition is in progs/verifier_unpriv_perf.c. The tests above can't be in a single file because: - the first requires inclusion of the filter.h header (to get access to the BPF_ST_MEM macro, as the inline assembler does not support this instruction); - the second requires vmlinux.h, which contains definitions conflicting with filter.h.
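For example, in the converted test "unpriv: spill/fill of different pointers st" the store is embedded as raw instruction bytes via __imm_insn() (excerpted from the new progs/verifier_unpriv.c below):

	asm volatile ("... .8byte %[st_mem]; ..."
		      :
		      : __imm_insn(st_mem,
				   BPF_ST_MEM(BPF_W, BPF_REG_1,
					      offsetof(struct __sk_buff, mark), 42))
		      : __clobber_all);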
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 4 + .../selftests/bpf/progs/verifier_unpriv.c | 726 ++++++++++++++++++ .../bpf/progs/verifier_unpriv_perf.c | 34 + tools/testing/selftests/bpf/verifier/unpriv.c | 562 -------------- 4 files changed, 764 insertions(+), 562 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_unpriv.c create mode 100644 tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c delete mode 100644 tools/testing/selftests/bpf/verifier/unpriv.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 999b694850d3..94405bf00b47 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -56,6 +56,8 @@ #include "verifier_stack_ptr.skel.h" #include "verifier_subreg.skel.h" #include "verifier_uninit.skel.h" +#include "verifier_unpriv.skel.h" +#include "verifier_unpriv_perf.skel.h" #include "verifier_value_adj_spill.skel.h" #include "verifier_value.skel.h" #include "verifier_value_or_null.skel.h" @@ -150,6 +152,8 @@ void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } void test_verifier_subreg(void) { RUN(verifier_subreg); } void test_verifier_uninit(void) { RUN(verifier_uninit); } +void test_verifier_unpriv(void) { RUN(verifier_unpriv); } +void test_verifier_unpriv_perf(void) { RUN(verifier_unpriv_perf); } void test_verifier_value_adj_spill(void) { RUN(verifier_value_adj_spill); } void test_verifier_value(void) { RUN(verifier_value); } void test_verifier_value_or_null(void) { RUN(verifier_value_or_null); } diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv.c b/tools/testing/selftests/bpf/progs/verifier_unpriv.c new file mode 100644 index 000000000000..7ea535bfbacd --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_unpriv.c @@ -0,0 +1,726 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/unpriv.c */ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#define BPF_SK_LOOKUP(func) \ + /* struct bpf_sock_tuple tuple = {} */ \ + "r2 = 0;" \ + "*(u32*)(r10 - 8) = r2;" \ + "*(u64*)(r10 - 16) = r2;" \ + "*(u64*)(r10 - 24) = r2;" \ + "*(u64*)(r10 - 32) = r2;" \ + "*(u64*)(r10 - 40) = r2;" \ + "*(u64*)(r10 - 48) = r2;" \ + /* sk = func(ctx, &tuple, sizeof tuple, 0, 0) */ \ + "r2 = r10;" \ + "r2 += -48;" \ + "r3 = %[sizeof_bpf_sock_tuple];"\ + "r4 = 0;" \ + "r5 = 0;" \ + "call %[" #func "];" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +void dummy_prog_42_socket(void); +void dummy_prog_24_socket(void); +void dummy_prog_loop1_socket(void); + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, 4); + __uint(key_size, sizeof(int)); + __array(values, void (void)); +} map_prog1_socket SEC(".maps") = { + .values = { + [0] = (void *)&dummy_prog_42_socket, + [1] = (void *)&dummy_prog_loop1_socket, + [2] = (void *)&dummy_prog_24_socket, + }, +}; + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_42_socket(void) +{ + asm volatile ("r0 = 42; exit;"); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_24_socket(void) +{ + asm volatile ("r0 = 24; exit;"); +} + +SEC("socket") +__auxiliary __auxiliary_unpriv +__naked void dummy_prog_loop1_socket(void) +{ + asm 
volatile (" \ + r3 = 1; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 41; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: return pointer") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(POINTER_VALUE) +__naked void unpriv_return_pointer(void) +{ + asm volatile (" \ + r0 = r10; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: add const to pointer") +__success __success_unpriv __retval(0) +__naked void unpriv_add_const_to_pointer(void) +{ + asm volatile (" \ + r1 += 8; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: add pointer to pointer") +__failure __msg("R1 pointer += pointer") +__failure_unpriv +__naked void unpriv_add_pointer_to_pointer(void) +{ + asm volatile (" \ + r1 += r10; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: neg pointer") +__success __failure_unpriv __msg_unpriv("R1 pointer arithmetic") +__retval(0) +__naked void unpriv_neg_pointer(void) +{ + asm volatile (" \ + r1 = -r1; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: cmp pointer with const") +__success __failure_unpriv __msg_unpriv("R1 pointer comparison") +__retval(0) +__naked void unpriv_cmp_pointer_with_const(void) +{ + asm volatile (" \ + if r1 == 0 goto l0_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: cmp pointer with pointer") +__success __failure_unpriv __msg_unpriv("R10 pointer comparison") +__retval(0) +__naked void unpriv_cmp_pointer_with_pointer(void) +{ + asm volatile (" \ + if r1 == r10 goto l0_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tracepoint") +__description("unpriv: check that printk is disallowed") +__success +__naked void check_that_printk_is_disallowed(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r1 = r10; \ + r1 += -8; \ + r2 = 8; \ + r3 = r1; \ + call %[bpf_trace_printk]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_trace_printk) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: pass pointer to helper function") +__success __failure_unpriv __msg_unpriv("R4 leaks addr") +__retval(0) +__naked void pass_pointer_to_helper_function(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + r3 = r2; \ + r4 = r2; \ + call %[bpf_map_update_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_update_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: indirectly pass pointer on stack to helper function") +__success __failure_unpriv +__msg_unpriv("invalid indirect read from stack R2 off -8+0 size 8") +__retval(0) +__naked void on_stack_to_helper_function(void) +{ + asm volatile (" \ + *(u64*)(r10 - 8) = r10; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: mangle pointer on stack 1") +__success __failure_unpriv __msg_unpriv("attempt to corrupt spilled") +__retval(0) +__naked void mangle_pointer_on_stack_1(void) +{ + asm volatile (" \ + *(u64*)(r10 - 8) = r10; \ + r0 = 0; \ + *(u32*)(r10 - 8) = r0; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: mangle pointer 
on stack 2") +__success __failure_unpriv __msg_unpriv("attempt to corrupt spilled") +__retval(0) +__naked void mangle_pointer_on_stack_2(void) +{ + asm volatile (" \ + *(u64*)(r10 - 8) = r10; \ + r0 = 0; \ + *(u8*)(r10 - 1) = r0; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: read pointer from stack in small chunks") +__failure __msg("invalid size") +__failure_unpriv +__naked void from_stack_in_small_chunks(void) +{ + asm volatile (" \ + *(u64*)(r10 - 8) = r10; \ + r0 = *(u32*)(r10 - 8); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: write pointer into ctx") +__failure __msg("invalid bpf_context access") +__failure_unpriv __msg_unpriv("R1 leaks addr") +__naked void unpriv_write_pointer_into_ctx(void) +{ + asm volatile (" \ + *(u64*)(r1 + 0) = r1; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: spill/fill of ctx") +__success __success_unpriv __retval(0) +__naked void unpriv_spill_fill_of_ctx(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + *(u64*)(r6 + 0) = r1; \ + r1 = *(u64*)(r6 + 0); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of ctx 2") +__success __retval(0) +__naked void spill_fill_of_ctx_2(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + *(u64*)(r6 + 0) = r1; \ + r1 = *(u64*)(r6 + 0); \ + call %[bpf_get_hash_recalc]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_hash_recalc) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of ctx 3") +__failure __msg("R1 type=fp expected=ctx") +__naked void spill_fill_of_ctx_3(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + *(u64*)(r6 + 0) = r1; \ + *(u64*)(r6 + 0) = r10; \ + r1 = *(u64*)(r6 + 0); \ + call %[bpf_get_hash_recalc]; \ + exit; \ +" : + : __imm(bpf_get_hash_recalc) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of ctx 4") +__failure __msg("R1 type=scalar expected=ctx") +__naked void spill_fill_of_ctx_4(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + *(u64*)(r6 + 0) = r1; \ + r0 = 1; \ + lock *(u64 *)(r10 - 8) += r0; \ + r1 = *(u64*)(r6 + 0); \ + call %[bpf_get_hash_recalc]; \ + exit; \ +" : + : __imm(bpf_get_hash_recalc) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of different pointers stx") +__failure __msg("same insn cannot be used with different pointers") +__naked void fill_of_different_pointers_stx(void) +{ + asm volatile (" \ + r3 = 42; \ + r6 = r10; \ + r6 += -8; \ + if r1 == 0 goto l0_%=; \ + r2 = r10; \ + r2 += -16; \ + *(u64*)(r6 + 0) = r2; \ +l0_%=: if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r1; \ +l1_%=: r1 = *(u64*)(r6 + 0); \ + *(u32*)(r1 + %[__sk_buff_mark]) = r3; \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +/* Same as above, but use BPF_ST_MEM to save 42 + * instead of BPF_STX_MEM. 
+ */ +SEC("tc") +__description("unpriv: spill/fill of different pointers st") +__failure __msg("same insn cannot be used with different pointers") +__naked void fill_of_different_pointers_st(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + if r1 == 0 goto l0_%=; \ + r2 = r10; \ + r2 += -16; \ + *(u64*)(r6 + 0) = r2; \ +l0_%=: if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r1; \ +l1_%=: r1 = *(u64*)(r6 + 0); \ + .8byte %[st_mem]; \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_insn(st_mem, + BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42)) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of different pointers stx - ctx and sock") +__failure __msg("type=ctx expected=sock") +__naked void pointers_stx_ctx_and_sock(void) +{ + asm volatile (" \ + r8 = r1; \ + /* struct bpf_sock *sock = bpf_sock_lookup(...); */\ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r2 = r0; \ + /* u64 foo; */ \ + /* void *target = &foo; */ \ + r6 = r10; \ + r6 += -8; \ + r1 = r8; \ + /* if (skb == NULL) *target = sock; */ \ + if r1 == 0 goto l0_%=; \ + *(u64*)(r6 + 0) = r2; \ +l0_%=: /* else *target = skb; */ \ + if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r1; \ +l1_%=: /* struct __sk_buff *skb = *target; */ \ + r1 = *(u64*)(r6 + 0); \ + /* skb->mark = 42; */ \ + r3 = 42; \ + *(u32*)(r1 + %[__sk_buff_mark]) = r3; \ + /* if (sk) bpf_sk_release(sk) */ \ + if r1 == 0 goto l2_%=; \ + call %[bpf_sk_release]; \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of different pointers stx - leak sock") +__failure +//.errstr = "same insn cannot be used with different pointers", +__msg("Unreleased reference") +__naked void different_pointers_stx_leak_sock(void) +{ + asm volatile (" \ + r8 = r1; \ + /* struct bpf_sock *sock = bpf_sock_lookup(...); */\ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r2 = r0; \ + /* u64 foo; */ \ + /* void *target = &foo; */ \ + r6 = r10; \ + r6 += -8; \ + r1 = r8; \ + /* if (skb == NULL) *target = sock; */ \ + if r1 == 0 goto l0_%=; \ + *(u64*)(r6 + 0) = r2; \ +l0_%=: /* else *target = skb; */ \ + if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r1; \ +l1_%=: /* struct __sk_buff *skb = *target; */ \ + r1 = *(u64*)(r6 + 0); \ + /* skb->mark = 42; */ \ + r3 = 42; \ + *(u32*)(r1 + %[__sk_buff_mark]) = r3; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of different pointers stx - sock and ctx (read)") +__failure __msg("same insn cannot be used with different pointers") +__naked void stx_sock_and_ctx_read(void) +{ + asm volatile (" \ + r8 = r1; \ + /* struct bpf_sock *sock = bpf_sock_lookup(...); */\ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r2 = r0; \ + /* u64 foo; */ \ + /* void *target = &foo; */ \ + r6 = r10; \ + r6 += -8; \ + r1 = r8; \ + /* if (skb) *target = skb */ \ + if r1 == 0 goto l0_%=; \ + *(u64*)(r6 + 0) = r1; \ +l0_%=: /* else *target = sock */ \ + if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r2; \ +l1_%=: /* struct bpf_sock *sk = *target; */ \ + r1 = *(u64*)(r6 + 0); \ + /* if (sk) u32 foo = sk->mark; bpf_sk_release(sk); */\ + if r1 == 0 goto l2_%=; \ + r3 = *(u32*)(r1 + 
%[bpf_sock_mark]); \ + call %[bpf_sk_release]; \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(bpf_sock_mark, offsetof(struct bpf_sock, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("tc") +__description("unpriv: spill/fill of different pointers stx - sock and ctx (write)") +__failure +//.errstr = "same insn cannot be used with different pointers", +__msg("cannot write into sock") +__naked void stx_sock_and_ctx_write(void) +{ + asm volatile (" \ + r8 = r1; \ + /* struct bpf_sock *sock = bpf_sock_lookup(...); */\ +" BPF_SK_LOOKUP(bpf_sk_lookup_tcp) +" r2 = r0; \ + /* u64 foo; */ \ + /* void *target = &foo; */ \ + r6 = r10; \ + r6 += -8; \ + r1 = r8; \ + /* if (skb) *target = skb */ \ + if r1 == 0 goto l0_%=; \ + *(u64*)(r6 + 0) = r1; \ +l0_%=: /* else *target = sock */ \ + if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r2; \ +l1_%=: /* struct bpf_sock *sk = *target; */ \ + r1 = *(u64*)(r6 + 0); \ + /* if (sk) sk->mark = 42; bpf_sk_release(sk); */\ + if r1 == 0 goto l2_%=; \ + r3 = 42; \ + *(u32*)(r1 + %[bpf_sock_mark]) = r3; \ + call %[bpf_sk_release]; \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(bpf_sock_mark, offsetof(struct bpf_sock, mark)), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: write pointer into map elem value") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) +__naked void pointer_into_map_elem_value(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + *(u64*)(r0 + 0) = r0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("alu32: mov u32 const") +__success __failure_unpriv __msg_unpriv("R7 invalid mem access 'scalar'") +__retval(0) +__naked void alu32_mov_u32_const(void) +{ + asm volatile (" \ + w7 = 0; \ + w7 &= 1; \ + w0 = w7; \ + if r0 == 0 goto l0_%=; \ + r0 = *(u64*)(r7 + 0); \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: partial copy of pointer") +__success __failure_unpriv __msg_unpriv("R10 partial copy") +__retval(0) +__naked void unpriv_partial_copy_of_pointer(void) +{ + asm volatile (" \ + w1 = w10; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: pass pointer to tail_call") +__success __failure_unpriv __msg_unpriv("R3 leaks addr into helper") +__retval(0) +__naked void pass_pointer_to_tail_call(void) +{ + asm volatile (" \ + r3 = r1; \ + r2 = %[map_prog1_socket] ll; \ + call %[bpf_tail_call]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_tail_call), + __imm_addr(map_prog1_socket) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: cmp map pointer with zero") +__success __failure_unpriv __msg_unpriv("R1 pointer comparison") +__retval(0) +__naked void cmp_map_pointer_with_zero(void) +{ + asm volatile (" \ + r1 = 0; \ + r1 = %[map_hash_8b] ll; \ + if r1 == 0 goto l0_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("unpriv: write into frame pointer") +__failure __msg("frame pointer is read only") +__failure_unpriv +__naked void unpriv_write_into_frame_pointer(void) +{ + asm volatile (" \ + r10 = r1; \ + r0 = 0; \ + exit; 
\ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: spill/fill frame pointer") +__failure __msg("frame pointer is read only") +__failure_unpriv +__naked void unpriv_spill_fill_frame_pointer(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + *(u64*)(r6 + 0) = r10; \ + r10 = *(u64*)(r6 + 0); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: cmp of frame pointer") +__success __failure_unpriv __msg_unpriv("R10 pointer comparison") +__retval(0) +__naked void unpriv_cmp_of_frame_pointer(void) +{ + asm volatile (" \ + if r10 == 0 goto l0_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: adding of fp, reg") +__success __failure_unpriv +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__retval(0) +__naked void unpriv_adding_of_fp_reg(void) +{ + asm volatile (" \ + r0 = 0; \ + r1 = 0; \ + r1 += r10; \ + *(u64*)(r1 - 8) = r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: adding of fp, imm") +__success __failure_unpriv +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__retval(0) +__naked void unpriv_adding_of_fp_imm(void) +{ + asm volatile (" \ + r0 = 0; \ + r1 = r10; \ + r1 += 0; \ + *(u64*)(r1 - 8) = r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unpriv: cmp of stack pointer") +__success __failure_unpriv __msg_unpriv("R2 pointer comparison") +__retval(0) +__naked void unpriv_cmp_of_stack_pointer(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + if r2 == 0 goto l0_%=; \ +l0_%=: r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c b/tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c new file mode 100644 index 000000000000..4d77407a0a79 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c @@ -0,0 +1,34 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/unpriv.c */ + +#include "vmlinux.h" +#include +#include "bpf_misc.h" + +SEC("perf_event") +__description("unpriv: spill/fill of different pointers ldx") +__failure __msg("same insn cannot be used with different pointers") +__naked void fill_of_different_pointers_ldx(void) +{ + asm volatile (" \ + r6 = r10; \ + r6 += -8; \ + if r1 == 0 goto l0_%=; \ + r2 = r10; \ + r2 += %[__imm_0]; \ + *(u64*)(r6 + 0) = r2; \ +l0_%=: if r1 != 0 goto l1_%=; \ + *(u64*)(r6 + 0) = r1; \ +l1_%=: r1 = *(u64*)(r6 + 0); \ + r1 = *(u64*)(r1 + %[sample_period]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__imm_0, + -(__s32) offsetof(struct bpf_perf_event_data, sample_period) - 8), + __imm_const(sample_period, + offsetof(struct bpf_perf_event_data, sample_period)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c deleted file mode 100644 index af0c0f336625..000000000000 --- a/tools/testing/selftests/bpf/verifier/unpriv.c +++ /dev/null @@ -1,562 +0,0 @@ -{ - "unpriv: return pointer", - .insns = { - BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 leaks addr", - .retval = POINTER_VALUE, -}, -{ - "unpriv: add const to pointer", - .insns = { - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, -}, -{ - "unpriv: add pointer to 
pointer", - .insns = { - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_10), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R1 pointer += pointer", -}, -{ - "unpriv: neg pointer", - .insns = { - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 pointer arithmetic", -}, -{ - "unpriv: cmp pointer with const", - .insns = { - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 pointer comparison", -}, -{ - "unpriv: cmp pointer with pointer", - .insns = { - BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R10 pointer comparison", -}, -{ - "unpriv: check that printk is disallowed", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_trace_printk), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "unknown func bpf_trace_printk#6", - .result_unpriv = REJECT, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "unpriv: pass pointer to helper function", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "R4 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: indirectly pass pointer on stack to helper function", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "invalid indirect read from stack R2 off -8+0 size 8", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: mangle pointer on stack 1", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8), - BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "attempt to corrupt spilled", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: mangle pointer on stack 2", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8), - BPF_ST_MEM(BPF_B, BPF_REG_10, -1, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "attempt to corrupt spilled", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: read pointer from stack in small chunks", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid size", - .result = REJECT, -}, -{ - "unpriv: write pointer into ctx", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 
leaks addr", - .result_unpriv = REJECT, - .errstr = "invalid bpf_context access", - .result = REJECT, -}, -{ - "unpriv: spill/fill of ctx", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, -}, -{ - "unpriv: spill/fill of ctx 2", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of ctx 3", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R1 type=fp expected=ctx", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of ctx 4", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW, - BPF_REG_10, BPF_REG_0, -8, BPF_ADD), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R1 type=scalar expected=ctx", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of different pointers stx", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, 42), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3, - offsetof(struct __sk_buff, mark)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "same insn cannot be used with different pointers", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - /* Same as above, but use BPF_ST_MEM to save 42 - * instead of BPF_STX_MEM. 
- */ - "unpriv: spill/fill of different pointers st", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "same insn cannot be used with different pointers", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of different pointers stx - ctx and sock", - .insns = { - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - /* struct bpf_sock *sock = bpf_sock_lookup(...); */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - /* u64 foo; */ - /* void *target = &foo; */ - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - /* if (skb == NULL) *target = sock; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - /* else *target = skb; */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - /* struct __sk_buff *skb = *target; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - /* skb->mark = 42; */ - BPF_MOV64_IMM(BPF_REG_3, 42), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3, - offsetof(struct __sk_buff, mark)), - /* if (sk) bpf_sk_release(sk) */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "type=ctx expected=sock", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of different pointers stx - leak sock", - .insns = { - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - /* struct bpf_sock *sock = bpf_sock_lookup(...); */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - /* u64 foo; */ - /* void *target = &foo; */ - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - /* if (skb == NULL) *target = sock; */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - /* else *target = skb; */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - /* struct __sk_buff *skb = *target; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - /* skb->mark = 42; */ - BPF_MOV64_IMM(BPF_REG_3, 42), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3, - offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .result = REJECT, - //.errstr = "same insn cannot be used with different pointers", - .errstr = "Unreleased reference", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of different pointers stx - sock and ctx (read)", - .insns = { - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - /* struct bpf_sock *sock = bpf_sock_lookup(...); */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - /* u64 foo; */ - /* void *target = &foo; */ - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - /* if (skb) *target = skb */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - /* else *target = sock */ - 
BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - /* struct bpf_sock *sk = *target; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - /* if (sk) u32 foo = sk->mark; bpf_sk_release(sk); */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct bpf_sock, mark)), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "same insn cannot be used with different pointers", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of different pointers stx - sock and ctx (write)", - .insns = { - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - /* struct bpf_sock *sock = bpf_sock_lookup(...); */ - BPF_SK_LOOKUP(sk_lookup_tcp), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - /* u64 foo; */ - /* void *target = &foo; */ - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - /* if (skb) *target = skb */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - /* else *target = sock */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - /* struct bpf_sock *sk = *target; */ - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - /* if (sk) sk->mark = 42; bpf_sk_release(sk); */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 42), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3, - offsetof(struct bpf_sock, mark)), - BPF_EMIT_CALL(BPF_FUNC_sk_release), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - //.errstr = "same insn cannot be used with different pointers", - .errstr = "cannot write into sock", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "unpriv: spill/fill of different pointers ldx", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, - -(__s32)offsetof(struct bpf_perf_event_data, - sample_period) - 8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, - offsetof(struct bpf_perf_event_data, sample_period)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "same insn cannot be used with different pointers", - .prog_type = BPF_PROG_TYPE_PERF_EVENT, -}, -{ - "unpriv: write pointer into map elem value", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "alu32: mov u32 const", - .insns = { - BPF_MOV32_IMM(BPF_REG_7, 0), - BPF_ALU32_IMM(BPF_AND, BPF_REG_7, 1), - BPF_MOV32_REG(BPF_REG_0, BPF_REG_7), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R7 invalid mem access 'scalar'", - .result_unpriv = REJECT, - .result = ACCEPT, - .retval = 0, -}, -{ - "unpriv: partial copy of pointer", - .insns = { - 
BPF_MOV32_REG(BPF_REG_1, BPF_REG_10), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R10 partial copy", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: pass pointer to tail_call", - .insns = { - BPF_MOV64_REG(BPF_REG_3, BPF_REG_1), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_prog1 = { 1 }, - .errstr_unpriv = "R3 leaks addr into helper", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: cmp map pointer with zero", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - .errstr_unpriv = "R1 pointer comparison", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: write into frame pointer", - .insns = { - BPF_MOV64_REG(BPF_REG_10, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "frame pointer is read only", - .result = REJECT, -}, -{ - "unpriv: spill/fill frame pointer", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "frame pointer is read only", - .result = REJECT, -}, -{ - "unpriv: cmp of frame pointer", - .insns = { - BPF_JMP_IMM(BPF_JEQ, BPF_REG_10, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R10 pointer comparison", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: adding of fp, reg", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_10), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: adding of fp, imm", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "unpriv: cmp of stack pointer", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R2 pointer comparison", - .result_unpriv = REJECT, - .result = ACCEPT, -}, From patchwork Fri Apr 21 17:42:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220528 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CCC64C7EE20 for ; Fri, 21 Apr 2023 17:43:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232546AbjDURnu (ORCPT ); Fri, 21 Apr 2023 13:43:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S232813AbjDURnc (ORCPT ); Fri, 21 Apr 2023 13:43:32 -0400 Received: from mail-wr1-x42f.google.com (mail-wr1-x42f.google.com [IPv6:2a00:1450:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13576A25F for ; Fri, 21 Apr 2023 10:43:14 -0700 (PDT) Received: by mail-wr1-x42f.google.com with SMTP id ffacd0b85a97d-2f917585b26so1860763f8f.0 for ; Fri, 21 Apr 2023 10:43:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098992; x=1684690992; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=2YnTInlSOEXnQAAqK7LhsEiKq5CksDofUuibkqgDDxE=; b=JMbvlBFvHzqcFARWinKw3en7EYa3VdlOu5/9IiHO41IfGuO/7QUCuMC3y9pgKXIh/X cp8MxXPKg3OnBbJ1jwpLMtjC55yk+si2z/1IKE2+9JEsuBPz6T6xfdjzk216R4l9LO83 XErKj6784PtueINLgLyj8faIYIyF2JzKEFRfUDRp/gQfyCd81sk39E0g9h3bfQ3vneSD lOqvQ3t8qRHKgzamoy66YxORYHbf6l3FVRfaz8P0+E9lBZxr/AnKcI+Yg3/4uYxT6fJX D8xp8ocfAyz0RhqAN3ZKyL0pwzvXVu2j6yyF9+7D8G0gK46P9TDqWmj39e0cgYPTV9Hz kA9g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098992; x=1684690992; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=2YnTInlSOEXnQAAqK7LhsEiKq5CksDofUuibkqgDDxE=; b=iQ4gRCJKBNYmKzjnOTuyGjMTBtWMEgW22v4pX5k4DgzBI9FxRFlZ9bGQ12lA8YhfFE Yjk6ZJmxI8Bk4YZ9tS1K2162oBkcSW4l7wHBuwuE4KAPtlE2AWrNP9pZjUpoQtESuZGC 2G+hw7hPBjJCs7epvjQbGK2cDxe2XRoLqPKl6V+I8zvG/WdUgajgylnU1KGxKZnadpXt HcP6N7Ww+Wo01TV2qnmMeYg8eWa/e3LGE3p+Fcn1T2mWjGLWIerJaoKtozq5p/Xhlit+ QIOoLs/AL0QKyXCJg3MpaDDGES4b3RGF7F/nPu2/wUuMnuJnh5AQJb8JjKtc9XsoCeGH 7zQQ== X-Gm-Message-State: AAQBX9dPRqQ7AsT3xFD7O+GLMW0NU/DTEB0ikEyCUeR+m9jSXVqMKCFH 47VXI6+eRHrIkDGwRCpRRU6ZIv0umRt/9w== X-Google-Smtp-Source: AKy350bdes4OUEMHQk9VVs36urKV4BdzlFZMaE4uYJGfEaIyOtQaKo0J7VlRgjVem20SrdHj+/DTvA== X-Received: by 2002:a5d:52d2:0:b0:2cf:eeae:88c3 with SMTP id r18-20020a5d52d2000000b002cfeeae88c3mr4387092wrv.32.1682098992352; Fri, 21 Apr 2023 10:43:12 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:11 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 23/24] selftests/bpf: verifier/value_illegal_alu converted to inline assembly Date: Fri, 21 Apr 2023 20:42:33 +0300 Message-Id: <20230421174234.2391278-24-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/value_illegal_alu automatically converted to use inline assembly. 
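To illustrate the shape of these conversions (a hypothetical minimal case, not a hunk from this patch; the test name and program body are placeholders), a macro-based entry such as

{
	"dummy accept test",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 0,
},

is expressed as a __naked function whose metadata is carried by the bpf_misc.h annotations (__description, __success/__failure, __msg, __retval):

SEC("socket")
__description("dummy accept test")
__success __success_unpriv __retval(0)
__naked void dummy_accept_test(void)
{
	asm volatile ("r0 = 0; exit;" ::: __clobber_all);
}
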
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../bpf/progs/verifier_value_illegal_alu.c | 149 ++++++++++++++++++ .../bpf/verifier/value_illegal_alu.c | 95 ----------- 3 files changed, 151 insertions(+), 95 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c delete mode 100644 tools/testing/selftests/bpf/verifier/value_illegal_alu.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 94405bf00b47..56b9248a15c0 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -60,6 +60,7 @@ #include "verifier_unpriv_perf.skel.h" #include "verifier_value_adj_spill.skel.h" #include "verifier_value.skel.h" +#include "verifier_value_illegal_alu.skel.h" #include "verifier_value_or_null.skel.h" #include "verifier_var_off.skel.h" #include "verifier_xadd.skel.h" @@ -156,6 +157,7 @@ void test_verifier_unpriv(void) { RUN(verifier_unpriv); } void test_verifier_unpriv_perf(void) { RUN(verifier_unpriv_perf); } void test_verifier_value_adj_spill(void) { RUN(verifier_value_adj_spill); } void test_verifier_value(void) { RUN(verifier_value); } +void test_verifier_value_illegal_alu(void) { RUN(verifier_value_illegal_alu); } void test_verifier_value_or_null(void) { RUN(verifier_value_or_null); } void test_verifier_var_off(void) { RUN(verifier_var_off); } void test_verifier_xadd(void) { RUN(verifier_xadd); } diff --git a/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c new file mode 100644 index 000000000000..71814a753216 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c @@ -0,0 +1,149 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/value_illegal_alu.c */ + +#include +#include +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("socket") +__description("map element value illegal alu op, 1") +__failure __msg("R0 bitwise operator &= on pointer") +__failure_unpriv +__naked void value_illegal_alu_op_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 &= 8; \ + r1 = 22; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("map element value illegal alu op, 2") +__failure __msg("R0 32-bit pointer arithmetic prohibited") +__failure_unpriv +__naked void value_illegal_alu_op_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + w0 += 0; \ + r1 = 22; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("map element value illegal alu op, 3") +__failure __msg("R0 pointer arithmetic with /= operator") +__failure_unpriv +__naked void value_illegal_alu_op_3(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ 
+ r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 /= 42; \ + r1 = 22; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("map element value illegal alu op, 4") +__failure __msg("invalid mem access 'scalar'") +__failure_unpriv __msg_unpriv("R0 pointer arithmetic prohibited") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void value_illegal_alu_op_4(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 = be64 r0; \ + r1 = 22; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("map element value illegal alu op, 5") +__failure __msg("R0 invalid mem access 'scalar'") +__msg_unpriv("leaking pointer from stack off -8") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void value_illegal_alu_op_5(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r3 = 4096; \ + r2 = r10; \ + r2 += -8; \ + *(u64*)(r2 + 0) = r0; \ + lock *(u64 *)(r2 + 0) += r3; \ + r0 = *(u64*)(r2 + 0); \ + r1 = 22; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c deleted file mode 100644 index d6f29eb4bd57..000000000000 --- a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c +++ /dev/null @@ -1,95 +0,0 @@ -{ - "map element value illegal alu op, 1", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 8), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 bitwise operator &= on pointer", - .result = REJECT, -}, -{ - "map element value illegal alu op, 2", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 32-bit pointer arithmetic prohibited", - .result = REJECT, -}, -{ - "map element value illegal alu op, 3", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, 42), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 pointer arithmetic with /= operator", - .result = REJECT, -}, -{ - "map element value illegal alu op, 4", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - 
BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ENDIAN(BPF_FROM_BE, BPF_REG_0, 64), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 pointer arithmetic prohibited", - .errstr = "invalid mem access 'scalar'", - .result = REJECT, - .result_unpriv = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map element value illegal alu op, 5", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_IMM(BPF_REG_3, 4096), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_2, BPF_REG_3, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "leaking pointer from stack off -8", - .errstr = "R0 invalid mem access 'scalar'", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, From patchwork Fri Apr 21 17:42:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13220556 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0602BC7618E for ; Fri, 21 Apr 2023 17:44:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233513AbjDURoJ (ORCPT ); Fri, 21 Apr 2023 13:44:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44438 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232766AbjDURng (ORCPT ); Fri, 21 Apr 2023 13:43:36 -0400 Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com [IPv6:2a00:1450:4864:20::329]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4833F18D for ; Fri, 21 Apr 2023 10:43:16 -0700 (PDT) Received: by mail-wm1-x329.google.com with SMTP id 5b1f17b1804b1-3f1763eea08so21186805e9.2 for ; Fri, 21 Apr 2023 10:43:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682098994; x=1684690994; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=U3zKftaKj5yNZCfKYhwoITJngxvnHBNofVJGf00wzGE=; b=AqRKr5alWkRpFw1PV4hrpQREXBM5HqGuM3q9il0i27pbO/Lf3MWJmOpU3q36tYSjsr teoALbkWufUnSa+D+a0b3hfY95cDZhgFkq0a+FB8SDoX/vqDWi3sT0c3XhWsNh2issPG 7BEpMW+TcophyuHXUxNt9mzZFHCwYdSyw1e1htxnqq+T9WpaXHotYK/kfrmJTsPUlScP 63j75ZDdiDrxd1epDFtR0pSNVUMOIFjx6tzuR8Ivllzwf8PKDyhPVPj+ujt/2+6mwoKQ P7NkZAoECMv2XrGZSnmsIyNRf9Pba47jRVATN5z0iL79iHBrx4TpKIJuz5AX3GvDwXgx 3YCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682098994; x=1684690994; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=U3zKftaKj5yNZCfKYhwoITJngxvnHBNofVJGf00wzGE=; b=klGEzWkReUtwhrQ541NmldfqcK5u0BQaM7ozPGLusIi2f4zQOSlCK/5HrPgbwk/pAI 
cOiyxYcEFUwiox/FQnq+8reiGEeZxVCQYgMt2PW2Bm87m7paCsCoCGobNbOnyOuVaLnS 8X8VYAZKmjQ2HnnIgX3+rJ4R+Ot6DlCLmgg5xsBOUn0Mo6M+iF6GKQlyUnQ5fynG4G5c CePOU8g57QN1P4I7mVm6lWC9b9CjbePrDWuPmcGRizUh7p998QD6CtFQ61Fmam6ZTFda ihMS8SF/Yux4XH3yGRaILsu9izlIac/sY3xRfJ6/PSNn0qJqsl7KMRQtsSOwpf5RnxDv a4Bw== X-Gm-Message-State: AAQBX9dYd+nntHlCDDI/nBr+Y8BtL1kV4m2rxlmrogFmkGzWG3KJbizM aOgtH7NvuOPoKLSgISz2e8TsA0u4n7Hldg== X-Google-Smtp-Source: AKy350buaBQHbyqzB7TjtOqzm0HV2+CNBClI+NO562f/zaQN4sBD+f5yXruZjZHmse/BNKxzeJmSZQ== X-Received: by 2002:adf:fd05:0:b0:2ef:b3e6:8293 with SMTP id e5-20020adffd05000000b002efb3e68293mr4306233wrr.9.1682098993733; Fri, 21 Apr 2023 10:43:13 -0700 (PDT) Received: from bigfoot.. (boundsly.muster.volia.net. [93.72.16.93]) by smtp.gmail.com with ESMTPSA id f4-20020a0560001b0400b002ffbf2213d4sm4849933wrz.75.2023.04.21.10.43.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 10:43:13 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yhs@fb.com, Eduard Zingerman Subject: [PATCH bpf-next 24/24] selftests/bpf: verifier/value_ptr_arith converted to inline assembly Date: Fri, 21 Apr 2023 20:42:34 +0300 Message-Id: <20230421174234.2391278-25-eddyz87@gmail.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230421174234.2391278-1-eddyz87@gmail.com> References: <20230421174234.2391278-1-eddyz87@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Test verifier/value_ptr_arith automatically converted to use inline assembly. Test cases "sanitation: alu with different scalars 2" and "sanitation: alu with different scalars 3" are updated to avoid -ENOENT as return value, as __retval() annotation only supports numeric literals. 
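For example, in the converted "sanitation: alu with different scalars 2" the program tail now reads (excerpt from the new file, comments added here for illustration):

	r8 = r6;		/* r6, r7 hold the two bpf_map_delete_elem() return values */
	r8 += r7;
	r0 = r8;
	r0 += %[einval];	/* compensate the accumulated error code */
	r0 += %[einval];
	exit;

with __imm_const(einval, EINVAL) supplying the constant, so the program result is the plain literal 0 expected by __retval(0).
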
Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 34 +- .../bpf/progs/verifier_value_ptr_arith.c | 1423 +++++++++++++++++ .../selftests/bpf/verifier/value_ptr_arith.c | 1140 ------------- 3 files changed, 1451 insertions(+), 1146 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c delete mode 100644 tools/testing/selftests/bpf/verifier/value_ptr_arith.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 56b9248a15c0..bcb955adb447 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -62,6 +62,7 @@ #include "verifier_value.skel.h" #include "verifier_value_illegal_alu.skel.h" #include "verifier_value_or_null.skel.h" +#include "verifier_value_ptr_arith.skel.h" #include "verifier_var_off.skel.h" #include "verifier_xadd.skel.h" #include "verifier_xdp.skel.h" @@ -164,29 +165,50 @@ void test_verifier_xadd(void) { RUN(verifier_xadd); } void test_verifier_xdp(void) { RUN(verifier_xdp); } void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); } -static int init_array_access_maps(struct bpf_object *obj) +static int init_test_val_map(struct bpf_object *obj, char *map_name) { - struct bpf_map *array_ro; struct test_val value = { .index = (6 + 1) * sizeof(int), .foo[6] = 0xabcdef12, }; + struct bpf_map *map; int err, key = 0; - array_ro = bpf_object__find_map_by_name(obj, "map_array_ro"); - if (!ASSERT_OK_PTR(array_ro, "lookup map_array_ro")) + map = bpf_object__find_map_by_name(obj, map_name); + if (!map) { + PRINT_FAIL("Can't find map '%s'\n", map_name); return -EINVAL; + } - err = bpf_map_update_elem(bpf_map__fd(array_ro), &key, &value, 0); - if (!ASSERT_OK(err, "map_array_ro update")) + err = bpf_map_update_elem(bpf_map__fd(map), &key, &value, 0); + if (err) { + PRINT_FAIL("Error while updating map '%s': %d\n", map_name, err); return err; + } return 0; } +static int init_array_access_maps(struct bpf_object *obj) +{ + return init_test_val_map(obj, "map_array_ro"); +} + void test_verifier_array_access(void) { run_tests_aux("verifier_array_access", verifier_array_access__elf_bytes, init_array_access_maps); } + +static int init_value_ptr_arith_maps(struct bpf_object *obj) +{ + return init_test_val_map(obj, "map_array_48b"); +} + +void test_verifier_value_ptr_arith(void) +{ + run_tests_aux("verifier_value_ptr_arith", + verifier_value_ptr_arith__elf_bytes, + init_value_ptr_arith_maps); +} diff --git a/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c b/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c new file mode 100644 index 000000000000..5ba6e53571c8 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c @@ -0,0 +1,1423 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/value_ptr_arith.c */ + +#include +#include +#include +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +struct other_val { + long long foo; + long long bar; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct other_val); +} map_hash_16b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + 
__uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("socket") +__description("map access: known scalar += value_ptr unknown vs const") +__success __failure_unpriv +__msg_unpriv("R1 tried to add from different maps, paths or scalars") +__retval(1) +__naked void value_ptr_unknown_vs_const(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 6; \ + r1 = -r1; \ + r1 &= 0x7; \ + goto l4_%=; \ +l3_%=: r1 = 3; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr const vs unknown") +__success __failure_unpriv +__msg_unpriv("R1 tried to add from different maps, paths or scalars") +__retval(1) +__naked void value_ptr_const_vs_unknown(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 3; \ + goto l4_%=; \ +l3_%=: r1 = 6; \ + r1 = -r1; \ + r1 &= 0x7; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr const vs const (ne)") +__success __failure_unpriv +__msg_unpriv("R1 tried to add from different maps, paths or scalars") +__retval(1) +__naked void ptr_const_vs_const_ne(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 3; \ + goto l4_%=; \ +l3_%=: r1 = 5; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr const vs const (eq)") +__success __success_unpriv __retval(1) +__naked void ptr_const_vs_const_eq(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 5; \ + goto l4_%=; \ +l3_%=: r1 = 5; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ 
+l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr unknown vs unknown (eq)") +__success __success_unpriv __retval(1) +__naked void ptr_unknown_vs_unknown_eq(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 6; \ + r1 = -r1; \ + r1 &= 0x7; \ + goto l4_%=; \ +l3_%=: r1 = 6; \ + r1 = -r1; \ + r1 &= 0x7; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr unknown vs unknown (lt)") +__success __failure_unpriv +__msg_unpriv("R1 tried to add from different maps, paths or scalars") +__retval(1) +__naked void ptr_unknown_vs_unknown_lt(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 6; \ + r1 = -r1; \ + r1 &= 0x3; \ + goto l4_%=; \ +l3_%=: r1 = 6; \ + r1 = -r1; \ + r1 &= 0x7; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr unknown vs unknown (gt)") +__success __failure_unpriv +__msg_unpriv("R1 tried to add from different maps, paths or scalars") +__retval(1) +__naked void ptr_unknown_vs_unknown_gt(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r4 = *(u8*)(r0 + 0); \ + if r4 == 1 goto l3_%=; \ + r1 = 6; \ + r1 = -r1; \ + r1 &= 0x7; \ + goto l4_%=; \ +l3_%=: r1 = 6; \ + r1 = -r1; \ + r1 &= 0x3; \ +l4_%=: r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr from different maps") +__success __success_unpriv __retval(1) +__naked void value_ptr_from_different_maps(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 
0 goto l2_%=; \ + r1 = 4; \ + r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= known scalar from different maps") +__success __failure_unpriv +__msg_unpriv("R0 min value is outside of the allowed memory range") +__retval(1) +__naked void known_scalar_from_different_maps(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_16b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r1 = 4; \ + r0 -= r1; \ + r0 += r1; \ + r0 = *(u8*)(r0 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_16b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr from different maps, but same value properties") +__success __success_unpriv __retval(1) +__naked void maps_but_same_value_properties(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_len]); \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + if r0 == 1 goto l0_%=; \ + r1 = %[map_hash_48b] ll; \ + if r0 != 1 goto l1_%=; \ +l0_%=: r1 = %[map_array_48b] ll; \ +l1_%=: call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l2_%=; \ + r1 = 4; \ + r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l2_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_addr(map_hash_48b), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)) + : __clobber_all); +} + +SEC("socket") +__description("map access: mixing value pointer and scalar, 1") +__success __failure_unpriv __msg_unpriv("R2 pointer comparison prohibited") +__retval(0) +__naked void value_pointer_and_scalar_1(void) +{ + asm volatile (" \ + /* load map value pointer into r0 and r2 */ \ + r0 = 1; \ + r1 = %[map_array_48b] ll; \ + r2 = r10; \ + r2 += -16; \ + r6 = 0; \ + *(u64*)(r10 - 16) = r6; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: /* load some number from the map into r1 */ \ + r1 = *(u8*)(r0 + 0); \ + /* depending on r1, branch: */ \ + if r1 != 0 goto l1_%=; \ + /* branch A */ \ + r2 = r0; \ + r3 = 0; \ + goto l2_%=; \ +l1_%=: /* branch B */ \ + r2 = 0; \ + r3 = 0x100000; \ +l2_%=: /* common instruction */ \ + r2 += r3; \ + /* depending on r1, branch: */ \ + if r1 != 0 goto l3_%=; \ + /* branch A */ \ + goto l4_%=; \ +l3_%=: /* branch B */ \ + r0 = 0x13371337; \ + /* verifier follows fall-through */ \ + if r2 != 0x100000 goto l4_%=; \ + r0 = 0; \ + exit; \ +l4_%=: /* fake-dead code; targeted from branch A to \ + * prevent dead code sanitization \ + */ \ + r0 = *(u8*)(r0 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: mixing value pointer and scalar, 2") +__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'") +__retval(0) +__naked void value_pointer_and_scalar_2(void) +{ + asm volatile (" \ + /* load map value pointer into r0 and r2 */ \ + r0 = 1; \ + r1 = %[map_array_48b] ll; \ + r2 = r10; \ + r2 += -16; \ + r6 = 0; \ + *(u64*)(r10 - 16) = r6; \ + 
call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: /* load some number from the map into r1 */ \ + r1 = *(u8*)(r0 + 0); \ + /* depending on r1, branch: */ \ + if r1 == 0 goto l1_%=; \ + /* branch A */ \ + r2 = 0; \ + r3 = 0x100000; \ + goto l2_%=; \ +l1_%=: /* branch B */ \ + r2 = r0; \ + r3 = 0; \ +l2_%=: /* common instruction */ \ + r2 += r3; \ + /* depending on r1, branch: */ \ + if r1 != 0 goto l3_%=; \ + /* branch A */ \ + goto l4_%=; \ +l3_%=: /* branch B */ \ + r0 = 0x13371337; \ + /* verifier follows fall-through */ \ + if r2 != 0x100000 goto l4_%=; \ + r0 = 0; \ + exit; \ +l4_%=: /* fake-dead code; targeted from branch A to \ + * prevent dead code sanitization, rejected \ + * via branch B however \ + */ \ + r0 = *(u8*)(r0 + 0); \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("sanitation: alu with different scalars 1") +__success __success_unpriv __retval(0x100000) +__naked void alu_with_different_scalars_1(void) +{ + asm volatile (" \ + r0 = 1; \ + r1 = %[map_array_48b] ll; \ + r2 = r10; \ + r2 += -16; \ + r6 = 0; \ + *(u64*)(r10 - 16) = r6; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = *(u32*)(r0 + 0); \ + if r1 == 0 goto l1_%=; \ + r2 = 0; \ + r3 = 0x100000; \ + goto l2_%=; \ +l1_%=: r2 = 42; \ + r3 = 0x100001; \ +l2_%=: r2 += r3; \ + r0 = r2; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("sanitation: alu with different scalars 2") +__success __success_unpriv __retval(0) +__naked void alu_with_different_scalars_2(void) +{ + asm volatile (" \ + r0 = 1; \ + r1 = %[map_array_48b] ll; \ + r6 = r1; \ + r2 = r10; \ + r2 += -16; \ + r7 = 0; \ + *(u64*)(r10 - 16) = r7; \ + call %[bpf_map_delete_elem]; \ + r7 = r0; \ + r1 = r6; \ + r2 = r10; \ + r2 += -16; \ + call %[bpf_map_delete_elem]; \ + r6 = r0; \ + r8 = r6; \ + r8 += r7; \ + r0 = r8; \ + r0 += %[einval]; \ + r0 += %[einval]; \ + exit; \ +" : + : __imm(bpf_map_delete_elem), + __imm_addr(map_array_48b), + __imm_const(einval, EINVAL) + : __clobber_all); +} + +SEC("socket") +__description("sanitation: alu with different scalars 3") +__success __success_unpriv __retval(0) +__naked void alu_with_different_scalars_3(void) +{ + asm volatile (" \ + r0 = %[einval]; \ + r0 *= -1; \ + r7 = r0; \ + r0 = %[einval]; \ + r0 *= -1; \ + r6 = r0; \ + r8 = r6; \ + r8 += r7; \ + r0 = r8; \ + r0 += %[einval]; \ + r0 += %[einval]; \ + exit; \ +" : + : __imm_const(einval, EINVAL) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, upper oob arith, test 1") +__success __failure_unpriv +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__retval(1) +__naked void upper_oob_arith_test_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 48; \ + r0 += r1; \ + r0 -= r1; \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, upper oob arith, test 2") +__success __failure_unpriv +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__retval(1) +__naked void upper_oob_arith_test_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ 
+ r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 49; \ + r0 += r1; \ + r0 -= r1; \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, upper oob arith, test 3") +__success __success_unpriv __retval(1) +__naked void upper_oob_arith_test_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 47; \ + r0 += r1; \ + r0 -= r1; \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= known scalar, lower oob arith, test 1") +__failure __msg("R0 min value is outside of the allowed memory range") +__failure_unpriv +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__naked void lower_oob_arith_test_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 47; \ + r0 += r1; \ + r1 = 48; \ + r0 -= r1; \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= known scalar, lower oob arith, test 2") +__success __failure_unpriv +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__retval(1) +__naked void lower_oob_arith_test_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 47; \ + r0 += r1; \ + r1 = 48; \ + r0 -= r1; \ + r1 = 1; \ + r0 += r1; \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= known scalar, lower oob arith, test 3") +__success __success_unpriv __retval(1) +__naked void lower_oob_arith_test_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 47; \ + r0 += r1; \ + r1 = 47; \ + r0 -= r1; \ + r0 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar += value_ptr") +__success __success_unpriv __retval(1) +__naked void access_known_scalar_value_ptr_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 4; \ + r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, 1") +__success __success_unpriv __retval(1) +__naked void value_ptr_known_scalar_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ 
+ if r0 == 0 goto l0_%=; \ + r1 = 4; \ + r0 += r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, 2") +__failure __msg("invalid access to map value") +__failure_unpriv +__naked void value_ptr_known_scalar_2_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 49; \ + r0 += r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, 3") +__failure __msg("invalid access to map value") +__failure_unpriv +__naked void value_ptr_known_scalar_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = -1; \ + r0 += r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, 4") +__success __success_unpriv __retval(1) +__naked void value_ptr_known_scalar_4(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 5; \ + r0 += r1; \ + r1 = -2; \ + r0 += r1; \ + r1 = -1; \ + r0 += r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, 5") +__success __success_unpriv __retval(0xabcdef12) +__naked void value_ptr_known_scalar_5(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = %[__imm_0]; \ + r1 += r0; \ + r0 = *(u32*)(r1 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_const(__imm_0, (6 + 1) * sizeof(int)) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += known scalar, 6") +__success __success_unpriv __retval(0xabcdef12) +__naked void value_ptr_known_scalar_6(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = %[__imm_0]; \ + r0 += r1; \ + r1 = %[__imm_1]; \ + r0 += r1; \ + r0 = *(u32*)(r0 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b), + __imm_const(__imm_0, (3 + 1) * sizeof(int)), + __imm_const(__imm_1, 3 * sizeof(int)) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += N, value_ptr -= N known scalar") +__success __success_unpriv __retval(0x12345678) +__naked void value_ptr_n_known_scalar(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + w1 = 0x12345678; \ + *(u32*)(r0 + 0) = r1; \ + r0 += 2; \ + r1 = 2; \ + r0 -= r1; \ + r0 = *(u32*)(r0 + 0); \ +l0_%=: exit; \ +" : + : 
__imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: unknown scalar += value_ptr, 1") +__success __success_unpriv __retval(1) +__naked void unknown_scalar_value_ptr_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= 0xf; \ + r1 += r0; \ + r0 = *(u8*)(r1 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: unknown scalar += value_ptr, 2") +__success __success_unpriv __retval(0xabcdef12) __flag(BPF_F_ANY_ALIGNMENT) +__naked void unknown_scalar_value_ptr_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + r1 &= 31; \ + r1 += r0; \ + r0 = *(u32*)(r1 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: unknown scalar += value_ptr, 3") +__success __failure_unpriv +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__retval(0xabcdef12) __flag(BPF_F_ANY_ALIGNMENT) +__naked void unknown_scalar_value_ptr_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = -1; \ + r0 += r1; \ + r1 = 1; \ + r0 += r1; \ + r1 = *(u32*)(r0 + 0); \ + r1 &= 31; \ + r1 += r0; \ + r0 = *(u32*)(r1 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: unknown scalar += value_ptr, 4") +__failure __msg("R1 max value is outside of the allowed memory range") +__msg_unpriv("R1 pointer arithmetic of map value goes out of range") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void unknown_scalar_value_ptr_4(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 19; \ + r0 += r1; \ + r1 = *(u32*)(r0 + 0); \ + r1 &= 31; \ + r1 += r0; \ + r0 = *(u32*)(r1 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += unknown scalar, 1") +__success __success_unpriv __retval(1) +__naked void value_ptr_unknown_scalar_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= 0xf; \ + r0 += r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += unknown scalar, 2") +__success __success_unpriv __retval(0xabcdef12) __flag(BPF_F_ANY_ALIGNMENT) +__naked void value_ptr_unknown_scalar_2_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + r1 &= 31; \ + r0 += r1; \ + r0 = 
*(u32*)(r0 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += unknown scalar, 3") +__success __success_unpriv __retval(1) +__naked void value_ptr_unknown_scalar_3(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r0 + 0); \ + r2 = *(u64*)(r0 + 8); \ + r3 = *(u64*)(r0 + 16); \ + r1 &= 0xf; \ + r3 &= 1; \ + r3 |= 1; \ + if r2 > r3 goto l0_%=; \ + r0 += r3; \ + r0 = *(u8*)(r0 + 0); \ + r0 = 1; \ +l1_%=: exit; \ +l0_%=: r0 = 2; \ + goto l1_%=; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr += value_ptr") +__failure __msg("R0 pointer += pointer prohibited") +__failure_unpriv +__naked void access_value_ptr_value_ptr_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 += r0; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: known scalar -= value_ptr") +__failure __msg("R1 tried to subtract pointer from scalar") +__failure_unpriv +__naked void access_known_scalar_value_ptr_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 4; \ + r1 -= r0; \ + r0 = *(u8*)(r1 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= known scalar") +__failure __msg("R0 min value is outside of the allowed memory range") +__failure_unpriv +__naked void access_value_ptr_known_scalar(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 4; \ + r0 -= r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= known scalar, 2") +__success __success_unpriv __retval(1) +__naked void value_ptr_known_scalar_2_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 6; \ + r2 = 4; \ + r0 += r1; \ + r0 -= r2; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: unknown scalar -= value_ptr") +__failure __msg("R1 tried to subtract pointer from scalar") +__failure_unpriv +__naked void access_unknown_scalar_value_ptr(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= 0xf; \ + r1 -= r0; \ + r0 = *(u8*)(r1 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); 
+} + +SEC("socket") +__description("map access: value_ptr -= unknown scalar") +__failure __msg("R0 min value is negative") +__failure_unpriv +__naked void access_value_ptr_unknown_scalar(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= 0xf; \ + r0 -= r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= unknown scalar, 2") +__success __failure_unpriv +__msg_unpriv("R0 pointer arithmetic of map value goes out of range") +__retval(1) +__naked void value_ptr_unknown_scalar_2_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= 0xf; \ + r1 |= 0x7; \ + r0 += r1; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= 0x7; \ + r0 -= r1; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: value_ptr -= value_ptr") +__failure __msg("R0 invalid mem access 'scalar'") +__msg_unpriv("R0 pointer -= pointer prohibited") +__naked void access_value_ptr_value_ptr_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 -= r0; \ + r1 = *(u8*)(r0 + 0); \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("map access: trying to leak tainted dst reg") +__failure __msg("math between map_value pointer and 4294967295 is not allowed") +__failure_unpriv +__naked void to_leak_tainted_dst_reg(void) +{ + asm volatile (" \ + r0 = 0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r2 = r0; \ + w1 = 0xFFFFFFFF; \ + w1 = w1; \ + r2 -= r1; \ + *(u64*)(r0 + 0) = r2; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("tc") +__description("32bit pkt_ptr -= scalar") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void _32bit_pkt_ptr_scalar(void) +{ + asm volatile (" \ + r8 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data]); \ + r6 = r7; \ + r6 += 40; \ + if r6 > r8 goto l0_%=; \ + w4 = w7; \ + w6 -= w4; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("32bit scalar -= pkt_ptr") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void _32bit_scalar_pkt_ptr(void) +{ + asm volatile (" \ + r8 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data]); \ + r6 = r7; \ + r6 += 40; \ + if r6 > r8 goto l0_%=; \ + w4 = w6; \ + w4 -= w7; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git 
a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c deleted file mode 100644 index 249187d3c530..000000000000 --- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c +++ /dev/null @@ -1,1140 +0,0 @@ -{ - "map access: known scalar += value_ptr unknown vs const", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_MOV64_IMM(BPF_REG_1, 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 tried to add from different maps, paths or scalars", - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr const vs unknown", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2), - BPF_MOV64_IMM(BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 tried to add from different maps, paths or scalars", - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr const vs const (ne)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2), - BPF_MOV64_IMM(BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_MOV64_IMM(BPF_REG_1, 5), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 
tried to add from different maps, paths or scalars", - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr const vs const (eq)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2), - BPF_MOV64_IMM(BPF_REG_1, 5), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_MOV64_IMM(BPF_REG_1, 5), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr unknown vs unknown (eq)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr unknown vs unknown (lt)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x3), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 tried to add from different maps, paths or scalars", - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr unknown vs unknown (gt)", - .insns = { - BPF_LDX_MEM(BPF_W, 
BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_JMP_IMM(BPF_JA, 0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 tried to add from different maps, paths or scalars", - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr from different maps", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr -= known scalar from different maps", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 min value is outside of the allowed memory range", - .retval = 1, -}, -{ - "map access: known scalar += value_ptr from different maps, but same value properties", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, 
BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 5 }, - .fixup_map_array_48b = { 8 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: mixing value pointer and scalar, 1", - .insns = { - // load map value pointer into r0 and r2 - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LD_MAP_FD(BPF_REG_ARG1, 0), - BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - // load some number from the map into r1 - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - // depending on r1, branch: - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 3), - // branch A - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_JMP_A(2), - // branch B - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0x100000), - // common instruction - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - // depending on r1, branch: - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - // branch A - BPF_JMP_A(4), - // branch B - BPF_MOV64_IMM(BPF_REG_0, 0x13371337), - // verifier follows fall-through - BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - // fake-dead code; targeted from branch A to - // prevent dead code sanitization - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R2 pointer comparison prohibited", - .retval = 0, -}, -{ - "map access: mixing value pointer and scalar, 2", - .insns = { - // load map value pointer into r0 and r2 - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LD_MAP_FD(BPF_REG_ARG1, 0), - BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - // load some number from the map into r1 - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - // depending on r1, branch: - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), - // branch A - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0x100000), - BPF_JMP_A(2), - // branch B - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 0), - // common instruction - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - // depending on r1, branch: - BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), - // branch A - BPF_JMP_A(4), - // branch B - BPF_MOV64_IMM(BPF_REG_0, 0x13371337), - // verifier follows fall-through - BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - // fake-dead code; targeted from branch A to - // prevent dead code sanitization, rejected - // via branch B however - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 invalid mem access 'scalar'", - .retval = 0, -}, -{ - "sanitation: alu with different scalars 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LD_MAP_FD(BPF_REG_ARG1, 0), - BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 
0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0x100000), - BPF_JMP_A(2), - BPF_MOV64_IMM(BPF_REG_2, 42), - BPF_MOV64_IMM(BPF_REG_3, 0x100001), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result = ACCEPT, - .retval = 0x100000, -}, -{ - "sanitation: alu with different scalars 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0), - BPF_EMIT_CALL(BPF_FUNC_map_delete_elem), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), - BPF_EMIT_CALL(BPF_FUNC_map_delete_elem), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_6), - BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_7), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_8), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result = ACCEPT, - .retval = -EINVAL * 2, -}, -{ - "sanitation: alu with different scalars 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, EINVAL), - BPF_ALU64_IMM(BPF_MUL, BPF_REG_0, -1), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_0, EINVAL), - BPF_ALU64_IMM(BPF_MUL, BPF_REG_0, -1), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_6), - BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_7), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_8), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = -EINVAL * 2, -}, -{ - "map access: value_ptr += known scalar, upper oob arith, test 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_IMM(BPF_REG_1, 48), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", - .retval = 1, -}, -{ - "map access: value_ptr += known scalar, upper oob arith, test 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_IMM(BPF_REG_1, 49), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", - .retval = 1, -}, -{ - "map access: value_ptr += known scalar, upper oob arith, test 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - 
BPF_MOV64_IMM(BPF_REG_1, 47), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr -= known scalar, lower oob arith, test 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_IMM(BPF_REG_1, 47), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 48), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R0 min value is outside of the allowed memory range", - .result_unpriv = REJECT, - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", -}, -{ - "map access: value_ptr -= known scalar, lower oob arith, test 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_IMM(BPF_REG_1, 47), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 48), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", - .retval = 1, -}, -{ - "map access: value_ptr -= known scalar, lower oob arith, test 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_IMM(BPF_REG_1, 47), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 47), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: known scalar += value_ptr", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr += known scalar, 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - 
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr += known scalar, 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 49), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "invalid access to map value", -}, -{ - "map access: value_ptr += known scalar, 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "invalid access to map value", -}, -{ - "map access: value_ptr += known scalar, 4", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_IMM(BPF_REG_1, 5), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, -2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr += known scalar, 5", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, (6 + 1) * sizeof(int)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 0xabcdef12, -}, -{ - "map access: value_ptr += known scalar, 6", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_IMM(BPF_REG_1, (3 + 1) * sizeof(int)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 3 * sizeof(int)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 0xabcdef12, -}, -{ - "map access: value_ptr += N, value_ptr -= N known 
scalar", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV32_IMM(BPF_REG_1, 0x12345678), - BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2), - BPF_MOV64_IMM(BPF_REG_1, 2), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 0x12345678, -}, -{ - "map access: unknown scalar += value_ptr, 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: unknown scalar += value_ptr, 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 0xabcdef12, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map access: unknown scalar += value_ptr, 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 pointer arithmetic of map value goes out of range", - .retval = 0xabcdef12, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map access: unknown scalar += value_ptr, 4", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_IMM(BPF_REG_1, 19), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R1 
max value is outside of the allowed memory range", - .errstr_unpriv = "R1 pointer arithmetic of map value goes out of range", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map access: value_ptr += unknown scalar, 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr += unknown scalar, 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 0xabcdef12, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map access: value_ptr += unknown scalar, 3", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 16), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf), - BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 1), - BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 2), - BPF_JMP_IMM(BPF_JA, 0, 0, -3), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: value_ptr += value_ptr", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R0 pointer += pointer prohibited", -}, -{ - "map access: known scalar -= value_ptr", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = 
"R1 tried to subtract pointer from scalar", -}, -{ - "map access: value_ptr -= known scalar", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R0 min value is outside of the allowed memory range", -}, -{ - "map access: value_ptr -= known scalar, 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_IMM(BPF_REG_1, 6), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_2), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "map access: unknown scalar -= value_ptr", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf), - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R1 tried to subtract pointer from scalar", -}, -{ - "map access: value_ptr -= unknown scalar", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R0 min value is negative", -}, -{ - "map access: value_ptr -= unknown scalar, 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf), - BPF_ALU64_IMM(BPF_OR, BPF_REG_1, 0x7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 pointer arithmetic of map value 
goes out of range", - .retval = 1, -}, -{ - "map access: value_ptr -= value_ptr", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_0), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = REJECT, - .errstr = "R0 invalid mem access 'scalar'", - .errstr_unpriv = "R0 pointer -= pointer prohibited", -}, -{ - "map access: trying to leak tainted dst reg", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF), - BPF_MOV32_REG(BPF_REG_1, BPF_REG_1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 4 }, - .result = REJECT, - .errstr = "math between map_value pointer and 4294967295 is not allowed", -}, -{ - "32bit pkt_ptr -= scalar", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 40), - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2), - BPF_ALU32_REG(BPF_MOV, BPF_REG_4, BPF_REG_7), - BPF_ALU32_REG(BPF_SUB, BPF_REG_6, BPF_REG_4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "32bit scalar -= pkt_ptr", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 40), - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2), - BPF_ALU32_REG(BPF_MOV, BPF_REG_4, BPF_REG_6), - BPF_ALU32_REG(BPF_SUB, BPF_REG_4, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -},