From patchwork Thu Feb 23 03:07:14 2023
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13149797
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
    void@manifault.com, davemarchevsky@meta.com, tj@kernel.org,
    memxor@gmail.com, netdev@vger.kernel.org, bpf@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH v2 bpf-next 1/4] bpf: Rename __kptr_ref -> __kptr and __kptr -> __kptr_untrusted.
Date: Wed, 22 Feb 2023 19:07:14 -0800
Message-Id: <20230223030717.58668-2-alexei.starovoitov@gmail.com>
In-Reply-To: <20230223030717.58668-1-alexei.starovoitov@gmail.com>
References: <20230223030717.58668-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

__kptr was meant to store PTR_UNTRUSTED kernel pointers inside bpf maps.
The concept felt useful, but didn't get much traction, since bpf_rdonly_cast()
was added soon after and bpf programs received a simpler way to access
PTR_UNTRUSTED kernel pointers without going through the restrictive __kptr usage.

Rename __kptr_ref -> __kptr and __kptr -> __kptr_untrusted to indicate their
intended usage. The main goal of __kptr_untrusted was to read/write such
pointers directly, while bpf_kptr_xchg was the mechanism for accessing
refcounted kernel pointers. The next patch will allow RCU-protected __kptr
access with direct reads. At that point __kptr_untrusted will be deprecated.

Signed-off-by: Alexei Starovoitov
---
 Documentation/bpf/bpf_design_QA.rst          |  4 ++--
 Documentation/bpf/cpumasks.rst               |  4 ++--
 Documentation/bpf/kfuncs.rst                 |  2 +-
 kernel/bpf/btf.c                             |  4 ++--
 tools/lib/bpf/bpf_helpers.h                  |  2 +-
 tools/testing/selftests/bpf/progs/cb_refs.c  |  2 +-
 .../selftests/bpf/progs/cgrp_kfunc_common.h  |  2 +-
 .../selftests/bpf/progs/cpumask_common.h     |  2 +-
 .../selftests/bpf/progs/jit_probe_mem.c      |  2 +-
 tools/testing/selftests/bpf/progs/lru_bug.c  |  2 +-
 tools/testing/selftests/bpf/progs/map_kptr.c |  4 ++--
 .../selftests/bpf/progs/map_kptr_fail.c      |  6 ++---
 .../selftests/bpf/progs/task_kfunc_common.h  |  2 +-
 tools/testing/selftests/bpf/test_verifier.c  | 22 +++++++++----------
 14 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst index bfff0e7e37c2..38372a956d65 100644 --- a/Documentation/bpf/bpf_design_QA.rst +++ b/Documentation/bpf/bpf_design_QA.rst @@ -314,7 +314,7 @@ Q: What is the compatibility story for special BPF types in map values? Q: Users are allowed to embed bpf_spin_lock, bpf_timer fields in their BPF map values (when using BTF support for BPF maps). This allows to use helpers for such objects on these fields inside map values. Users are also allowed to embed -pointers to some kernel types (with __kptr and __kptr_ref BTF tags). Will the +pointers to some kernel types (with __kptr_untrusted and __kptr BTF tags). Will the kernel preserve backwards compatibility for these features? A: It depends. For bpf_spin_lock, bpf_timer: YES, for kptr and everything else: @@ -324,7 +324,7 @@ For struct types that have been added already, like bpf_spin_lock and bpf_timer, the kernel will preserve backwards compatibility, as they are part of UAPI. For kptrs, they are also part of UAPI, but only with respect to the kptr -mechanism. The types that you can use with a __kptr and __kptr_ref tagged +mechanism. The types that you can use with a __kptr_untrusted and __kptr tagged pointer in your struct are NOT part of the UAPI contract. The supported types can and will change across kernel releases.
However, operations like accessing kptr fields and bpf_kptr_xchg() helper will continue to be supported across kernel diff --git a/Documentation/bpf/cpumasks.rst b/Documentation/bpf/cpumasks.rst index 24bef9cbbeee..75344cd230e5 100644 --- a/Documentation/bpf/cpumasks.rst +++ b/Documentation/bpf/cpumasks.rst @@ -51,7 +51,7 @@ A ``struct bpf_cpumask *`` is allocated, acquired, and released, using the .. code-block:: c struct cpumask_map_value { - struct bpf_cpumask __kptr_ref * cpumask; + struct bpf_cpumask __kptr * cpumask; }; struct array_map { @@ -128,7 +128,7 @@ a map, the reference can be removed from the map with bpf_kptr_xchg(), or /* struct containing the struct bpf_cpumask kptr which is stored in the map. */ struct cpumasks_kfunc_map_value { - struct bpf_cpumask __kptr_ref * bpf_cpumask; + struct bpf_cpumask __kptr * bpf_cpumask; }; /* The map containing struct cpumasks_kfunc_map_value entries. */ diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index ca96ef3f6896..d085594eae19 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -527,7 +527,7 @@ You may also acquire a reference to a ``struct cgroup`` kptr that's already /* struct containing the struct task_struct kptr which is actually stored in the map. */ struct __cgroups_kfunc_map_value { - struct cgroup __kptr_ref * cgroup; + struct cgroup __kptr * cgroup; }; /* The map containing struct __cgroups_kfunc_map_value entries. */ diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index fa22ec79ac0e..01dee7d48e6d 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -3283,9 +3283,9 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t, /* Reject extra tags */ if (btf_type_is_type_tag(btf_type_by_id(btf, t->type))) return -EINVAL; - if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off))) + if (!strcmp("kptr_untrusted", __btf_name_by_offset(btf, t->name_off))) type = BPF_KPTR_UNREF; - else if (!strcmp("kptr_ref", __btf_name_by_offset(btf, t->name_off))) + else if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off))) type = BPF_KPTR_REF; else return -EINVAL; diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h index 5ec1871acb2f..7d12d3e620cc 100644 --- a/tools/lib/bpf/bpf_helpers.h +++ b/tools/lib/bpf/bpf_helpers.h @@ -174,8 +174,8 @@ enum libbpf_tristate { #define __kconfig __attribute__((section(".kconfig"))) #define __ksym __attribute__((section(".ksyms"))) +#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted"))) #define __kptr __attribute__((btf_type_tag("kptr"))) -#define __kptr_ref __attribute__((btf_type_tag("kptr_ref"))) #ifndef ___bpf_concat #define ___bpf_concat(a, b) a ## b diff --git a/tools/testing/selftests/bpf/progs/cb_refs.c b/tools/testing/selftests/bpf/progs/cb_refs.c index 7653df1bc787..ce96b33e38d6 100644 --- a/tools/testing/selftests/bpf/progs/cb_refs.c +++ b/tools/testing/selftests/bpf/progs/cb_refs.c @@ -4,7 +4,7 @@ #include struct map_value { - struct prog_test_ref_kfunc __kptr_ref *ptr; + struct prog_test_ref_kfunc __kptr *ptr; }; struct { diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h b/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h index 7d30855bfe78..50d8660ffa26 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h @@ -10,7 +10,7 @@ #include struct __cgrps_kfunc_map_value { - struct cgroup __kptr_ref * cgrp; + struct cgroup __kptr * cgrp; }; struct hash_map { diff --git 
a/tools/testing/selftests/bpf/progs/cpumask_common.h b/tools/testing/selftests/bpf/progs/cpumask_common.h index ad34f3b602be..65e5496ca1b2 100644 --- a/tools/testing/selftests/bpf/progs/cpumask_common.h +++ b/tools/testing/selftests/bpf/progs/cpumask_common.h @@ -10,7 +10,7 @@ int err; struct __cpumask_map_value { - struct bpf_cpumask __kptr_ref * cpumask; + struct bpf_cpumask __kptr * cpumask; }; struct array_map { diff --git a/tools/testing/selftests/bpf/progs/jit_probe_mem.c b/tools/testing/selftests/bpf/progs/jit_probe_mem.c index 2d2e61470794..13f00ca2ed0a 100644 --- a/tools/testing/selftests/bpf/progs/jit_probe_mem.c +++ b/tools/testing/selftests/bpf/progs/jit_probe_mem.c @@ -4,7 +4,7 @@ #include #include -static struct prog_test_ref_kfunc __kptr_ref *v; +static struct prog_test_ref_kfunc __kptr *v; long total_sum = -1; extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym; diff --git a/tools/testing/selftests/bpf/progs/lru_bug.c b/tools/testing/selftests/bpf/progs/lru_bug.c index 687081a724b3..ad73029cb1e3 100644 --- a/tools/testing/selftests/bpf/progs/lru_bug.c +++ b/tools/testing/selftests/bpf/progs/lru_bug.c @@ -4,7 +4,7 @@ #include struct map_value { - struct task_struct __kptr *ptr; + struct task_struct __kptr_untrusted *ptr; }; struct { diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c index 228ec45365a8..4a7da6cb5800 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr.c +++ b/tools/testing/selftests/bpf/progs/map_kptr.c @@ -4,8 +4,8 @@ #include struct map_value { - struct prog_test_ref_kfunc __kptr *unref_ptr; - struct prog_test_ref_kfunc __kptr_ref *ref_ptr; + struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr; + struct prog_test_ref_kfunc __kptr *ref_ptr; }; struct array_map { diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c index 760e41e1a632..e19e2a5f38cf 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c +++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c @@ -7,9 +7,9 @@ struct map_value { char buf[8]; - struct prog_test_ref_kfunc __kptr *unref_ptr; - struct prog_test_ref_kfunc __kptr_ref *ref_ptr; - struct prog_test_member __kptr_ref *ref_memb_ptr; + struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr; + struct prog_test_ref_kfunc __kptr *ref_ptr; + struct prog_test_member __kptr *ref_memb_ptr; }; struct array_map { diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h index c0ffd171743e..4c2a4b0e3a25 100644 --- a/tools/testing/selftests/bpf/progs/task_kfunc_common.h +++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h @@ -10,7 +10,7 @@ #include struct __tasks_kfunc_map_value { - struct task_struct __kptr_ref * task; + struct task_struct __kptr * task; }; struct hash_map { diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index 8b9949bb833d..49a70d9beb0b 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -699,13 +699,13 @@ static int create_cgroup_storage(bool percpu) * struct bpf_timer t; * }; * struct btf_ptr { + * struct prog_test_ref_kfunc __kptr_untrusted *ptr; * struct prog_test_ref_kfunc __kptr *ptr; - * struct prog_test_ref_kfunc __kptr_ref *ptr; - * struct prog_test_member __kptr_ref *ptr; + * struct prog_test_member __kptr *ptr; * } */ static const char btf_str_sec[] = 
"\0bpf_spin_lock\0val\0cnt\0l\0bpf_timer\0timer\0t" - "\0btf_ptr\0prog_test_ref_kfunc\0ptr\0kptr\0kptr_ref" + "\0btf_ptr\0prog_test_ref_kfunc\0ptr\0kptr\0kptr_untrusted" "\0prog_test_member"; static __u32 btf_raw_types[] = { /* int */ @@ -724,20 +724,20 @@ static __u32 btf_raw_types[] = { BTF_MEMBER_ENC(41, 4, 0), /* struct bpf_timer t; */ /* struct prog_test_ref_kfunc */ /* [6] */ BTF_STRUCT_ENC(51, 0, 0), - BTF_STRUCT_ENC(89, 0, 0), /* [7] */ + BTF_STRUCT_ENC(95, 0, 0), /* [7] */ + /* type tag "kptr_untrusted" */ + BTF_TYPE_TAG_ENC(80, 6), /* [8] */ /* type tag "kptr" */ - BTF_TYPE_TAG_ENC(75, 6), /* [8] */ - /* type tag "kptr_ref" */ - BTF_TYPE_TAG_ENC(80, 6), /* [9] */ - BTF_TYPE_TAG_ENC(80, 7), /* [10] */ + BTF_TYPE_TAG_ENC(75, 6), /* [9] */ + BTF_TYPE_TAG_ENC(75, 7), /* [10] */ BTF_PTR_ENC(8), /* [11] */ BTF_PTR_ENC(9), /* [12] */ BTF_PTR_ENC(10), /* [13] */ /* struct btf_ptr */ /* [14] */ BTF_STRUCT_ENC(43, 3, 24), - BTF_MEMBER_ENC(71, 11, 0), /* struct prog_test_ref_kfunc __kptr *ptr; */ - BTF_MEMBER_ENC(71, 12, 64), /* struct prog_test_ref_kfunc __kptr_ref *ptr; */ - BTF_MEMBER_ENC(71, 13, 128), /* struct prog_test_member __kptr_ref *ptr; */ + BTF_MEMBER_ENC(71, 11, 0), /* struct prog_test_ref_kfunc __kptr_untrusted *ptr; */ + BTF_MEMBER_ENC(71, 12, 64), /* struct prog_test_ref_kfunc __kptr *ptr; */ + BTF_MEMBER_ENC(71, 13, 128), /* struct prog_test_member __kptr *ptr; */ }; static char bpf_vlog[UINT_MAX >> 8]; From patchwork Thu Feb 23 03:07:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 13149798 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8BA0C64ED6 for ; Thu, 23 Feb 2023 03:07:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233720AbjBWDHu (ORCPT ); Wed, 22 Feb 2023 22:07:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53710 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233708AbjBWDHl (ORCPT ); Wed, 22 Feb 2023 22:07:41 -0500 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 971B548E27; Wed, 22 Feb 2023 19:07:30 -0800 (PST) Received: by mail-pl1-x62c.google.com with SMTP id u14so7349969ple.7; Wed, 22 Feb 2023 19:07:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LxIoFt7I8yVjq/M6ku17cKIo5Gzn0LxFg4YBhjXYa2Q=; b=mBrpZumQytDJYXQNmOB/1ifHeAnxcXQEiVdzYM+9DVxzISQZGegthvJtwTd+fhg7Ax mcEp534CNJ/xXgq1PGMvt/04J+8yyq/paOzj5IuC70/2ywuHKETSV84JLoboI9VIX//l XQGtqV8KLlyLL8fAzVs2ncSMvLvyeHa9kGwlTNTXpQOHomo5SUXyzqOS2N7lBg/59rGx ynuHRi3T9J5GcR3KjcdmOcA2SjPVyfLSL4rt/XVkldJ1cuu6qQdVb+/LeQ72vxntS5Jy elI60mItJ0dm8x+GfIRuNcXMw2qJ5hoatAm0mP7RUgC9sn4nyuI6qDFO+7SYfdc6D6+r xrOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LxIoFt7I8yVjq/M6ku17cKIo5Gzn0LxFg4YBhjXYa2Q=; 
From patchwork Thu Feb 23 03:07:15 2023
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13149798
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
    void@manifault.com, davemarchevsky@meta.com, tj@kernel.org,
    memxor@gmail.com, netdev@vger.kernel.org, bpf@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH v2 bpf-next 2/4] bpf: Introduce kptr_rcu.
Date: Wed, 22 Feb 2023 19:07:15 -0800
Message-Id: <20230223030717.58668-3-alexei.starovoitov@gmail.com>
In-Reply-To: <20230223030717.58668-1-alexei.starovoitov@gmail.com>
References: <20230223030717.58668-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

The lifetime of certain kernel structures, like 'struct cgroup', is protected
by RCU. Hence it's safe to dereference them directly from kptr_rcu tagged
pointers in bpf maps. The resulting pointer is MEM_RCU and can be passed to
kfuncs that expect KF_RCU. Dereferencing other kptrs returns PTR_UNTRUSTED.

For example:

  struct map_value {
     struct cgroup __kptr_rcu *cgrp;
  };

  SEC("tp_btf/cgroup_mkdir")
  int BPF_PROG(test_cgrp_get_ancestors, struct cgroup *cgrp_arg, const char *path)
  {
     struct cgroup *cg, *cg2;

     cg = bpf_cgroup_acquire(cgrp_arg); // cg is PTR_TRUSTED and ref_obj_id > 0
     bpf_kptr_xchg(&v->cgrp, cg);

     cg2 = v->cgrp; // This is the new feature introduced by this patch.
                    // cg2 is PTR_MAYBE_NULL | MEM_RCU.
                    // When cg2 != NULL, it's a valid cgroup, but its percpu_ref could be zero

     bpf_cgroup_ancestor(cg2, level); // safe to do.
  }

Signed-off-by: Alexei Starovoitov
---
 Documentation/bpf/kfuncs.rst                 | 11 ++++---
 include/linux/bpf.h                          | 15 ++++++---
 include/linux/btf.h                          |  2 +-
 kernel/bpf/btf.c                             | 22 ++++++++++++-
 kernel/bpf/helpers.c                         |  7 +++--
 kernel/bpf/syscall.c                         |  4 +++
 kernel/bpf/verifier.c                        | 33 +++++++++++++-------
 net/bpf/test_run.c                           |  3 +-
 tools/lib/bpf/bpf_helpers.h                  |  1 +
 tools/testing/selftests/bpf/verifier/calls.c |  2 +-
 10 files changed, 72 insertions(+), 28 deletions(-)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index d085594eae19..b76b3a699f96 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -232,11 +232,12 @@ added later. 2.4.8 KF_RCU flag ----------------- -The KF_RCU flag is used for kfuncs which have a rcu ptr as its argument. -When used together with KF_ACQUIRE, it indicates the kfunc should have a -single argument which must be a trusted argument or a MEM_RCU pointer.
-The argument may have reference count of 0 and the kfunc must take this -into consideration. +The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. The kfuncs marked with +KF_RCU expect either PTR_TRUSTED or MEM_RCU arguments. The verifier guarantees +that the objects are valid and there is no use-after-free, but the pointers +maybe NULL and pointee object's reference count could have reached zero, hence +kfuncs must do != NULL check and consider refcnt==0 case when accessing such +arguments. .. _KF_deprecated_flag: diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 520b238abd5a..c6098b5d8e77 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -178,11 +178,12 @@ enum btf_field_type { BPF_TIMER = (1 << 1), BPF_KPTR_UNREF = (1 << 2), BPF_KPTR_REF = (1 << 3), - BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF, - BPF_LIST_HEAD = (1 << 4), - BPF_LIST_NODE = (1 << 5), - BPF_RB_ROOT = (1 << 6), - BPF_RB_NODE = (1 << 7), + BPF_KPTR_RCU = (1 << 4), + BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF | BPF_KPTR_RCU, + BPF_LIST_HEAD = (1 << 5), + BPF_LIST_NODE = (1 << 6), + BPF_RB_ROOT = (1 << 7), + BPF_RB_NODE = (1 << 8), BPF_GRAPH_NODE_OR_ROOT = BPF_LIST_NODE | BPF_LIST_HEAD | BPF_RB_NODE | BPF_RB_ROOT, }; @@ -284,6 +285,8 @@ static inline const char *btf_field_type_name(enum btf_field_type type) case BPF_KPTR_UNREF: case BPF_KPTR_REF: return "kptr"; + case BPF_KPTR_RCU: + return "kptr_rcu"; case BPF_LIST_HEAD: return "bpf_list_head"; case BPF_LIST_NODE: @@ -307,6 +310,7 @@ static inline u32 btf_field_type_size(enum btf_field_type type) return sizeof(struct bpf_timer); case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: return sizeof(u64); case BPF_LIST_HEAD: return sizeof(struct bpf_list_head); @@ -331,6 +335,7 @@ static inline u32 btf_field_type_align(enum btf_field_type type) return __alignof__(struct bpf_timer); case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: return __alignof__(u64); case BPF_LIST_HEAD: return __alignof__(struct bpf_list_head); diff --git a/include/linux/btf.h b/include/linux/btf.h index 49e0fe6d8274..556b3e2e7471 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -70,7 +70,7 @@ #define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */ #define KF_SLEEPABLE (1 << 5) /* kfunc may sleep */ #define KF_DESTRUCTIVE (1 << 6) /* kfunc performs destructive actions */ -#define KF_RCU (1 << 7) /* kfunc only takes rcu pointer arguments */ +#define KF_RCU (1 << 7) /* kfunc takes either rcu or trusted pointer arguments */ /* * Tag marking a kernel function as a kfunc. This is meant to minimize the diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 01dee7d48e6d..1428d7b15c1c 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -3287,6 +3287,8 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t, type = BPF_KPTR_UNREF; else if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off))) type = BPF_KPTR_REF; + else if (!strcmp("kptr_rcu", __btf_name_by_offset(btf, t->name_off))) + type = BPF_KPTR_RCU; else return -EINVAL; @@ -3449,6 +3451,7 @@ static int btf_find_struct_field(const struct btf *btf, break; case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: ret = btf_find_kptr(btf, member_type, off, sz, idx < info_cnt ? &info[idx] : &tmp); if (ret < 0) @@ -3514,6 +3517,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t, break; case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: ret = btf_find_kptr(btf, var_type, off, sz, idx < info_cnt ? 
&info[idx] : &tmp); if (ret < 0) @@ -3552,6 +3556,18 @@ static int btf_find_field(const struct btf *btf, const struct btf_type *t, return -EINVAL; } +BTF_SET_START(rcu_protected_types) +BTF_ID(struct, prog_test_ref_kfunc) +BTF_ID(struct, cgroup) +BTF_SET_END(rcu_protected_types) + +static bool rcu_protected_object(const struct btf *btf, u32 btf_id) +{ + if (!btf_is_kernel(btf)) + return false; + return btf_id_set_contains(&rcu_protected_types, btf_id); +} + static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, struct btf_field_info *info) { @@ -3570,10 +3586,13 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, if (id < 0) return id; + if (info->type == BPF_KPTR_RCU && !rcu_protected_object(kernel_btf, id)) + return -EINVAL; + /* Find and stash the function pointer for the destruction function that * needs to be eventually invoked from the map free path. */ - if (info->type == BPF_KPTR_REF) { + if (info->type == BPF_KPTR_REF || info->type == BPF_KPTR_RCU) { const struct btf_type *dtor_func; const char *dtor_func_name; unsigned long addr; @@ -3737,6 +3756,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type break; case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: ret = btf_parse_kptr(btf, &rec->fields[i], &info_arr[i]); if (ret < 0) goto end; diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 5b278a38ae58..58d01560b665 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -2094,11 +2094,12 @@ __bpf_kfunc struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) { struct cgroup *ancestor; - if (level > cgrp->level || level < 0) + if (!cgrp || level > cgrp->level || level < 0) return NULL; ancestor = cgrp->ancestors[level]; - cgroup_get(ancestor); + if (!cgroup_tryget(ancestor)) + return NULL; return ancestor; } #endif /* CONFIG_CGROUPS */ @@ -2166,7 +2167,7 @@ BTF_ID_FLAGS(func, bpf_rbtree_first, KF_RET_NULL) BTF_ID_FLAGS(func, bpf_cgroup_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS) BTF_ID_FLAGS(func, bpf_cgroup_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_cgroup_release, KF_RELEASE) -BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL) #endif BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL) BTF_SET8_END(generic_btf_ids) diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index e3fcdc9836a6..2e730918911c 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -539,6 +539,7 @@ void btf_record_free(struct btf_record *rec) switch (rec->fields[i].type) { case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: if (rec->fields[i].kptr.module) module_put(rec->fields[i].kptr.module); btf_put(rec->fields[i].kptr.btf); @@ -584,6 +585,7 @@ struct btf_record *btf_record_dup(const struct btf_record *rec) switch (fields[i].type) { case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: btf_get(fields[i].kptr.btf); if (fields[i].kptr.module && !try_module_get(fields[i].kptr.module)) { ret = -ENXIO; @@ -669,6 +671,7 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj) WRITE_ONCE(*(u64 *)field_ptr, 0); break; case BPF_KPTR_REF: + case BPF_KPTR_RCU: field->kptr.dtor((void *)xchg((unsigned long *)field_ptr, 0)); break; case BPF_LIST_HEAD: @@ -1058,6 +1061,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, break; case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: if (map->map_type 
!= BPF_MAP_TYPE_HASH && map->map_type != BPF_MAP_TYPE_LRU_HASH && map->map_type != BPF_MAP_TYPE_ARRAY && diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 5cb8b623f639..401ff0d74de7 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4183,7 +4183,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno) { const char *targ_name = kernel_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id); - int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED; + int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED | MEM_RCU; const char *reg_name = ""; /* Only unreferenced case accepts untrusted pointers */ @@ -4230,12 +4230,12 @@ static int map_kptr_match_type(struct bpf_verifier_env *env, * In the kptr_ref case, check_func_arg_reg_off already ensures reg->off * is zero. We must also ensure that btf_struct_ids_match does not walk * the struct to match type against first member of struct, i.e. reject - * second case from above. Hence, when type is BPF_KPTR_REF, we set + * second case from above. Hence, when type is BPF_KPTR_REF | BPF_KPTR_RCU, we set * strict mode to true for type match. */ if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, reg->off, kptr_field->kptr.btf, kptr_field->kptr.btf_id, - kptr_field->type == BPF_KPTR_REF)) + kptr_field->type == BPF_KPTR_REF || kptr_field->type == BPF_KPTR_RCU)) goto bad_type; return 0; bad_type: @@ -4250,6 +4250,14 @@ static int map_kptr_match_type(struct bpf_verifier_env *env, return -EINVAL; } +/* The non-sleepable programs and sleepable programs with explicit bpf_rcu_read_lock() + * can dereference RCU protected pointers and result is PTR_TRUSTED. + */ +static bool in_rcu_cs(struct bpf_verifier_env *env) +{ + return env->cur_state->active_rcu_lock || !env->prog->aux->sleepable; +} + static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno, int value_regno, int insn_idx, struct btf_field *kptr_field) @@ -4273,7 +4281,7 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno, /* We only allow loading referenced kptr, since it will be marked as * untrusted, similar to unreferenced kptr. */ - if (class != BPF_LDX && kptr_field->type == BPF_KPTR_REF) { + if (class != BPF_LDX && kptr_field->type != BPF_KPTR_UNREF) { verbose(env, "store to referenced kptr disallowed\n"); return -EACCES; } @@ -4284,7 +4292,10 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno, * value from map as PTR_TO_BTF_ID, with the correct type. */ mark_btf_ld_reg(env, cur_regs(env), value_regno, PTR_TO_BTF_ID, kptr_field->kptr.btf, - kptr_field->kptr.btf_id, PTR_MAYBE_NULL | PTR_UNTRUSTED); + kptr_field->kptr.btf_id, + kptr_field->type == BPF_KPTR_RCU && in_rcu_cs(env) ? + PTR_MAYBE_NULL | MEM_RCU : + PTR_MAYBE_NULL | PTR_UNTRUSTED); /* For mark_ptr_or_null_reg */ val_reg->id = ++env->id_gen; } else if (class == BPF_STX) { @@ -4338,6 +4349,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno, switch (field->type) { case BPF_KPTR_UNREF: case BPF_KPTR_REF: + case BPF_KPTR_RCU: if (src != ACCESS_DIRECT) { verbose(env, "kptr cannot be accessed indirectly by helper\n"); return -EACCES; @@ -5134,11 +5146,10 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, * read lock region. Also mark rcu pointer as PTR_MAYBE_NULL since * it could be null in some cases. 
*/ - if (!env->cur_state->active_rcu_lock || - !(is_trusted_reg(reg) || is_rcu_reg(reg))) - flag &= ~MEM_RCU; - else + if (in_rcu_cs(env) && (is_trusted_reg(reg) || is_rcu_reg(reg))) flag |= PTR_MAYBE_NULL; + else + flag &= ~MEM_RCU; } else if (reg->type & MEM_RCU) { /* ptr (reg) is marked as MEM_RCU, but the struct field is not tagged * with __rcu. Mark the flag as PTR_UNTRUSTED conservatively. @@ -6182,7 +6193,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno, verbose(env, "off=%d doesn't point to kptr\n", kptr_off); return -EACCES; } - if (kptr_field->type != BPF_KPTR_REF) { + if (kptr_field->type != BPF_KPTR_REF && kptr_field->type != BPF_KPTR_RCU) { verbose(env, "off=%d kptr isn't referenced kptr\n", kptr_off); return -EACCES; } @@ -9106,7 +9117,7 @@ static int process_kf_arg_ptr_to_kptr(struct bpf_verifier_env *env, } kptr_field = btf_record_find(reg->map_ptr->record, reg->off + reg->var_off.value, BPF_KPTR); - if (!kptr_field || kptr_field->type != BPF_KPTR_REF) { + if (!kptr_field || (kptr_field->type != BPF_KPTR_REF && kptr_field->type != BPF_KPTR_RCU)) { verbose(env, "arg#0 no referenced kptr at map value offset=%llu\n", reg->off + reg->var_off.value); return -EINVAL;
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 6f3d654b3339..73e5029ab5c9 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -737,6 +737,7 @@ __bpf_kfunc void bpf_kfunc_call_test_mem_len_fail2(u64 *mem, int len) __bpf_kfunc void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) { + /* p could be NULL and p->cnt could be 0 */ } __bpf_kfunc void bpf_kfunc_call_test_destructive(void) @@ -784,7 +785,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_fail3) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2) -BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS | KF_RCU) BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE) BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg) BTF_SET8_END(test_sk_check_kfunc_ids)
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h index 7d12d3e620cc..affc0997f937 100644 --- a/tools/lib/bpf/bpf_helpers.h +++ b/tools/lib/bpf/bpf_helpers.h @@ -176,6 +176,7 @@ enum libbpf_tristate { #define __ksym __attribute__((section(".ksyms"))) #define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted"))) #define __kptr __attribute__((btf_type_tag("kptr"))) +#define __kptr_rcu __attribute__((btf_type_tag("kptr_rcu"))) #ifndef ___bpf_concat #define ___bpf_concat(a, b) a ## b
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c index 289ed202ec66..9a326a800e5c 100644 --- a/tools/testing/selftests/bpf/verifier/calls.c +++ b/tools/testing/selftests/bpf/verifier/calls.c @@ -243,7 +243,7 @@ }, .result_unpriv = REJECT, .result = REJECT, - .errstr = "R1 must be referenced", + .errstr = "R1 must be", }, { "calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
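The commit message above shows the non-sleepable case, where the whole program implicitly runs inside an RCU critical section. For the other half of in_rcu_cs() -- a sleepable program with an explicit bpf_rcu_read_lock() section -- the access would look roughly like the sketch below. This is illustration only, not code from the series: the map, program name and attach point are hypothetical, and it assumes the bpf_rcu_read_lock()/bpf_rcu_read_unlock() kfuncs and the cgroup kfuncs are available to the chosen program type:

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct map_value {
          struct cgroup __kptr_rcu *cgrp;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct map_value);
  } cgrp_map SEC(".maps");

  void bpf_rcu_read_lock(void) __ksym;
  void bpf_rcu_read_unlock(void) __ksym;
  struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) __ksym;
  void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

  SEC("fentry.s/bpf_fentry_test1")        /* sleepable program */
  int BPF_PROG(read_rcu_cgrp, int a)
  {
          struct cgroup *cg, *root;
          struct map_value *v;
          int key = 0;

          v = bpf_map_lookup_elem(&cgrp_map, &key);
          if (!v)
                  return 0;

          bpf_rcu_read_lock();
          /* Inside the explicit RCU section this load is MEM_RCU | PTR_MAYBE_NULL;
           * outside of it the same load would only be PTR_UNTRUSTED.
           */
          cg = v->cgrp;
          if (cg) {
                  root = bpf_cgroup_ancestor(cg, 0);  /* KF_RCU kfunc per this patch */
                  if (root)
                          bpf_cgroup_release(root);
          }
          bpf_rcu_read_unlock();
          return 0;
  }

  char _license[] SEC("license") = "GPL";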
From patchwork Thu Feb 23 03:07:16 2023
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13149796
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
    void@manifault.com, davemarchevsky@meta.com, tj@kernel.org,
    memxor@gmail.com, netdev@vger.kernel.org, bpf@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH v2 bpf-next 3/4] selftests/bpf: Add a test case for kptr_rcu.
Date: Wed, 22 Feb 2023 19:07:16 -0800
Message-Id: <20230223030717.58668-4-alexei.starovoitov@gmail.com>
In-Reply-To: <20230223030717.58668-1-alexei.starovoitov@gmail.com>
References: <20230223030717.58668-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Tweak existing map_kptr test to check kptr_rcu.
Signed-off-by: Alexei Starovoitov
---
 tools/testing/selftests/bpf/progs/map_kptr.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c index 4a7da6cb5800..539659ed75c5 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr.c +++ b/tools/testing/selftests/bpf/progs/map_kptr.c @@ -5,7 +5,7 @@ struct map_value { struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr; - struct prog_test_ref_kfunc __kptr *ref_ptr; + struct prog_test_ref_kfunc __kptr_rcu *ref_ptr; }; struct array_map { @@ -61,6 +61,7 @@ extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp extern struct prog_test_ref_kfunc * bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym; extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym; +void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) __ksym; #define WRITE_ONCE(x, val) ((*(volatile typeof(x) *) &(x)) = (val)) @@ -90,12 +91,23 @@ static void test_kptr_ref(struct map_value *v) WRITE_ONCE(v->unref_ptr, p); if (!p) return; + /* + * p is rcu_ptr_prog_test_ref_kfunc, + * because bpf prog is non-sleepable and runs in RCU CS. + * p can be passed to kfunc that requires KF_RCU. + */ + bpf_kfunc_call_test_ref(p); if (p->a + p->b > 100) return; /* store NULL */ p = bpf_kptr_xchg(&v->ref_ptr, NULL); if (!p) return; + /* + * p is trusted_ptr_prog_test_ref_kfunc. + * p can be passed to kfunc that requires KF_RCU. + */ + bpf_kfunc_call_test_ref(p); if (p->a + p->b > 100) { bpf_kfunc_call_test_release(p); return; @@ -288,6 +300,8 @@ int test_map_kptr_ref2(struct __sk_buff *ctx) if (p_st->cnt.refs.counter != 2) return 6; + /* p_st is MEM_RCU, because we're in RCU CS */ + bpf_kfunc_call_test_ref(p_st); return 0; }
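A note on why the test above can tag prog_test_ref_kfunc with __kptr_rcu: btf_parse_kptr() in the previous patch only accepts the kptr_rcu tag for types in the rcu_protected_types BTF set, which currently contains struct prog_test_ref_kfunc and struct cgroup. A hedged illustration with hypothetical map value structs (assuming the usual vmlinux.h and bpf_helpers.h includes; not code from the series):

  /* Accepted: struct cgroup is listed in rcu_protected_types. */
  struct ok_value {
          struct cgroup __kptr_rcu *cgrp;
  };

  /* Rejected at map creation with -EINVAL: struct task_struct is not in
   * rcu_protected_types in this series, so it cannot be tagged __kptr_rcu.
   * A plain __kptr (refcounted) field still works for it.
   */
  struct bad_value {
          struct task_struct __kptr_rcu *task;
  };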
From patchwork Thu Feb 23 03:07:17 2023
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13149799
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
    void@manifault.com, davemarchevsky@meta.com, tj@kernel.org,
    memxor@gmail.com, netdev@vger.kernel.org, bpf@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH v2 bpf-next 4/4] selftests/bpf: Tweak cgroup kfunc test.
Date: Wed, 22 Feb 2023 19:07:17 -0800
Message-Id: <20230223030717.58668-5-alexei.starovoitov@gmail.com>
In-Reply-To: <20230223030717.58668-1-alexei.starovoitov@gmail.com>
References: <20230223030717.58668-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Adjust cgroup kfunc test to dereference RCU protected cgroup pointer as PTR_TRUSTED and pass into KF_TRUSTED_ARGS kfunc.
Signed-off-by: Alexei Starovoitov --- tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h | 2 +- tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c | 2 +- tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c | 7 ++++++- 3 files changed, 8 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h b/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h index 50d8660ffa26..eb5bf3125816 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h @@ -10,7 +10,7 @@ #include struct __cgrps_kfunc_map_value { - struct cgroup __kptr * cgrp; + struct cgroup __kptr_rcu * cgrp; }; struct hash_map { diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c index 4ad7fe24966d..d5a53b5e708f 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c @@ -205,7 +205,7 @@ int BPF_PROG(cgrp_kfunc_get_unreleased, struct cgroup *cgrp, const char *path) } SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 is untrusted_ptr_or_null_ expected ptr_ or socket") +__failure __msg("bpf_cgroup_release expects refcounted") int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path) { struct __cgrps_kfunc_map_value *v; diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c index 0c23ea32df9f..37ed73186fba 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c @@ -61,7 +61,7 @@ int BPF_PROG(test_cgrp_acquire_leave_in_map, struct cgroup *cgrp, const char *pa SEC("tp_btf/cgroup_mkdir") int BPF_PROG(test_cgrp_xchg_release, struct cgroup *cgrp, const char *path) { - struct cgroup *kptr; + struct cgroup *kptr, *cg; struct __cgrps_kfunc_map_value *v; long status; @@ -80,6 +80,11 @@ int BPF_PROG(test_cgrp_xchg_release, struct cgroup *cgrp, const char *path) return 0; } + kptr = v->cgrp; + cg = bpf_cgroup_ancestor(kptr, 1); + if (cg) + bpf_cgroup_release(cg); + kptr = bpf_kptr_xchg(&v->cgrp, NULL); if (!kptr) { err = 3;