From patchwork Sun Apr 16 08:49:26 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 13212736
X-Patchwork-Delegate: bpf@iogearbox.net
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, memxor@gmail.com
Subject: [PATCH bpf-next v2 1/3] bpf: Remove bpf_kfunc_call_test_kptr_get() test kfunc
Date: Sun, 16 Apr 2023 03:49:26 -0500
Message-Id: <20230416084928.326135-2-void@manifault.com>
In-Reply-To: <20230416084928.326135-1-void@manifault.com>
References: <20230416084928.326135-1-void@manifault.com>

We've managed to improve the UX for kptrs significantly over the last
9 months. All of the prior main use cases (struct bpf_cpumask *,
struct task_struct *, and struct cgroup *) have been updated to be
synchronized mainly using RCU. In other words, their KF_ACQUIRE kfuncs
are now KF_RCU, and the pointers themselves are MEM_RCU and can be
accessed in an RCU read region in BPF.

In a follow-on change, we'll be removing the KF_KPTR_GET kfunc flag.
This patch prepares for that by removing the
bpf_kfunc_call_test_kptr_get() kfunc and all associated selftests.

Signed-off-by: David Vernet
---
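Note for reviewers: the pattern that replaces kptr_get-style lookups is the
RCU flow described above. The snippet below is only a sketch of that usage,
not code from this series; the map, value layout, program, and section name
are illustrative, and it assumes the task and RCU read-lock kfuncs exposed by
the kernel:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

/* Hypothetical map value stashing a referenced task kptr. */
struct map_value {
	struct task_struct __kptr *task;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct map_value);
} tasks SEC(".maps");

extern struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
extern void bpf_task_release(struct task_struct *p) __ksym;
extern void bpf_rcu_read_lock(void) __ksym;
extern void bpf_rcu_read_unlock(void) __ksym;

SEC("tc")
int read_stashed_task(struct __sk_buff *ctx)
{
	struct task_struct *t, *acquired = NULL;
	struct map_value *v;
	int key = 0;

	v = bpf_map_lookup_elem(&tasks, &key);
	if (!v)
		return 0;

	/* The loaded kptr is only valid inside the RCU read region; take a
	 * full reference if it must be used after bpf_rcu_read_unlock().
	 */
	bpf_rcu_read_lock();
	t = v->task;
	if (t)
		acquired = bpf_task_acquire(t);
	bpf_rcu_read_unlock();

	if (acquired)
		bpf_task_release(acquired);
	return 0;
}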
 net/bpf/test_run.c                            | 12 ---
 tools/testing/selftests/bpf/progs/map_kptr.c  | 40 ++--------
 .../selftests/bpf/progs/map_kptr_fail.c       | 78 -------------------
 .../testing/selftests/bpf/verifier/map_kptr.c | 27 -------
 4 files changed, 5 insertions(+), 152 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 0b9bd9b39990..f170e8a17974 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -679,17 +679,6 @@ __bpf_kfunc void bpf_kfunc_call_int_mem_release(int *p)
 {
 }
 
-__bpf_kfunc struct prog_test_ref_kfunc *
-bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b)
-{
-	struct prog_test_ref_kfunc *p = READ_ONCE(*pp);
-
-	if (!p)
-		return NULL;
-	refcount_inc(&p->cnt);
-	return p;
-}
-
 struct prog_test_pass1 {
 	int x0;
 	struct {
@@ -804,7 +793,6 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_get_rdwr_mem, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_get_rdonly_mem, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_acq_rdonly_mem, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_int_mem_release, KF_RELEASE)
-BTF_ID_FLAGS(func, bpf_kfunc_call_test_kptr_get, KF_ACQUIRE | KF_RET_NULL | KF_KPTR_GET)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass_ctx)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass2)
diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c
index dae5dab1bbf7..d7150041e5d1 100644
--- a/tools/testing/selftests/bpf/progs/map_kptr.c
+++ b/tools/testing/selftests/bpf/progs/map_kptr.c
@@ -115,8 +115,6 @@ DEFINE_MAP_OF_MAP(BPF_MAP_TYPE_HASH_OF_MAPS, hash_malloc_map, hash_of_hash_mallo
 DEFINE_MAP_OF_MAP(BPF_MAP_TYPE_HASH_OF_MAPS, lru_hash_map, hash_of_lru_hash_maps);
 
 extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
-extern struct prog_test_ref_kfunc *
-bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym;
 extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
 void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) __ksym;
 
@@ -187,25 +185,10 @@ static void test_kptr_ref(struct map_value *v)
 	bpf_kfunc_call_test_release(p);
 }
 
-static void test_kptr_get(struct map_value *v)
-{
-	struct prog_test_ref_kfunc *p;
-
-	p = bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
-	if (!p)
-		return;
-	if (p->a + p->b > 100) {
-		bpf_kfunc_call_test_release(p);
-		return;
-	}
-	bpf_kfunc_call_test_release(p);
-}
-
 static void test_kptr(struct map_value *v)
 {
 	test_kptr_unref(v);
 	test_kptr_ref(v);
-	test_kptr_get(v);
 }
 
 SEC("tc")
@@ -338,38 +321,25 @@ int test_map_kptr_ref_pre(struct map_value *v)
 	if (p_st->cnt.refs.counter != ref)
 		return 4;
 
-	p = bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
-	if (!p)
-		return 5;
-	ref++;
-	if (p_st->cnt.refs.counter != ref) {
-		ret = 6;
-		goto end;
-	}
-	bpf_kfunc_call_test_release(p);
-	ref--;
-	if (p_st->cnt.refs.counter != ref)
-		return 7;
-
 	p = bpf_kptr_xchg(&v->ref_ptr, NULL);
 	if (!p)
-		return 8;
+		return 5;
 	bpf_kfunc_call_test_release(p);
 	ref--;
 	if (p_st->cnt.refs.counter != ref)
-		return 9;
+		return 6;
 
 	p = bpf_kfunc_call_test_acquire(&arg);
 	if (!p)
-		return 10;
+		return 7;
 	ref++;
 
 	p = bpf_kptr_xchg(&v->ref_ptr, p);
 	if (p) {
-		ret = 11;
+		ret = 8;
 		goto end;
 	}
 	if (p_st->cnt.refs.counter != ref)
-		return 12;
+		return 9;
 	/* Leave in map */
 	return 0;
diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
index 15bf3127dba3..da8c724f839b 100644
--- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
@@ -21,8 +21,6 @@ struct array_map {
 
 extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
 extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
-extern struct prog_test_ref_kfunc *
-bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym;
 
 SEC("?tc")
 __failure __msg("kptr access size must be BPF_DW")
@@ -220,67 +218,6 @@ int reject_kptr_xchg_on_unref(struct __sk_buff *ctx)
 	return 0;
 }
 
-SEC("?tc")
-__failure __msg("arg#0 expected pointer to map value")
-int reject_kptr_get_no_map_val(struct __sk_buff *ctx)
-{
-	bpf_kfunc_call_test_kptr_get((void *)&ctx, 0, 0);
-	return 0;
-}
-
-SEC("?tc")
-__failure __msg("arg#0 expected pointer to map value")
-int reject_kptr_get_no_null_map_val(struct __sk_buff *ctx)
-{
-	bpf_kfunc_call_test_kptr_get(bpf_map_lookup_elem(&array_map, &(int){0}), 0, 0);
-	return 0;
-}
-
-SEC("?tc")
-__failure __msg("arg#0 no referenced kptr at map value offset=0")
-int reject_kptr_get_no_kptr(struct __sk_buff *ctx)
-{
-	struct map_value *v;
-	int key = 0;
-
-	v = bpf_map_lookup_elem(&array_map, &key);
-	if (!v)
-		return 0;
-
-	bpf_kfunc_call_test_kptr_get((void *)v, 0, 0);
-	return 0;
-}
-
-SEC("?tc")
-__failure __msg("arg#0 no referenced kptr at map value offset=8")
-int reject_kptr_get_on_unref(struct __sk_buff *ctx)
-{
-	struct map_value *v;
-	int key = 0;
-
-	v = bpf_map_lookup_elem(&array_map, &key);
-	if (!v)
-		return 0;
-
-	bpf_kfunc_call_test_kptr_get(&v->unref_ptr, 0, 0);
-	return 0;
-}
-
-SEC("?tc")
-__failure __msg("kernel function bpf_kfunc_call_test_kptr_get args#0")
-int reject_kptr_get_bad_type_match(struct __sk_buff *ctx)
-{
-	struct map_value *v;
-	int key = 0;
-
-	v = bpf_map_lookup_elem(&array_map, &key);
-	if (!v)
-		return 0;
-
-	bpf_kfunc_call_test_kptr_get((void *)&v->ref_memb_ptr, 0, 0);
-	return 0;
-}
-
 SEC("?tc")
 __failure __msg("R1 type=rcu_ptr_or_null_ expected=percpu_ptr_")
 int mark_ref_as_untrusted_or_null(struct __sk_buff *ctx)
@@ -428,21 +365,6 @@ int kptr_xchg_ref_state(struct __sk_buff *ctx)
 	return 0;
 }
 
-SEC("?tc")
-__failure __msg("Unreleased reference id=3 alloc_insn=")
-int kptr_get_ref_state(struct __sk_buff *ctx)
-{
-	struct map_value *v;
-	int key = 0;
-
-	v = bpf_map_lookup_elem(&array_map, &key);
-	if (!v)
-		return 0;
-
-	bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
-	return 0;
-}
-
 SEC("?tc")
 __failure __msg("Possibly NULL pointer passed to helper arg2")
 int kptr_xchg_possibly_null(struct __sk_buff *ctx)
diff --git a/tools/testing/selftests/bpf/verifier/map_kptr.c b/tools/testing/selftests/bpf/verifier/map_kptr.c
index d775ccb01989..a0cfc06d75bc 100644
--- a/tools/testing/selftests/bpf/verifier/map_kptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_kptr.c
@@ -288,33 +288,6 @@
 	.result = REJECT,
 	.errstr = "off=0 kptr isn't referenced kptr",
 },
-{
-	"map_kptr: unref: bpf_kfunc_call_test_kptr_get rejected",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_LD_MAP_FD(BPF_REG_6, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.fixup_map_kptr = { 1 },
-	.result = REJECT,
-	.errstr = "arg#0 no referenced kptr at map value offset=0",
-	.fixup_kfunc_btf_id = {
-		{ "bpf_kfunc_call_test_kptr_get", 13 },
-	}
-},
 /* Tests for referenced PTR_TO_BTF_ID */
 {
	"map_kptr: ref: loaded pointer marked as untrusted",
From patchwork Sun Apr 16 08:49:27 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 13212737
X-Patchwork-Delegate: bpf@iogearbox.net
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, memxor@gmail.com
Subject: [PATCH bpf-next v2 2/3] bpf: Remove KF_KPTR_GET kfunc flag
Date: Sun, 16 Apr 2023 03:49:27 -0500
Message-Id: <20230416084928.326135-3-void@manifault.com>
In-Reply-To: <20230416084928.326135-1-void@manifault.com>
References: <20230416084928.326135-1-void@manifault.com>

We've managed to improve the UX for kptrs significantly over the last
9 months. All of the existing use cases which previously had KF_KPTR_GET
kfuncs (struct bpf_cpumask *, struct task_struct *, and struct cgroup *)
have been updated to be synchronized using RCU. In other words, their
KF_KPTR_GET kfuncs have been removed in favor of KF_RCU | KF_ACQUIRE
kfuncs, with the pointers themselves also being readable from maps in an
RCU read region thanks to the types being RCU safe.

While KF_KPTR_GET was a logical starting point for kptrs, it's become
clear that it's not the correct abstraction. KF_KPTR_GET is a flag that
essentially does nothing other than enforce that the argument to a kfunc
is a pointer to a referenced kptr map value. At first glance, that's a
useful thing to guarantee to a kfunc. It gives kfuncs the ability to try
to acquire a reference on that kptr without requiring the BPF prog to do
something like this:

struct kptr_type *in_map, *new = NULL;

in_map = bpf_kptr_xchg(&map->value, NULL);
if (in_map) {
	new = bpf_kptr_type_acquire(in_map);
	in_map = bpf_kptr_xchg(&map->value, in_map);
	if (in_map)
		bpf_kptr_type_release(in_map);
}

That's clearly a pretty ugly (and racy) UX, and if using KF_KPTR_GET is
the only alternative, it's better than nothing. However, the problem with
any KF_KPTR_GET kfunc lies in the fact that it always requires some kind
of synchronization in order to safely do an opportunistic acquire of the
kptr in the map. This is because a BPF program running on another CPU
could do a bpf_kptr_xchg() on that map value, and free the kptr after
it's been read by the KF_KPTR_GET kfunc. For example, the now-removed
bpf_task_kptr_get() kfunc did the following:

struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
{
	struct task_struct *p;

	rcu_read_lock();
	p = READ_ONCE(*pp);

	/* If p is non-NULL, it could still be freed by another CPU,
	 * so we have to do an opportunistic refcount_inc_not_zero()
	 * and return NULL if the task will be freed after the
	 * current RCU read region.
	 */
	if (p && !refcount_inc_not_zero(&p->rcu_users))
		p = NULL;
	rcu_read_unlock();

	return p;
}

In other words, the kfunc uses RCU to ensure that the task remains valid
after it's been peeked from the map. However, this is completely
redundant with just defining a KF_RCU kfunc that itself does a
refcount_inc_not_zero(), which is exactly what bpf_task_acquire() now
does.

So, the question of whether KF_KPTR_GET is useful really comes down to,
"Are there any synchronization mechanisms / safety flags that are
required by certain kptrs, but which are not provided by the verifier to
kfuncs?" The answer to that question today is "No", because every kptr we
currently care about is RCU protected. Even if the answer ever became
"yes", the proper way to support that referenced kptr type would be to
add support for whatever synchronization mechanism it requires in the
verifier, rather than giving kfuncs a flag that says, "Here's a pointer
to a referenced kptr in a map, do whatever you need to do."

With all that said: to consolidate the kfunc API and simplify the
verifier a bit, this patch removes KF_KPTR_GET and all relevant logic
from the verifier.

Signed-off-by: David Vernet
---
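For reference, the KF_RCU acquire shape that replaces a kptr_get kfunc is
roughly the following. This is a sketch mirroring what bpf_task_acquire()
is described as doing above; it is not code from this series and may differ
from the in-tree implementation in detail:

/* Sketch: a KF_RCU acquire kfunc does the opportunistic refcount itself,
 * so no separate kptr_get kfunc is needed. The argument only has to be
 * valid under RCU (or trusted), which the verifier enforces.
 */
__bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p)
{
	if (refcount_inc_not_zero(&p->rcu_users))
		return p;
	return NULL;
}

/* Registered with KF_ACQUIRE | KF_RCU | KF_RET_NULL, so callers must
 * NULL-check the result.
 */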
 include/linux/btf.h   |  1 -
 kernel/bpf/verifier.c | 65 ------------------------------------------
 2 files changed, 66 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index 813227bff58a..508199e38415 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -18,7 +18,6 @@
 #define KF_ACQUIRE	(1 << 0) /* kfunc is an acquire function */
 #define KF_RELEASE	(1 << 1) /* kfunc is a release function */
 #define KF_RET_NULL	(1 << 2) /* kfunc returns a pointer that may be NULL */
-#define KF_KPTR_GET	(1 << 3) /* kfunc returns reference to a kptr */
 /* Trusted arguments are those which are guaranteed to be valid when passed to
  * the kfunc. It is used to enforce that pointers obtained from either acquire
  * kfuncs, or from the main kernel on a tracepoint or struct_ops callback
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6a41b69a424e..5dae11ee01c3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9339,11 +9339,6 @@ static bool is_kfunc_rcu(struct bpf_kfunc_call_arg_meta *meta)
 	return meta->kfunc_flags & KF_RCU;
 }
 
-static bool is_kfunc_arg_kptr_get(struct bpf_kfunc_call_arg_meta *meta, int arg)
-{
-	return arg == 0 && (meta->kfunc_flags & KF_KPTR_GET);
-}
-
 static bool __kfunc_param_match_suffix(const struct btf *btf,
				       const struct btf_param *arg,
				       const char *suffix)
@@ -9554,7 +9549,6 @@ enum kfunc_ptr_arg_type {
 	KF_ARG_PTR_TO_CTX,
 	KF_ARG_PTR_TO_ALLOC_BTF_ID,    /* Allocated object */
 	KF_ARG_PTR_TO_REFCOUNTED_KPTR, /* Refcounted local kptr */
-	KF_ARG_PTR_TO_KPTR,	       /* PTR_TO_KPTR but type specific */
 	KF_ARG_PTR_TO_DYNPTR,
 	KF_ARG_PTR_TO_ITER,
 	KF_ARG_PTR_TO_LIST_HEAD,
@@ -9666,21 +9660,6 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	if (is_kfunc_arg_refcounted_kptr(meta->btf, &args[argno]))
 		return KF_ARG_PTR_TO_REFCOUNTED_KPTR;
 
-	if (is_kfunc_arg_kptr_get(meta, argno)) {
-		if (!btf_type_is_ptr(ref_t)) {
-			verbose(env, "arg#0 BTF type must be a double pointer for kptr_get kfunc\n");
-			return -EINVAL;
-		}
-		ref_t = btf_type_by_id(meta->btf, ref_t->type);
-		ref_tname = btf_name_by_offset(meta->btf, ref_t->name_off);
-		if (!btf_type_is_struct(ref_t)) {
-			verbose(env, "kernel function %s args#0 pointer type %s %s is not supported\n",
-				meta->func_name, btf_type_str(ref_t), ref_tname);
-			return -EINVAL;
-		}
-		return KF_ARG_PTR_TO_KPTR;
-	}
-
 	if (is_kfunc_arg_dynptr(meta->btf, &args[argno]))
 		return KF_ARG_PTR_TO_DYNPTR;
 
@@ -9794,40 +9773,6 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
 	return 0;
 }
 
-static int process_kf_arg_ptr_to_kptr(struct bpf_verifier_env *env,
-				      struct bpf_reg_state *reg,
-				      const struct btf_type *ref_t,
-				      const char *ref_tname,
-				      struct bpf_kfunc_call_arg_meta *meta,
-				      int argno)
-{
-	struct btf_field *kptr_field;
-
-	/* check_func_arg_reg_off allows var_off for
-	 * PTR_TO_MAP_VALUE, but we need fixed offset to find
-	 * off_desc.
-	 */
-	if (!tnum_is_const(reg->var_off)) {
-		verbose(env, "arg#0 must have constant offset\n");
-		return -EINVAL;
-	}
-
-	kptr_field = btf_record_find(reg->map_ptr->record, reg->off + reg->var_off.value, BPF_KPTR);
-	if (!kptr_field || kptr_field->type != BPF_KPTR_REF) {
-		verbose(env, "arg#0 no referenced kptr at map value offset=%llu\n",
-			reg->off + reg->var_off.value);
-		return -EINVAL;
-	}
-
-	if (!btf_struct_ids_match(&env->log, meta->btf, ref_t->type, 0, kptr_field->kptr.btf,
-				  kptr_field->kptr.btf_id, true)) {
-		verbose(env, "kernel function %s args#%d expected pointer to %s %s\n",
-			meta->func_name, argno, btf_type_str(ref_t), ref_tname);
-		return -EINVAL;
-	}
-	return 0;
-}
-
 static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct bpf_verifier_state *state = env->cur_state;
@@ -10315,7 +10260,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			/* Trusted arguments have the same offset checks as release arguments */
 			arg_type |= OBJ_RELEASE;
 			break;
-		case KF_ARG_PTR_TO_KPTR:
 		case KF_ARG_PTR_TO_DYNPTR:
 		case KF_ARG_PTR_TO_ITER:
 		case KF_ARG_PTR_TO_LIST_HEAD:
@@ -10368,15 +10312,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				meta->arg_obj_drop.btf_id = reg->btf_id;
 			}
 			break;
-		case KF_ARG_PTR_TO_KPTR:
-			if (reg->type != PTR_TO_MAP_VALUE) {
-				verbose(env, "arg#0 expected pointer to map value\n");
-				return -EINVAL;
-			}
-			ret = process_kf_arg_ptr_to_kptr(env, reg, ref_t, ref_tname, meta, i);
-			if (ret < 0)
-				return ret;
-			break;
		case KF_ARG_PTR_TO_DYNPTR:
		{
			enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
From patchwork Sun Apr 16 08:49:28 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 13212738
X-Patchwork-Delegate: bpf@iogearbox.net
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, memxor@gmail.com
Subject: [PATCH bpf-next v2 3/3] bpf,docs: Remove KF_KPTR_GET from documentation
Date: Sun, 16 Apr 2023 03:49:28 -0500
Message-Id: <20230416084928.326135-4-void@manifault.com>
In-Reply-To: <20230416084928.326135-1-void@manifault.com>
References: <20230416084928.326135-1-void@manifault.com>

A prior patch removed KF_KPTR_GET from the kernel. Now that the flag is no
longer available to kfunc authors, this patch removes it from the BPF kfunc
documentation.

Signed-off-by: David Vernet
---
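As context for the renumbered flag descriptions below: kfunc flags are
supplied when a kfunc set is registered. The snippet is only an illustrative
sketch (the set name, chosen kfuncs, flags, and program type are hypothetical
and not part of this patch):

/* Flags are attached per kfunc when the set is registered. */
BTF_SET8_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
BTF_SET8_END(example_kfunc_ids)

static const struct btf_kfunc_id_set example_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &example_kfunc_ids,
};

/* Called from subsystem or module init code: */
err = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &example_kfunc_set);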
 Documentation/bpf/kfuncs.rst | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index 3b42cfe12437..ea2516374d92 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -184,16 +184,7 @@ in. All copies of the pointer being released are invalidated as a result of
 invoking kfunc with this flag. KF_RELEASE kfuncs automatically receive the
 protection afforded by the KF_TRUSTED_ARGS flag described below.
 
-2.4.4 KF_KPTR_GET flag
-----------------------
-
-The KF_KPTR_GET flag is used to indicate that the kfunc takes the first argument
-as a pointer to kptr, safely increments the refcount of the object it points to,
-and returns a reference to the user. The rest of the arguments may be normal
-arguments of a kfunc. The KF_KPTR_GET flag should be used in conjunction with
-KF_ACQUIRE and KF_RET_NULL flags.
-
-2.4.5 KF_TRUSTED_ARGS flag
+2.4.4 KF_TRUSTED_ARGS flag
 --------------------------
 
 The KF_TRUSTED_ARGS flag is used for kfuncs taking pointer arguments. It
@@ -205,7 +196,7 @@ exception described below). There are two types of pointers to kernel objects
 which are considered "valid":
 
 1. Pointers which are passed as tracepoint or struct_ops callback arguments.
-2. Pointers which were returned from a KF_ACQUIRE or KF_KPTR_GET kfunc.
+2. Pointers which were returned from a KF_ACQUIRE kfunc.
 
 Pointers to non-BTF objects (e.g. scalar pointers) may also be passed to
 KF_TRUSTED_ARGS kfuncs, and may have a non-zero offset.
@@ -232,13 +223,13 @@ In other words, you must:
 2. Specify the type and name of the trusted nested field. This field must
    match the field in the original type definition exactly.
 
-2.4.6 KF_SLEEPABLE flag
+2.4.5 KF_SLEEPABLE flag
 -----------------------
 
 The KF_SLEEPABLE flag is used for kfuncs that may sleep. Such kfuncs can only
 be called by sleepable BPF programs (BPF_F_SLEEPABLE).
 
-2.4.7 KF_DESTRUCTIVE flag
+2.4.6 KF_DESTRUCTIVE flag
 --------------------------
 
 The KF_DESTRUCTIVE flag is used to indicate functions calling which is
@@ -247,7 +238,7 @@ rebooting or panicking. Due to this additional restrictions apply to these
 calls. At the moment they only require CAP_SYS_BOOT capability, but more can be
 added later.
 
-2.4.8 KF_RCU flag
+2.4.7 KF_RCU flag
 -----------------
 
 The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. The kfuncs marked with
@@ -260,7 +251,7 @@ also be KF_RET_NULL.
 
 .. _KF_deprecated_flag:
 
-2.4.9 KF_DEPRECATED flag
+2.4.8 KF_DEPRECATED flag
 ------------------------
 
 The KF_DEPRECATED flag is used for kfuncs which are scheduled to be