From patchwork Tue Nov 8 07:40:53 2022
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13035974
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
CC: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, KP Singh
Subject: [PATCH bpf-next v2 1/8] compiler_types: Define __rcu as __attribute__((btf_type_tag("rcu")))
Date: Mon, 7 Nov 2022 23:40:53 -0800
Message-ID: <20221108074053.262530-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>
References: <20221108074047.261848-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org

Currently, without rcu attribute information in BTF, the verifier treats an
rcu-tagged pointer as a normal pointer. This can be a problem for sleepable
programs, where rcu_read_lock()/unlock() is not available. For example, in a
sleepable fentry program, if an rcu-protected memory access is interleaved
with a sleepable helper/kfunc, the memory access after the sleepable call
might be invalid since the object may have been freed by then. To prevent
such cases, introducing rcu tagging for memory accesses in the verifier
makes it possible to reject such programs.

To enable rcu tagging in BTF, define __rcu as
__attribute__((btf_type_tag("rcu"))) during kernel compilation, so the
__rcu information is preserved in DWARF and BTF and can later be used for
bpf program verification.
Acked-by: KP Singh
Signed-off-by: Yonghong Song
---
 include/linux/compiler_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index eb0466236661..7c1afe0f4129 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -49,7 +49,8 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 # endif
 # define __iomem
 # define __percpu	BTF_TYPE_TAG(percpu)
-# define __rcu
+# define __rcu		BTF_TYPE_TAG(rcu)
+
 # define __chk_user_ptr(x)	(void)0
 # define __chk_io_ptr(x)	(void)0
 /* context/locking */

From patchwork Tue Nov 8 07:40:58 2022
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13035975
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
CC: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v2 2/8] bpf: Refactor btf_struct_access callback interface
Date: Mon, 7 Nov 2022 23:40:58 -0800
Message-ID: <20221108074058.262856-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>
References: <20221108074047.261848-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org

The current bpf_verifier_ops->btf_struct_access() callback walks a struct
member access chain and gathers information such as next_btf_id and
bpf_type_flag (for __user and __percpu).
In later patches, additional information, such as whether the pointer is
tagged with __rcu, will be gathered as well. So refactor the
btf_struct_access() interface to wrap next_btf_id and bpf_type_flag in a
structure, so that a new field can be added to that structure without
modifying all callback functions.

Signed-off-by: Yonghong Song
---
 include/linux/bpf.h              | 11 ++++++++---
 include/linux/filter.h           |  4 ++--
 kernel/bpf/btf.c                 | 25 ++++++++++++-------------
 kernel/bpf/verifier.c            | 20 ++++++++++----------
 net/bpf/bpf_dummy_struct_ops.c   |  6 ++----
 net/core/filter.c                | 20 ++++++++------------
 net/ipv4/bpf_tcp_ca.c            |  6 ++----
 net/netfilter/nf_conntrack_bpf.c |  3 +--
 8 files changed, 45 insertions(+), 50 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 798aec816970..5011cb50abf1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -758,6 +758,11 @@ struct bpf_prog_ops {
 			union bpf_attr __user *uattr);
 };

+struct btf_struct_access_info {
+	u32 next_btf_id;
+	enum bpf_type_flag flag;
+};
+
 struct bpf_verifier_ops {
 	/* return eBPF function prototype for verification */
 	const struct bpf_func_proto *
@@ -782,7 +787,7 @@ struct bpf_verifier_ops {
 				  const struct btf *btf,
 				  const struct btf_type *t, int off, int size,
 				  enum bpf_access_type atype,
-				  u32 *next_btf_id, enum bpf_type_flag *flag);
+				  struct btf_struct_access_info *info);
 };

 struct bpf_prog_offload_ops {
@@ -2070,7 +2075,7 @@ static inline bool bpf_tracing_btf_ctx_access(int off, int size,
 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype,
-		      u32 *next_btf_id, enum bpf_type_flag *flag);
+		      struct btf_struct_access_info *info);
 bool btf_struct_ids_match(struct bpf_verifier_log *log,
 			  const struct btf *btf, u32 id, int off,
 			  const struct btf *need_btf, u32 need_type_id,
@@ -2323,7 +2328,7 @@ static inline int btf_struct_access(struct bpf_verifier_log *log,
 				    const struct btf *btf,
 				    const struct btf_type *t,
 				    int off, int size, enum bpf_access_type atype,
-				    u32 *next_btf_id, enum bpf_type_flag *flag)
+				    struct btf_struct_access_info *info)
 {
 	return -EACCES;
 }
diff --git a/include/linux/filter.h b/include/linux/filter.h
index efc42a6e3aed..75340a5b83d3 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -570,8 +570,8 @@ DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key);
 extern struct mutex nf_conn_btf_access_lock;
 extern int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct btf *btf,
 				     const struct btf_type *t, int off, int size,
-				     enum bpf_access_type atype, u32 *next_btf_id,
-				     enum bpf_type_flag *flag);
+				     enum bpf_access_type atype,
+				     struct btf_struct_access_info *info);

 typedef unsigned int (*bpf_dispatcher_fn)(const void *ctx,
 					  const struct bpf_insn *insnsi,
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 5579ff3a5b54..cf16c0ead9f4 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5639,7 +5639,7 @@ enum bpf_struct_walk_result {

 static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 			   const struct btf_type *t, int off, int size,
-			   u32 *next_btf_id, enum bpf_type_flag *flag)
+			   struct btf_struct_access_info *info)
 {
 	u32 i, moff, mtrue_end, msize = 0, total_nelems = 0;
 	const struct btf_type *mtype, *elem_type = NULL;
@@ -5818,7 +5818,7 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,

 		/* return if the offset matches the member offset */
 		if (off == moff) {
-			*next_btf_id = mid;
+			info->next_btf_id = mid;
 			return WALK_STRUCT;
 		}

@@ -5853,8 +5853,8 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 			stype = btf_type_skip_modifiers(btf, mtype->type, &id);
 			if (btf_type_is_struct(stype)) {
-				*next_btf_id = id;
-				*flag = tmp_flag;
+				info->next_btf_id = id;
+				info->flag = tmp_flag;
 				return WALK_PTR;
 			}
 		}
@@ -5881,22 +5881,20 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype __maybe_unused,
-		      u32 *next_btf_id, enum bpf_type_flag *flag)
+		      struct btf_struct_access_info *info)
 {
-	enum bpf_type_flag tmp_flag = 0;
+	struct btf_struct_access_info tmp_info = {};
 	int err;
-	u32 id;

 	do {
-		err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag);
+		err = btf_struct_walk(log, btf, t, off, size, &tmp_info);

 		switch (err) {
 		case WALK_PTR:
 			/* If we found the pointer or scalar on t+off,
 			 * we're done.
 			 */
-			*next_btf_id = id;
-			*flag = tmp_flag;
+			*info = tmp_info;
 			return PTR_TO_BTF_ID;
 		case WALK_SCALAR:
 			return SCALAR_VALUE;
@@ -5905,7 +5903,7 @@ int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 			 * by diving in it. At this point the offset is
 			 * aligned with the new type, so set it to 0.
 			 */
-			t = btf_type_by_id(btf, id);
+			t = btf_type_by_id(btf, tmp_info.next_btf_id);
 			off = 0;
 			break;
 		default:
@@ -5942,7 +5940,7 @@ bool btf_struct_ids_match(struct bpf_verifier_log *log,
 			  bool strict)
 {
 	const struct btf_type *type;
-	enum bpf_type_flag flag;
+	struct btf_struct_access_info info = {};
 	int err;

 	/* Are we already done? */
@@ -5958,7 +5956,7 @@ bool btf_struct_ids_match(struct bpf_verifier_log *log,
 	type = btf_type_by_id(btf, id);
 	if (!type)
 		return false;
-	err = btf_struct_walk(log, btf, type, off, 1, &id, &flag);
+	err = btf_struct_walk(log, btf, type, off, 1, &info);
 	if (err != WALK_STRUCT)
 		return false;

@@ -5967,6 +5965,7 @@ bool btf_struct_ids_match(struct bpf_verifier_log *log,
 	 * continue the search with offset 0 in the new
 	 * type.
 	 */
+	id = info.next_btf_id;
 	if (!btf_types_are_same(btf, id, need_btf, need_type_id)) {
 		off = 0;
 		goto again;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d3b75aa0c54d..4d50f9568245 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4648,8 +4648,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	struct bpf_reg_state *reg = regs + regno;
 	const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
 	const char *tname = btf_name_by_offset(reg->btf, t->name_off);
-	enum bpf_type_flag flag = 0;
-	u32 btf_id;
+	struct btf_struct_access_info info = {};
 	int ret;

 	if (off < 0) {
@@ -4684,7 +4683,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,

 	if (env->ops->btf_struct_access) {
 		ret = env->ops->btf_struct_access(&env->log, reg->btf, t,
-						  off, size, atype, &btf_id, &flag);
+						  off, size, atype, &info);
 	} else {
 		if (atype != BPF_READ) {
 			verbose(env, "only read is supported\n");
@@ -4692,7 +4691,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 		}

 		ret = btf_struct_access(&env->log, reg->btf, t, off, size,
-					atype, &btf_id, &flag);
+					atype, &info);
 	}

 	if (ret < 0)
@@ -4702,10 +4701,11 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	 * also inherit the untrusted flag.
 	 */
 	if (type_flag(reg->type) & PTR_UNTRUSTED)
-		flag |= PTR_UNTRUSTED;
+		info.flag |= PTR_UNTRUSTED;

 	if (atype == BPF_READ && value_regno >= 0)
-		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
+		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, info.next_btf_id,
+				info.flag);

 	return 0;
 }
@@ -4717,11 +4717,10 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
 				   int value_regno)
 {
 	struct bpf_reg_state *reg = regs + regno;
+	struct btf_struct_access_info info = {};
 	struct bpf_map *map = reg->map_ptr;
-	enum bpf_type_flag flag = 0;
 	const struct btf_type *t;
 	const char *tname;
-	u32 btf_id;
 	int ret;

 	if (!btf_vmlinux) {
@@ -4756,12 +4755,13 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
 		return -EACCES;
 	}

-	ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &btf_id, &flag);
+	ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &info);
 	if (ret < 0)
 		return ret;

 	if (value_regno >= 0)
-		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, btf_id, flag);
+		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, info.next_btf_id,
+				info.flag);

 	return 0;
 }
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index e78dadfc5829..dd72028ac5b1 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -159,8 +159,7 @@ static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
 					   const struct btf *btf,
 					   const struct btf_type *t, int off,
 					   int size, enum bpf_access_type atype,
-					   u32 *next_btf_id,
-					   enum bpf_type_flag *flag)
+					   struct btf_struct_access_info *info)
 {
 	const struct btf_type *state;
 	s32 type_id;
@@ -177,8 +176,7 @@ static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
 		return -EACCES;
 	}

-	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
-				flag);
+	err = btf_struct_access(log, btf, t, off, size, atype, info);
 	if (err < 0)
 		return err;
diff --git a/net/core/filter.c b/net/core/filter.c
index cb3b635e35be..98f10147c677 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -8653,26 +8653,24 @@ EXPORT_SYMBOL_GPL(nf_conn_btf_access_lock);
 int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct btf *btf,
 			      const struct btf_type *t, int off, int size,
-			      enum bpf_access_type atype, u32 *next_btf_id,
-			      enum bpf_type_flag *flag);
+			      enum bpf_access_type atype,
+			      struct btf_struct_access_info *info);
 EXPORT_SYMBOL_GPL(nfct_btf_struct_access);

 static int tc_cls_act_btf_struct_access(struct bpf_verifier_log *log,
 					const struct btf *btf,
 					const struct btf_type *t, int off,
 					int size, enum bpf_access_type atype,
-					u32 *next_btf_id,
-					enum bpf_type_flag *flag)
+					struct btf_struct_access_info *info)
 {
 	int ret = -EACCES;

 	if (atype == BPF_READ)
-		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
-					 flag);
+		return btf_struct_access(log, btf, t, off, size, atype, info);

 	mutex_lock(&nf_conn_btf_access_lock);
 	if (nfct_btf_struct_access)
-		ret = nfct_btf_struct_access(log, btf, t, off, size, atype, next_btf_id, flag);
+		ret = nfct_btf_struct_access(log, btf, t, off, size, atype, info);
 	mutex_unlock(&nf_conn_btf_access_lock);

 	return ret;
@@ -8741,18 +8739,16 @@ static int xdp_btf_struct_access(struct bpf_verifier_log *log,
 				 const struct btf *btf, const struct btf_type *t,
 				 int off, int size, enum bpf_access_type atype,
-				 u32 *next_btf_id,
-				 enum bpf_type_flag *flag)
+				 struct btf_struct_access_info *info)
 {
 	int ret = -EACCES;

 	if (atype == BPF_READ)
-		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
-					 flag);
+		return btf_struct_access(log, btf, t, off, size, atype, info);

 	mutex_lock(&nf_conn_btf_access_lock);
 	if (nfct_btf_struct_access)
-		ret = nfct_btf_struct_access(log, btf, t, off, size, atype, next_btf_id, flag);
+		ret = nfct_btf_struct_access(log, btf, t, off, size, atype, info);
 	mutex_unlock(&nf_conn_btf_access_lock);

 	return ret;
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 6da16ae6a962..dde4ac3f44fd 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -72,14 +72,12 @@ static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log,
 					const struct btf *btf,
 					const struct btf_type *t, int off,
 					int size, enum bpf_access_type atype,
-					u32 *next_btf_id,
-					enum bpf_type_flag *flag)
+					struct btf_struct_access_info *info)
 {
 	size_t end;

 	if (atype == BPF_READ)
-		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
-					 flag);
+		return btf_struct_access(log, btf, t, off, size, atype, info);

 	if (t != tcp_sock_type) {
 		bpf_log(log, "only read is supported\n");
diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c
index 8639e7efd0e2..e09b0dfd8115 100644
--- a/net/netfilter/nf_conntrack_bpf.c
+++ b/net/netfilter/nf_conntrack_bpf.c
@@ -194,8 +194,7 @@ static int _nf_conntrack_btf_struct_access(struct bpf_verifier_log *log,
 					   const struct btf *btf,
 					   const struct btf_type *t, int off,
 					   int size, enum bpf_access_type atype,
-					   u32 *next_btf_id,
-					   enum bpf_type_flag *flag)
+					   struct btf_struct_access_info *info)
 {
 	const struct btf_type *ncit;
 	const struct btf_type *nct;

From patchwork Tue Nov 8 07:41:04 2022
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13035976
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
CC: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v2 3/8] bpf: Abstract out functions to check sleepable helpers
Date: Mon, 7 Nov 2022 23:41:04 -0800
Message-ID: <20221108074104.263145-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>
References: <20221108074047.261848-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org

Abstract out two functions, for bpf_lsm and bpf_trace respectively, that
check whether a particular helper is sleepable. These two functions will
later be used by the verifier to check whether a helper is sleepable.
There is no functional change.

Signed-off-by: Yonghong Song
---
 include/linux/bpf_lsm.h      |  6 ++++++
 include/linux/trace_events.h |  8 ++++++++
 kernel/bpf/bpf_lsm.c         | 20 ++++++++++++++++----
 kernel/trace/bpf_trace.c     | 22 ++++++++++++++++++----
 4 files changed, 48 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 4bcf76a9bb06..d99b1caf118e 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -28,6 +28,7 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
 			const struct bpf_prog *prog);

 bool bpf_lsm_is_sleepable_hook(u32 btf_id);
+const struct bpf_func_proto *bpf_lsm_sleepable_func_proto(enum bpf_func_id func_id);

 static inline struct bpf_storage_blob *bpf_inode(
 	const struct inode *inode)
@@ -50,6 +51,11 @@ static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
 {
 	return false;
 }
+static inline const struct bpf_func_proto *
+bpf_lsm_sleepable_func_proto(enum bpf_func_id func_id)
+{
+	return NULL;
+}

 static inline int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
 				      const struct bpf_prog *prog)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 20749bd9db71..ccf046bf253f 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -16,6 +16,8 @@ struct tracer;
 struct dentry;
 struct bpf_prog;
 union bpf_attr;
+struct bpf_func_proto;
+enum bpf_func_id;

 const char *trace_print_flags_seq(struct trace_seq *p, const char *delim,
 				  unsigned long flags,
@@ -748,6 +750,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
 			    u32 *fd_type, const char **buf,
 			    u64 *probe_offset, u64 *probe_addr);
 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+const struct bpf_func_proto *bpf_tracing_sleepable_func_proto(enum bpf_func_id func_id);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -794,6 +797,11 @@ bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 {
 	return -EOPNOTSUPP;
 }
+static inline const struct bpf_func_proto *
+bpf_tracing_sleepable_func_proto(enum bpf_func_id func_id)
+{
+	return NULL;
+}
 #endif

 enum {
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index d6c9b3705f24..2f993a003389 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -192,6 +192,18 @@ static const struct bpf_func_proto bpf_get_attach_cookie_proto = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };

+const struct bpf_func_proto *bpf_lsm_sleepable_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_ima_inode_hash:
+		return &bpf_ima_inode_hash_proto;
+	case BPF_FUNC_ima_file_hash:
+		return &bpf_ima_file_hash_proto;
+	default:
+		return NULL;
+	}
+}
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -203,6 +215,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return func_proto;
 	}

+	func_proto = bpf_lsm_sleepable_func_proto(func_id);
+	if (func_proto)
+		return prog->aux->sleepable ? func_proto : NULL;
+
 	switch (func_id) {
 	case BPF_FUNC_inode_storage_get:
 		return &bpf_inode_storage_get_proto;
@@ -220,10 +236,6 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_spin_unlock_proto;
 	case BPF_FUNC_bprm_opts_set:
 		return &bpf_bprm_opts_set_proto;
-	case BPF_FUNC_ima_inode_hash:
-		return prog->aux->sleepable ? &bpf_ima_inode_hash_proto : NULL;
-	case BPF_FUNC_ima_file_hash:
-		return prog->aux->sleepable ? &bpf_ima_file_hash_proto : NULL;
 	case BPF_FUNC_get_attach_cookie:
 		return bpf_prog_has_trampoline(prog) ? &bpf_get_attach_cookie_proto : NULL;
 #ifdef CONFIG_NET
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index f2d8d070d024..7804fa71d3f1 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1383,9 +1383,27 @@ static int __init bpf_key_sig_kfuncs_init(void)
 late_initcall(bpf_key_sig_kfuncs_init);
 #endif /* CONFIG_KEYS */

+const struct bpf_func_proto *bpf_tracing_sleepable_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_copy_from_user:
+		return &bpf_copy_from_user_proto;
+	case BPF_FUNC_copy_from_user_task:
+		return &bpf_copy_from_user_task_proto;
+	default:
+		return NULL;
+	}
+}
+
 static const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
+	const struct bpf_func_proto *func_proto;
+
+	func_proto = bpf_tracing_sleepable_func_proto(func_id);
+	if (func_proto)
+		return prog->aux->sleepable ? func_proto : NULL;
+
 	switch (func_id) {
 	case BPF_FUNC_map_lookup_elem:
 		return &bpf_map_lookup_elem_proto;
@@ -1484,10 +1502,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_jiffies64_proto;
 	case BPF_FUNC_get_task_stack:
 		return &bpf_get_task_stack_proto;
-	case BPF_FUNC_copy_from_user:
-		return prog->aux->sleepable ? &bpf_copy_from_user_proto : NULL;
-	case BPF_FUNC_copy_from_user_task:
-		return prog->aux->sleepable ? &bpf_copy_from_user_task_proto : NULL;
 	case BPF_FUNC_snprintf_btf:
 		return &bpf_snprintf_btf_proto;
 	case BPF_FUNC_per_cpu_ptr:

From patchwork Tue Nov 8 07:41:09 2022
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13035983
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
CC: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v2 4/8] bpf: Add kfunc bpf_rcu_read_lock/unlock()
Date: Mon, 7 Nov 2022 23:41:09 -0800
Message-ID: <20221108074109.263773-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>
References: <20221108074047.261848-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org

Add two kfuncs, bpf_rcu_read_lock() and bpf_rcu_read_unlock(), which can
be used by all program types. A new kfunc hook type BTF_KFUNC_HOOK_GENERIC
is added; it corresponds to prog type BPF_PROG_TYPE_UNSPEC and indicates
that the kfunc is intended for all prog types. The kfunc
bpf_rcu_read_lock() is tagged with the new flag KF_RCU_LOCK and
bpf_rcu_read_unlock() with the new flag KF_RCU_UNLOCK. The verifier uses
these two flags to identify the two helpers.
Signed-off-by: Yonghong Song --- include/linux/bpf.h | 3 +++ include/linux/btf.h | 2 ++ kernel/bpf/btf.c | 8 ++++++++ kernel/bpf/helpers.c | 25 ++++++++++++++++++++++++- 4 files changed, 37 insertions(+), 1 deletion(-) For new kfuncs, I added KF_RCU_LOCK and KF_RCU_UNLOCK flags to indicate a helper could be bpf_rcu_read_lock/unlock(). This could be a waste for kfunc flag space as the flag is used to identify one helper. Alternatively, we might identify kfunc based on btf_id. Any suggestions are welcome. diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 5011cb50abf1..b4bbcafd1c9b 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2118,6 +2118,9 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog); const struct btf_func_model * bpf_jit_find_kfunc_model(const struct bpf_prog *prog, const struct bpf_insn *insn); +void bpf_rcu_read_lock(void); +void bpf_rcu_read_unlock(void); + struct bpf_core_ctx { struct bpf_verifier_log *log; const struct btf *btf; diff --git a/include/linux/btf.h b/include/linux/btf.h index d80345fa566b..8783ca7e6079 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -51,6 +51,8 @@ #define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */ #define KF_SLEEPABLE (1 << 5) /* kfunc may sleep */ #define KF_DESTRUCTIVE (1 << 6) /* kfunc performs destructive actions */ +#define KF_RCU_LOCK (1 << 7) /* kfunc does rcu_read_lock() */ +#define KF_RCU_UNLOCK (1 << 8) /* kfunc does rcu_read_unlock() */ /* * Return the name of the passed struct, if exists, or halt the build if for diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index cf16c0ead9f4..d2ee1669a2f3 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -199,6 +199,7 @@ DEFINE_IDR(btf_idr); DEFINE_SPINLOCK(btf_idr_lock); enum btf_kfunc_hook { + BTF_KFUNC_HOOK_GENERIC, BTF_KFUNC_HOOK_XDP, BTF_KFUNC_HOOK_TC, BTF_KFUNC_HOOK_STRUCT_OPS, @@ -7498,6 +7499,8 @@ static u32 *__btf_kfunc_id_set_contains(const struct btf *btf, static int 
bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
 {
	switch (prog_type) {
+	case BPF_PROG_TYPE_UNSPEC:
+		return BTF_KFUNC_HOOK_GENERIC;
	case BPF_PROG_TYPE_XDP:
		return BTF_KFUNC_HOOK_XDP;
	case BPF_PROG_TYPE_SCHED_CLS:
@@ -7526,6 +7529,11 @@ u32 *btf_kfunc_id_set_contains(const struct btf *btf,
				u32 kfunc_btf_id)
 {
	enum btf_kfunc_hook hook;
+	u32 *kfunc_flags;
+
+	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_GENERIC, kfunc_btf_id);
+	if (kfunc_flags)
+		return kfunc_flags;

	hook = bpf_prog_type_to_kfunc_hook(prog_type);
	return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 283f55bbeb70..f364d01e9d93 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1717,9 +1717,32 @@ static const struct btf_kfunc_id_set tracing_kfunc_set = {
	.set = &tracing_btf_ids,
 };

+void bpf_rcu_read_lock(void)
+{
+	rcu_read_lock();
+}
+
+void bpf_rcu_read_unlock(void)
+{
+	rcu_read_unlock();
+}
+
+BTF_SET8_START(generic_btf_ids)
+BTF_ID_FLAGS(func, bpf_rcu_read_lock, KF_RCU_LOCK)
+BTF_ID_FLAGS(func, bpf_rcu_read_unlock, KF_RCU_UNLOCK)
+BTF_SET8_END(generic_btf_ids)
+
+static const struct btf_kfunc_id_set generic_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set = &generic_btf_ids,
+};
+
 static int __init kfunc_init(void)
 {
-	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set);
+	int ret;
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set);
+	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, &generic_kfunc_set);
 }

 late_initcall(kfunc_init);

From patchwork Tue Nov 8 07:41:14 2022
X-Patchwork-Id: 13035985
From: Yonghong Song
Subject:
[PATCH bpf-next v2 5/8] bpf: Add bpf_rcu_read_lock() verifier support
Date: Mon, 7 Nov 2022 23:41:14 -0800
Message-ID: <20221108074114.264485-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>

To simplify the design and match common practice, nested bpf_rcu_read_lock()
regions are not allowed. During verification, each paired
bpf_rcu_read_lock()/unlock() region gets a unique region id, starting from 1,
and each rcu ptr register remembers the region id in which it was initialized.
The following simple example illustrates rcu lock regions and the usage of
rcu ptr's:

  ...                      <=== rcu lock region 0
  bpf_rcu_read_lock()      <=== rcu lock region 1
  rcu_ptr1 = ...           <=== rcu_ptr1 with region 1
  ... using rcu_ptr1 ...
  bpf_rcu_read_unlock()
  ...                      <=== rcu lock region -1
  bpf_rcu_read_lock()      <=== rcu lock region 2
  rcu_ptr2 = ...           <=== rcu_ptr2 with region 2
  ... using rcu_ptr2 ...
  ... using rcu_ptr1 ...   <=== wrong, region 1 rcu_ptr used in region 2
  bpf_rcu_read_unlock()

Outside any rcu lock region, the region id is either 0 or the negative of the
previous valid region id, so the next valid region id can be computed easily.

Note that rcu protection is not needed for non-sleepable programs, but it is
still supported to make cross-sleepable/non-sleepable development easier.
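The region-id bookkeeping described above can be sketched as plain C. Illustrative only: `active_rcu_lock` here is a bare global and `rcu_region_lock/unlock()` are made-up names, not the verifier's per-state field or code.

```c
#include <assert.h>

/* 0: no region seen yet; > 0: id of the current rcu lock region;
 * < 0: outside a region, and the absolute value is the id of the last
 * region, so the next lock can hand out a fresh id. */
int active_rcu_lock;

void rcu_region_lock(void)
{
	assert(active_rcu_lock <= 0);           /* nesting is rejected */
	active_rcu_lock = -active_rcu_lock + 1; /* start a new region */
}

void rcu_region_unlock(void)
{
	assert(active_rcu_lock > 0);            /* unmatched unlock is rejected */
	active_rcu_lock = -active_rcu_lock;     /* leave, remembering the id */
}
```

Running the example from the commit message through this sketch yields the region ids 0, 1, -1, 2, -2 in order, matching the annotations above.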
For non-sleepable programs, the following insns can be inside an rcu lock
region:
  - any non-call insn, except BPF_ABS/BPF_IND
  - non-sleepable helpers and kfuncs
Also, the bpf_*_storage_get() helpers' 5th hidden argument (the memory
allocation flag) should be GFP_ATOMIC inside such a region.

If a pointer (PTR_TO_BTF_ID) is marked as rcu, then any use of that pointer,
as well as the load which produced it, needs to be protected by
bpf_rcu_read_lock(). The following shows a couple of examples:

  struct task_struct {
      ...
      struct task_struct __rcu *real_parent;
      struct css_set __rcu *cgroups;
      ...
  };
  struct css_set {
      ...
      struct cgroup *dfl_cgrp;
      ...
  };
  ...
  task = bpf_get_current_task_btf();
  cgroups = task->cgroups;
  dfl_cgroup = cgroups->dfl_cgrp;
  ... using dfl_cgroup ...

The bpf_rcu_read_lock/unlock() calls should be added as below to avoid
verification failures:

  task = bpf_get_current_task_btf();
  bpf_rcu_read_lock();
  cgroups = task->cgroups;
  dfl_cgroup = cgroups->dfl_cgrp;
  bpf_rcu_read_unlock();
  ... using dfl_cgroup ...

The following is another example for task->real_parent:

  task = bpf_get_current_task_btf();
  bpf_rcu_read_lock();
  real_parent = task->real_parent;
  ...
  bpf_task_storage_get(&map, real_parent, 0, 0);
  bpf_rcu_read_unlock();

Signed-off-by: Yonghong Song
---
 include/linux/bpf.h          |  1 +
 include/linux/bpf_verifier.h |  7 +++
 kernel/bpf/btf.c             | 32 ++++++++++++-
 kernel/bpf/verifier.c        | 92 +++++++++++++++++++++++++++++++-----
 4 files changed, 120 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b4bbcafd1c9b..98af0c9ec721 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -761,6 +761,7 @@ struct bpf_prog_ops {
 struct btf_struct_access_info {
	u32 next_btf_id;
	enum bpf_type_flag flag;
+	bool is_rcu;
 };

 struct bpf_verifier_ops {
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 1a32baa78ce2..5d703637bb12 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -179,6 +179,10 @@ struct bpf_reg_state {
	 */
	s32 subreg_def;
	enum bpf_reg_liveness live;
+	/* 0: not rcu ptr; > 0: rcu ptr, id of the rcu read lock region where
+	 * the rcu ptr reg is initialized.
+	 */
+	int active_rcu_lock;
	/* if (!precise && SCALAR_VALUE) min/max/tnum don't affect safety */
	bool precise;
 };
@@ -324,6 +328,8 @@ struct bpf_verifier_state {
	u32 insn_idx;
	u32 curframe;
	u32 active_spin_lock;
+	/* <= 0: not in rcu read lock region; > 0: the rcu lock region id */
+	int active_rcu_lock;
	bool speculative;

	/* first and last insn idx of this verifier state */
@@ -424,6 +430,7 @@ struct bpf_insn_aux_data {
	u32 seen; /* this insn was processed by the verifier at env->pass_cnt */
	bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */
	bool zext_dst; /* this insn zero extends dst reg */
+	bool storage_get_func_atomic; /* bpf_*_storage_get() with atomic memory alloc */
	u8 alu_state; /* used in combination with alu_limit */

	/* below fields are initialized once */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index d2ee1669a2f3..c5a9569f2ae0 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5831,6 +5831,7 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
	if (btf_type_is_ptr(mtype)) {
		const struct btf_type *stype, *t;
		enum bpf_type_flag tmp_flag = 0;
+		bool is_rcu = false;
		u32 id;

		if (msize != size || off != moff) {
@@ -5850,12 +5851,16 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
			/* check __percpu tag */
			if (strcmp(tag_value, "percpu") == 0)
				tmp_flag = MEM_PERCPU;
+			/* check __rcu tag */
+			if (strcmp(tag_value, "rcu") == 0)
+				is_rcu = true;
		}

		stype = btf_type_skip_modifiers(btf, mtype->type, &id);
		if (btf_type_is_struct(stype)) {
			info->next_btf_id = id;
			info->flag = tmp_flag;
+			info->is_rcu = is_rcu;
			return WALK_PTR;
		}
	}
@@ -6317,7 +6322,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 {
	enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
	bool rel = false, kptr_get = false, trusted_args = false;
-	bool sleepable = false;
+	bool sleepable = false, rcu_lock = false, rcu_unlock = false;
	struct bpf_verifier_log *log = &env->log;
	u32 i,
nargs, ref_id, ref_obj_id = 0;
	bool is_kfunc = btf_is_kernel(btf);
@@ -6356,6 +6361,31 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
		kptr_get = kfunc_meta->flags & KF_KPTR_GET;
		trusted_args = kfunc_meta->flags & KF_TRUSTED_ARGS;
		sleepable = kfunc_meta->flags & KF_SLEEPABLE;
+		rcu_lock = kfunc_meta->flags & KF_RCU_LOCK;
+		rcu_unlock = kfunc_meta->flags & KF_RCU_UNLOCK;
+	}
+
+	/* checking rcu read lock/unlock */
+	if (env->cur_state->active_rcu_lock > 0) {
+		if (rcu_lock) {
+			bpf_log(log, "nested rcu read lock (kernel function %s)\n", func_name);
+			return -EINVAL;
+		} else if (rcu_unlock) {
+			/* change active_rcu_lock to its corresponding negative value to
+			 * preserve the previous lock region id.
+			 */
+			env->cur_state->active_rcu_lock = -env->cur_state->active_rcu_lock;
+		} else if (sleepable) {
+			bpf_log(log, "kernel func %s is sleepable within rcu_read_lock region\n",
+				func_name);
+			return -EINVAL;
+		}
+	} else if (rcu_lock) {
+		/* a new lock region started, increase the region id.
+		 */
+		env->cur_state->active_rcu_lock = (-env->cur_state->active_rcu_lock) + 1;
+	} else if (rcu_unlock) {
+		bpf_log(log, "unmatched rcu read unlock (kernel function %s)\n", func_name);
+		return -EINVAL;
	}

	/* check that BTF function arguments match actual types that the
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4d50f9568245..85151f2a655a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "disasm.h"
@@ -513,6 +514,14 @@ static bool is_callback_calling_function(enum bpf_func_id func_id)
	       func_id == BPF_FUNC_user_ringbuf_drain;
 }

+static bool is_storage_get_function(enum bpf_func_id func_id)
+{
+	return func_id == BPF_FUNC_sk_storage_get ||
+	       func_id == BPF_FUNC_inode_storage_get ||
+	       func_id == BPF_FUNC_task_storage_get ||
+	       func_id == BPF_FUNC_cgrp_storage_get;
+}
+
 static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
					const struct bpf_map *map)
 {
@@ -1203,6 +1212,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
	dst_state->speculative = src->speculative;
	dst_state->curframe = src->curframe;
	dst_state->active_spin_lock = src->active_spin_lock;
+	dst_state->active_rcu_lock = src->active_rcu_lock;
	dst_state->branches = src->branches;
	dst_state->parent = src->parent;
	dst_state->first_insn_idx = src->first_insn_idx;
@@ -1687,6 +1697,7 @@ static void __mark_reg_unknown(const struct bpf_verifier_env *env,
	reg->var_off = tnum_unknown;
	reg->frameno = 0;
	reg->precise = !env->bpf_capable;
+	reg->active_rcu_lock = 0;
	__mark_reg_unbounded(reg);
 }

@@ -1727,7 +1738,7 @@ static void mark_btf_ld_reg(struct bpf_verifier_env *env,
			    struct bpf_reg_state *regs, u32 regno,
			    enum bpf_reg_type reg_type,
			    struct btf *btf, u32 btf_id,
-			    enum bpf_type_flag flag)
+			    enum bpf_type_flag flag, bool set_rcu_lock)
 {
	if (reg_type == SCALAR_VALUE) {
		mark_reg_unknown(env, regs, regno);
@@ -1737,6 +1748,9 @@ static void mark_btf_ld_reg(struct bpf_verifier_env *env,
regs[regno].type = PTR_TO_BTF_ID | flag;
	regs[regno].btf = btf;
	regs[regno].btf_id = btf_id;
+	/* the reg rcu lock region id equals the current rcu lock region id */
+	if (set_rcu_lock)
+		regs[regno].active_rcu_lock = env->cur_state->active_rcu_lock;
 }

 #define DEF_NOT_SUBREG (0)
@@ -3940,7 +3954,7 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
		 * value from map as PTR_TO_BTF_ID, with the correct type.
		 */
		mark_btf_ld_reg(env, cur_regs(env), value_regno, PTR_TO_BTF_ID, kptr_field->kptr.btf,
-				kptr_field->kptr.btf_id, PTR_MAYBE_NULL | PTR_UNTRUSTED);
+				kptr_field->kptr.btf_id, PTR_MAYBE_NULL | PTR_UNTRUSTED, false);
		/* For mark_ptr_or_null_reg */
		val_reg->id = ++env->id_gen;
	} else if (class == BPF_STX) {
@@ -4681,6 +4695,18 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
		return -EACCES;
	}

+	/* If the reg's rcu region id does not equal the current rcu region id,
+	 * the rcu access is not protected properly: it is either outside any
+	 * valid rcu region or in a different rcu region.
+	 */
+	if (env->prog->aux->sleepable && reg->active_rcu_lock &&
+	    reg->active_rcu_lock != env->cur_state->active_rcu_lock) {
+		verbose(env,
+			"R%d is ptr_%s access rcu-protected memory with off=%d, not rcu protected\n",
+			regno, tname, off);
+		return -EACCES;
+	}
+
	if (env->ops->btf_struct_access) {
		ret = env->ops->btf_struct_access(&env->log, reg->btf, t,
						  off, size, atype, &info);
@@ -4697,6 +4723,16 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
	if (ret < 0)
		return ret;

+	/* The value is a rcu pointer. For a sleepable program, the load needs to be
+	 * in a rcu lock region, similar to rcu_dereference().
+	 */
+	if (info.is_rcu && env->prog->aux->sleepable && env->cur_state->active_rcu_lock <= 0) {
+		verbose(env,
+			"R%d is rcu dereference ptr_%s with off=%d, not in rcu_read_lock region\n",
+			regno, tname, off);
+		return -EACCES;
+	}
+
	/* If this is an untrusted pointer, all pointers formed by walking it
	 * also inherit the untrusted flag.
	 */
@@ -4705,7 +4741,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
	if (atype == BPF_READ && value_regno >= 0)
		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, info.next_btf_id,
-				info.flag);
+				info.flag, info.is_rcu && env->prog->aux->sleepable);

	return 0;
 }
@@ -4761,7 +4797,7 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
	if (value_regno >= 0)
		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, info.next_btf_id,
-				info.flag);
+				info.flag, info.is_rcu && env->prog->aux->sleepable);

	return 0;
 }
@@ -5874,6 +5910,17 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
	if (arg_type & PTR_MAYBE_NULL)
		type &= ~PTR_MAYBE_NULL;

+	/* If a rcu pointer is a helper argument, the helper should be protected in
+	 * the same rcu lock region where the rcu pointer reg is initialized.
+	 */
+	if (env->prog->aux->sleepable && reg->active_rcu_lock &&
+	    reg->active_rcu_lock != env->cur_state->active_rcu_lock) {
+		verbose(env,
+			"R%d arg type %s needs rcu protection\n",
+			regno, reg_type_str(env, reg->type));
+		return -EACCES;
+	}
+
	for (i = 0; i < ARRAY_SIZE(compatible->types); i++) {
		expected = compatible->types[i];
		if (expected == NOT_INIT)
@@ -7418,6 +7465,18 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
			return err;
	}

+	if (env->cur_state->active_rcu_lock > 0) {
+		if (bpf_lsm_sleepable_func_proto(func_id) ||
+		    bpf_tracing_sleepable_func_proto(func_id)) {
+			verbose(env, "sleepable helper %s#%d in rcu_read_lock region\n",
+				func_id_name(func_id), func_id);
+			return -EINVAL;
+		}
+
+		if (env->prog->aux->sleepable && is_storage_get_function(func_id))
+			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
+	}
+
	meta.func_id = func_id;
	/* check args */
	for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
@@ -10605,6 +10664,11 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
		return -EINVAL;
	}

+	if (env->prog->aux->sleepable && env->cur_state->active_rcu_lock > 0) {
+		verbose(env, "BPF_LD_[ABS|IND] cannot be used inside bpf_rcu_read_lock-ed region\n");
+		return -EINVAL;
+	}
+
	if (regs[ctx_reg].type != PTR_TO_CTX) {
		verbose(env,
			"at the time of BPF_LD_ABS|IND R6 != pointer to skb\n");
@@ -11869,6 +11933,9 @@ static bool states_equal(struct bpf_verifier_env *env,
	if (old->active_spin_lock != cur->active_spin_lock)
		return false;

+	if (old->active_rcu_lock != cur->active_rcu_lock)
+		return false;
+
	/* for states to be equal callsites have to be the same
	 * and all frame states need to be equivalent
	 */
@@ -12553,6 +12620,11 @@ static int do_check(struct bpf_verifier_env *env)
				return -EINVAL;
			}

+			if (env->cur_state->active_rcu_lock > 0) {
+				verbose(env, "bpf_rcu_read_unlock is missing\n");
+				return -EINVAL;
+			}
+
			/* We must do check_reference_leak here before
			 * prepare_func_exit to handle
the case when
			 * state->curframe > 0, it may be a callback
@@ -14289,14 +14361,12 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
			goto patch_call_imm;
		}

-		if (insn->imm == BPF_FUNC_task_storage_get ||
-		    insn->imm == BPF_FUNC_sk_storage_get ||
-		    insn->imm == BPF_FUNC_inode_storage_get ||
-		    insn->imm == BPF_FUNC_cgrp_storage_get) {
-			if (env->prog->aux->sleepable)
-				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
-			else
+		if (is_storage_get_function(insn->imm)) {
+			if (!env->prog->aux->sleepable ||
+			    env->insn_aux_data[i + delta].storage_get_func_atomic)
				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_ATOMIC);
+			else
+				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
			insn_buf[1] = *insn;
			cnt = 2;

From patchwork Tue Nov 8 07:41:20 2022
X-Patchwork-Id: 13035982
From: Yonghong Song
Subject: [PATCH bpf-next v2 6/8] bpf: Enable sleepable support for cgrp local storage
Date: Mon, 7 Nov 2022 23:41:20 -0800
Message-ID: <20221108074120.265235-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>

With proper bpf_rcu_read_lock() support, sleepable support for cgrp local
storage can be enabled, as the typical use case
task->cgroups->dfl_cgrp can be protected with bpf_rcu_read_lock().

Signed-off-by: Yonghong Song
---
 kernel/bpf/verifier.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 85151f2a655a..5709eda21afd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12927,10 +12927,11 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
	case BPF_MAP_TYPE_INODE_STORAGE:
	case BPF_MAP_TYPE_SK_STORAGE:
	case BPF_MAP_TYPE_TASK_STORAGE:
+	case BPF_MAP_TYPE_CGRP_STORAGE:
		break;
	default:
		verbose(env,
-			"Sleepable programs can only use array, hash, and ringbuf maps\n");
+			"Sleepable programs can only use array, hash, ringbuf and local storage maps\n");
		return -EINVAL;
	}

From patchwork Tue Nov 8 07:41:25 2022
X-Patchwork-Id: 13035986
From: Yonghong Song
Subject: [PATCH bpf-next v2 7/8] selftests/bpf: Add tests for bpf_rcu_read_lock()
Date: Mon, 7 Nov 2022 23:41:25 -0800
Message-ID: <20221108074125.267314-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>

Add a few positive/negative tests to exercise bpf_rcu_read_lock() and its
corresponding verifier support.
Signed-off-by: Yonghong Song --- .../selftests/bpf/prog_tests/rcu_read_lock.c | 127 +++++++ .../selftests/bpf/progs/rcu_read_lock.c | 353 ++++++++++++++++++ 2 files changed, 480 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c create mode 100644 tools/testing/selftests/bpf/progs/rcu_read_lock.c diff --git a/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c new file mode 100644 index 000000000000..4cf1a5188c34 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates.*/ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include "rcu_read_lock.skel.h" + +static void test_local_storage(void) +{ + struct rcu_read_lock *skel; + int err; + + skel = rcu_read_lock__open(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + skel->bss->target_pid = syscall(SYS_gettid); + + bpf_program__set_autoload(skel->progs.cgrp_succ, true); + bpf_program__set_autoload(skel->progs.task_succ, true); + bpf_program__set_autoload(skel->progs.two_regions, true); + bpf_program__set_autoload(skel->progs.non_sleepable_1, true); + bpf_program__set_autoload(skel->progs.non_sleepable_2, true); + err = rcu_read_lock__load(skel); + if (!ASSERT_OK(err, "skel_load")) + goto done; + + err = rcu_read_lock__attach(skel); + if (!ASSERT_OK(err, "skel_attach")) + goto done; + + syscall(SYS_getpgid); + + ASSERT_EQ(skel->bss->result, 2, "result"); +done: + rcu_read_lock__destroy(skel); +} + +static void test_runtime_diff_rcu_tag(void) +{ + struct rcu_read_lock *skel; + int err; + + skel = rcu_read_lock__open(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + bpf_program__set_autoload(skel->progs.dump_ipv6_route, true); + err = rcu_read_lock__load(skel); + ASSERT_OK(err, "skel_load"); + rcu_read_lock__destroy(skel); +} + +static void test_negative(void) +{ 
+#define NUM_FAILED_PROGS 10 + struct rcu_read_lock *skel; + struct bpf_program *prog; + int i, err; + + for (i = 0; i < NUM_FAILED_PROGS; i++) { + skel = rcu_read_lock__open(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + switch (i) { + case 0: + prog = skel->progs.miss_lock; + break; + case 1: + prog = skel->progs.miss_unlock; + break; + case 2: + prog = skel->progs.cgrp_incorrect_rcu_region; + break; + case 3: + prog = skel->progs.task_incorrect_rcu_region1; + break; + case 4: + prog = skel->progs.task_incorrect_rcu_region2; + break; + case 5: + prog = skel->progs.non_sleepable_rcu_mismatch; + break; + case 6: + prog = skel->progs.inproper_sleepable_helper; + break; + case 7: + prog = skel->progs.inproper_sleepable_kfunc; + break; + case 8: + prog = skel->progs.nested_rcu_region; + break; + default: + prog = skel->progs.cross_rcu_region; + break; + } + + bpf_program__set_autoload(prog, true); + err = rcu_read_lock__load(skel); + if (!ASSERT_ERR(err, "skel_load")) { + rcu_read_lock__destroy(skel); + return; + } + } +} + +void test_rcu_read_lock(void) +{ + int cgroup_fd; + + cgroup_fd = test__join_cgroup("/rcu_read_lock"); + if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /rcu_read_lock")) + return; + + if (test__start_subtest("local_storage")) + test_local_storage(); + if (test__start_subtest("runtime_diff_rcu_tag")) + test_runtime_diff_rcu_tag(); + if (test__start_subtest("negative_tests")) + test_negative(); + + close(cgroup_fd); +} diff --git a/tools/testing/selftests/bpf/progs/rcu_read_lock.c b/tools/testing/selftests/bpf/progs/rcu_read_lock.c new file mode 100644 index 000000000000..fbd1aeedc14c --- /dev/null +++ b/tools/testing/selftests/bpf/progs/rcu_read_lock.c @@ -0,0 +1,353 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. 
+ */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_tracing_net.h"
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} map_a SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} map_b SEC(".maps");
+
+__u32 user_data, key_serial, target_pid = 0;
+__u64 flags, result = 0;
+
+struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
+void bpf_key_put(struct bpf_key *key) __ksym;
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int cgrp_succ(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+	long init_val = 2;
+	long *ptr;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	bpf_rcu_read_lock();
+	cgroups = task->cgroups;
+	dfl_cgrp = cgroups->dfl_cgrp;
+	bpf_rcu_read_unlock();
+	ptr = bpf_cgrp_storage_get(&map_a, dfl_cgrp, &init_val,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!ptr)
+		return 0;
+	ptr = bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0, 0);
+	if (!ptr)
+		return 0;
+	result = *ptr;
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int task_succ(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	/* region including helper using rcu ptr */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int two_regions(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	/* two regions */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	bpf_rcu_read_unlock();
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_getpgid")
+int non_sleepable_1(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_getpgid")
+int non_sleepable_2(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	bpf_rcu_read_unlock();
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?iter.s/ipv6_route")
+int dump_ipv6_route(struct bpf_iter__ipv6_route *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	struct fib6_info *rt = ctx->rt;
+	const struct net_device *dev;
+	struct fib6_nh *fib6_nh;
+	unsigned int flags;
+	struct nexthop *nh;
+
+	if (rt == (void *)0)
+		return 0;
+
+	/* fib6_nh is not a rcu ptr */
+	fib6_nh = &rt->fib6_nh[0];
+	flags = rt->fib6_flags;
+
+	nh = rt->nh;
+	bpf_rcu_read_lock();
+	if (rt->nh)
+		/* fib6_nh is a rcu ptr */
+		fib6_nh = &nh->nh_info->fib6_nh;
+
+	/* fib6_nh could be a rcu or non-rcu ptr */
+	if (fib6_nh->fib_nh_gw_family) {
+		flags |= RTF_GATEWAY;
+		BPF_SEQ_PRINTF(seq, "%pi6 ", &fib6_nh->fib_nh_gw6);
+	} else {
+		BPF_SEQ_PRINTF(seq, "00000000000000000000000000000000 ");
+	}
+
+	dev = fib6_nh->fib_nh_dev;
+	bpf_rcu_read_unlock();
+	if (dev)
+		BPF_SEQ_PRINTF(seq, "%08x %08x %08x %08x %8s\n", rt->fib6_metric,
+			       rt->fib6_ref.refs.counter, 0, flags, dev->name);
+	else
+		BPF_SEQ_PRINTF(seq, "%08x %08x %08x %08x\n", rt->fib6_metric,
+			       rt->fib6_ref.refs.counter, 0, flags);
+
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int miss_lock(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+
+	/* missing bpf_rcu_read_lock() */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	cgroups = task->cgroups;
+	bpf_rcu_read_unlock();
+	dfl_cgrp = cgroups->dfl_cgrp;
+	bpf_rcu_read_unlock();
+	(void)bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int miss_unlock(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+
+	/* missing bpf_rcu_read_unlock() */
+	bpf_rcu_read_lock();
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	cgroups = task->cgroups;
+	bpf_rcu_read_unlock();
+	dfl_cgrp = cgroups->dfl_cgrp;
+	(void)bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int cgrp_incorrect_rcu_region(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+
+	/* load with rcu_ptr outside the rcu read lock region */
+	bpf_rcu_read_lock();
+	task = bpf_get_current_task_btf();
+	cgroups = task->cgroups;
+	bpf_rcu_read_unlock();
+	dfl_cgrp = cgroups->dfl_cgrp;
+	(void)bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int task_incorrect_rcu_region1(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	/* helper use of rcu ptr outside the rcu read lock region */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	bpf_rcu_read_unlock();
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int task_incorrect_rcu_region2(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	/* missing bpf_rcu_read_unlock() in one path */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (real_parent)
+		bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_getpgid")
+int non_sleepable_rcu_mismatch(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	/* non-sleepable: missing bpf_rcu_read_unlock() in one path */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (real_parent)
+		bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int inproper_sleepable_helper(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+	struct pt_regs *regs;
+	__u32 value = 0;
+	void *ptr;
+
+	task = bpf_get_current_task_btf();
+
+	/* sleepable helper in rcu read lock region */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	regs = (struct pt_regs *)bpf_task_pt_regs(real_parent);
+	if (!regs) {
+		bpf_rcu_read_unlock();
+		return 0;
+	}
+
+	ptr = (void *)PT_REGS_IP(regs);
+	(void)bpf_copy_from_user_task(&value, sizeof(uint32_t), ptr, task, 0);
+	user_data = value;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?lsm.s/bpf")
+int BPF_PROG(inproper_sleepable_kfunc, int cmd, union bpf_attr *attr, unsigned int size)
+{
+	struct bpf_key *bkey;
+
+	/* sleepable kfunc in rcu read lock region */
+	bpf_rcu_read_lock();
+	bkey = bpf_lookup_user_key(key_serial, flags);
+	bpf_rcu_read_unlock();
+	if (!bkey)
+		return -1;
+	bpf_key_put(bkey);
+
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int nested_rcu_region(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	/* nested rcu read lock regions */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int cross_rcu_region(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	/* rcu ptr define/use in different regions */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	bpf_rcu_read_unlock();
+	bpf_rcu_read_lock();
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}

From patchwork Tue Nov 8 07:41:30 2022
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13035984
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
CC: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v2 8/8] selftests/bpf: Add rcu_read_lock test to s390x deny list
Date: Mon, 7 Nov 2022 23:41:30 -0800
Message-ID: <20221108074130.268541-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>
References: <20221108074047.261848-1-yhs@fb.com>
List-ID: X-Mailing-List: bpf@vger.kernel.org

The new rcu_read_lock test will fail on s390x with the following error
message:

  ...
  test_rcu_read_lock:PASS:join_cgroup /rcu_read_lock 0 nsec
  test_local_storage:PASS:skel_open 0 nsec
  libbpf: prog 'cgrp_succ': failed to find kernel BTF type ID of '__s390x_sys_getpgid': -3
  libbpf: prog 'cgrp_succ': failed to prepare load attributes: -3
  libbpf: prog 'cgrp_succ': failed to load: -3
  libbpf: failed to load object 'rcu_read_lock'
  libbpf: failed to load BPF skeleton 'rcu_read_lock': -3
  test_local_storage:FAIL:skel_load unexpected error: -3 (errno 3)
  ...

So add it to the s390x deny list.

Signed-off-by: Yonghong Song
---
 tools/testing/selftests/bpf/DENYLIST.s390x | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index be4e3d47ea3e..dd5db40b5a09 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -41,6 +41,7 @@ module_attach	# skel_attach skeleton attach failed: -
 mptcp
 netcnt	# failed to load BPF skeleton 'netcnt_prog': -7	(?)
 probe_user	# check_kprobe_res wrong kprobe res from probe read	(?)
+rcu_read_lock	# failed to find kernel BTF type ID of '__x64_sys_getpgid': -3	(?)
 recursion	# skel_attach unexpected error: -524	(trampoline)
 ringbuf	# skel_load skeleton load failed	(?)
 select_reuseport	# intermittently fails on new s390x setup