From patchwork Mon Aug 8 14:06:10 2022
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 12938823
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 01/17] bpf: Link shimlink directly in trampoline
Date: Mon, 8 Aug 2022 16:06:10 +0200
Message-Id: <20220808140626.422731-2-jolsa@kernel.org>
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>

We are going to get rid of struct bpf_tramp_link in the following
changes, and the cgroup_shim_find logic does not fit that. We can
store the link directly in the trampoline and omit the cgroup_shim_find
search logic.

Signed-off-by: Jiri Olsa
---
 include/linux/bpf.h     |  3 +++
 kernel/bpf/trampoline.c | 23 +++--------------------
 2 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 20c26aed7896..ed2a921094bc 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -841,6 +841,8 @@ struct bpf_tramp_image {
 	};
 };
 
+struct bpf_shim_tramp_link;
+
 struct bpf_trampoline {
 	/* hlist for trampoline_table */
 	struct hlist_node hlist;
@@ -868,6 +870,7 @@ struct bpf_trampoline {
 	struct bpf_tramp_image *cur_image;
 	u64 selector;
 	struct module *mod;
+	struct bpf_shim_tramp_link *shim_link;
 };
 
 struct bpf_attach_target_info {
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index ff87e38af8a7..7a65d33cda60 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -689,24 +689,6 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
 	return shim_link;
 }
 
-static struct bpf_shim_tramp_link *cgroup_shim_find(struct bpf_trampoline *tr,
-						    bpf_func_t bpf_func)
-{
-	struct bpf_tramp_link *link;
-	int kind;
-
-	for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
-		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			struct bpf_prog *p = link->link.prog;
-
-			if (p->bpf_func == bpf_func)
-				return container_of(link, struct bpf_shim_tramp_link, link);
-		}
-	}
-
-	return NULL;
-}
-
 int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
 				    int cgroup_atype)
 {
@@ -733,7 +715,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
 
 	mutex_lock(&tr->mutex);
 
-	shim_link = cgroup_shim_find(tr, bpf_func);
+	shim_link = tr->shim_link;
 	if (shim_link) {
 		/* Reusing existing shim attached by the other program. */
 		bpf_link_inc(&shim_link->link.link);
@@ -756,6 +738,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
 		goto err;
 
 	shim_link->trampoline = tr;
+	tr->shim_link = shim_link;
 	/* note, we're still holding tr refcnt from above */
 
 	mutex_unlock(&tr->mutex);
@@ -789,7 +772,7 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
 		return;
 
 	mutex_lock(&tr->mutex);
-	shim_link = cgroup_shim_find(tr, bpf_func);
+	shim_link = tr->shim_link;
 	mutex_unlock(&tr->mutex);
 
 	if (shim_link)

From patchwork Mon Aug 8 14:06:11 2022
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 12938824
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 02/17] bpf: Replace bpf_tramp_links with bpf_tramp_progs
Date: Mon, 8 Aug 2022 16:06:11 +0200
Message-Id: <20220808140626.422731-3-jolsa@kernel.org>
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>

This is the first part of replacing the link-based program storage
in the trampoline with an array of programs.

Change bpf_trampoline_get_progs to return bpf_tramp_progs instead of
bpf_tramp_links, and use this object all the way down from
bpf_trampoline_update to the arch JIT generation.
TODO: now that we have ARM trampoline support, we also need to change
that one in the same way.

Signed-off-by: Jiri Olsa
---
 arch/x86/net/bpf_jit_comp.c    | 38 +++++++++++++++---------------
 include/linux/bpf.h            | 16 +++++++++----
 kernel/bpf/bpf_struct_ops.c    | 18 +++++++--------
 kernel/bpf/trampoline.c        | 42 +++++++++++++++++++---------------
 net/bpf/bpf_dummy_struct_ops.c | 10 ++++----
 5 files changed, 67 insertions(+), 57 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index c1f6c1c51d99..05c01b007bae 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1782,7 +1782,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 }
 
 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
-			   struct bpf_tramp_link *l, int stack_size,
+			   struct bpf_tramp_prog *tp, int stack_size,
 			   int run_ctx_off, bool save_ret)
 {
 	void (*exit)(struct bpf_prog *prog, u64 start,
@@ -1792,8 +1792,8 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	u8 *prog = *pprog;
 	u8 *jmp_insn;
 	int ctx_cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
-	struct bpf_prog *p = l->link.prog;
-	u64 cookie = l->cookie;
+	struct bpf_prog *p = tp->prog;
+	u64 cookie = tp->cookie;
 
 	/* mov rdi, cookie */
 	emit_mov_imm64(&prog, BPF_REG_1, (long) cookie >> 32, (u32) (long) cookie);
@@ -1899,14 +1899,14 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
 }
 
 static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
-		      struct bpf_tramp_links *tl, int stack_size,
+		      struct bpf_tramp_progs *tp, int stack_size,
 		      int run_ctx_off, bool save_ret)
 {
 	int i;
 	u8 *prog = *pprog;
 
-	for (i = 0; i < tl->nr_links; i++) {
-		if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
+	for (i = 0; i < tp->nr_progs; i++) {
+		if (invoke_bpf_prog(m, &prog, &tp->progs[i], stack_size,
 				    run_ctx_off, save_ret))
 			return -EINVAL;
 	}
@@ -1915,7 +1915,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 }
 
 static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
-			      struct bpf_tramp_links *tl, int stack_size,
+			      struct bpf_tramp_progs *tp, int stack_size,
 			      int run_ctx_off, u8 **branches)
 {
 	u8 *prog = *pprog;
@@ -1926,8 +1926,8 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 	 */
 	emit_mov_imm32(&prog, false, BPF_REG_0, 0);
 	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
-	for (i = 0; i < tl->nr_links; i++) {
-		if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, true))
+	for (i = 0; i < tp->nr_progs; i++) {
+		if (invoke_bpf_prog(m, &prog, &tp->progs[i], stack_size, run_ctx_off, true))
 			return -EINVAL;
 
 		/* mod_ret prog stored return value into [rbp - 8]. Emit:
@@ -2012,14 +2012,14 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
  */
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
-				struct bpf_tramp_links *tlinks,
+				struct bpf_tramp_progs *tprogs,
 				void *orig_call)
 {
 	int ret, i, nr_args = m->nr_args;
 	int regs_off, ip_off, args_off, stack_size = nr_args * 8, run_ctx_off;
-	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
-	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
-	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+	struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
+	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
 	u8 **branches = NULL;
 	u8 *prog;
 	bool save_ret;
@@ -2112,13 +2112,13 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		}
 	}
 
-	if (fentry->nr_links)
+	if (fentry->nr_progs)
 		if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off,
 			       flags & BPF_TRAMP_F_RET_FENTRY_RET))
 			return -EINVAL;
 
-	if (fmod_ret->nr_links) {
-		branches = kcalloc(fmod_ret->nr_links, sizeof(u8 *),
+	if (fmod_ret->nr_progs) {
+		branches = kcalloc(fmod_ret->nr_progs, sizeof(u8 *),
 				   GFP_KERNEL);
 		if (!branches)
 			return -ENOMEM;
@@ -2150,7 +2150,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		prog += X86_PATCH_SIZE;
 	}
 
-	if (fmod_ret->nr_links) {
+	if (fmod_ret->nr_progs) {
 		/* From Intel 64 and IA-32 Architectures Optimization
 		 * Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler
 		 * Coding Rule 11: All branch targets should be 16-byte
@@ -2160,12 +2160,12 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		/* Update the branches saved in invoke_bpf_mod_ret with the
 		 * aligned address of do_fexit.
 		 */
-		for (i = 0; i < fmod_ret->nr_links; i++)
+		for (i = 0; i < fmod_ret->nr_progs; i++)
 			emit_cond_near_jump(&branches[i], prog, branches[i],
 					    X86_JNE);
 	}
 
-	if (fexit->nr_links)
+	if (fexit->nr_progs)
 		if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off, false)) {
 			ret = -EINVAL;
 			goto cleanup;
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ed2a921094bc..80b2c17da64d 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -767,9 +767,14 @@ struct btf_func_model {
  */
 #define BPF_MAX_TRAMP_LINKS 38
 
-struct bpf_tramp_links {
-	struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS];
-	int nr_links;
+struct bpf_tramp_prog {
+	struct bpf_prog *prog;
+	u64 cookie;
+};
+
+struct bpf_tramp_progs {
+	struct bpf_tramp_prog progs[BPF_MAX_TRAMP_LINKS];
+	int nr_progs;
 };
 
 struct bpf_tramp_run_ctx;
@@ -797,7 +802,7 @@ struct bpf_tramp_run_ctx;
 struct bpf_tramp_image;
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
-				struct bpf_tramp_links *tlinks,
+				struct bpf_tramp_progs *tprogs,
 				void *orig_call);
 /* these two functions are called from generated trampoline */
 u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx);
@@ -907,6 +912,7 @@ static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func(
 }
 
 #ifdef CONFIG_BPF_JIT
+struct bpf_tramp_link;
 int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
 int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
@@ -1236,7 +1242,7 @@ bool bpf_struct_ops_get(const void *kdata);
 void bpf_struct_ops_put(const void *kdata);
 int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 				       void *value);
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
 				      struct bpf_tramp_link *link,
 				      const struct btf_func_model *model,
 				      void *image, void *image_end);
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 84b2d9dba79a..d51dced406eb 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -332,21 +332,21 @@ const struct bpf_link_ops bpf_struct_ops_link_lops = {
 	.dealloc = bpf_struct_ops_link_dealloc,
 };
 
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
 				      struct bpf_tramp_link *link,
 				      const struct btf_func_model *model,
 				      void *image, void *image_end)
 {
 	u32 flags;
 
-	tlinks[BPF_TRAMP_FENTRY].links[0] = link;
-	tlinks[BPF_TRAMP_FENTRY].nr_links = 1;
+	tprogs[BPF_TRAMP_FENTRY].progs[0].prog = link->link.prog;
+	tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
 	/* BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
 	 * and it must be used alone.
 	 */
 	flags = model->ret_size > 0 ? BPF_TRAMP_F_RET_FENTRY_RET : 0;
 	return arch_prepare_bpf_trampoline(NULL, image, image_end,
-					   model, flags, tlinks, NULL);
+					   model, flags, tprogs, NULL);
 }
 
 static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
@@ -357,7 +357,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	struct bpf_struct_ops_value *uvalue, *kvalue;
 	const struct btf_member *member;
 	const struct btf_type *t = st_ops->type;
-	struct bpf_tramp_links *tlinks = NULL;
+	struct bpf_tramp_progs *tprogs = NULL;
 	void *udata, *kdata;
 	int prog_fd, err = 0;
 	void *image, *image_end;
@@ -381,8 +381,8 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	if (uvalue->state || refcount_read(&uvalue->refcnt))
 		return -EINVAL;
 
-	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
-	if (!tlinks)
+	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
+	if (!tprogs)
 		return -ENOMEM;
 
 	uvalue = (struct bpf_struct_ops_value *)st_map->uvalue;
@@ -478,7 +478,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			      &bpf_struct_ops_link_lops, prog);
 		st_map->links[i] = &link->link;
 
-		err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+		err = bpf_struct_ops_prepare_trampoline(tprogs, link,
 							&st_ops->func_models[i],
 							image, image_end);
 		if (err < 0)
@@ -520,7 +520,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	memset(uvalue, 0, map->value_size);
 	memset(kvalue, 0, map->value_size);
 unlock:
-	kfree(tlinks);
+	kfree(tprogs);
 	mutex_unlock(&st_map->lock);
 	return err;
 }
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 7a65d33cda60..f41fb1af9f0e 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -269,30 +269,34 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 	return ret;
 }
 
-static struct bpf_tramp_links *
+static struct bpf_tramp_progs *
 bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
+	struct bpf_tramp_progs *tprogs;
 	struct bpf_tramp_link *link;
-	struct bpf_tramp_links *tlinks;
-	struct bpf_tramp_link **links;
+	struct bpf_tramp_prog *tp;
 	int kind;
 
 	*total = 0;
-	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
-	if (!tlinks)
+	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
+	if (!tprogs)
 		return ERR_PTR(-ENOMEM);
 
 	for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
-		tlinks[kind].nr_links = tr->progs_cnt[kind];
+		tprogs[kind].nr_progs = tr->progs_cnt[kind];
 		*total += tr->progs_cnt[kind];
-		links = tlinks[kind].links;
+		tp = &tprogs[kind].progs[0];
 
 		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			*ip_arg |= link->link.prog->call_get_func_ip;
-			*links++ = link;
+			struct bpf_prog *prog = link->link.prog;
+
+			*ip_arg |= prog->call_get_func_ip;
+			tp->prog = prog;
+			tp->cookie = link->cookie;
+			tp++;
 		}
 	}
-	return tlinks;
+	return tprogs;
 }
 
 static void __bpf_tramp_image_put_deferred(struct work_struct *work)
@@ -431,14 +435,14 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
 static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
 {
 	struct bpf_tramp_image *im;
-	struct bpf_tramp_links *tlinks;
+	struct bpf_tramp_progs *tprogs;
 	u32 orig_flags = tr->flags;
 	bool ip_arg = false;
 	int err, total;
 
-	tlinks = bpf_trampoline_get_progs(tr, &total, &ip_arg);
-	if (IS_ERR(tlinks))
-		return PTR_ERR(tlinks);
+	tprogs = bpf_trampoline_get_progs(tr, &total, &ip_arg);
+	if (IS_ERR(tprogs))
+		return PTR_ERR(tprogs);
 
 	if (total == 0) {
 		err = unregister_fentry(tr, tr->cur_image->image);
@@ -457,8 +461,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 	/* clear all bits except SHARE_IPMODIFY */
 	tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
 
-	if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
-	    tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
+	if (tprogs[BPF_TRAMP_FEXIT].nr_progs ||
+	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs) {
 		/* NOTE: BPF_TRAMP_F_RESTORE_REGS and BPF_TRAMP_F_SKIP_FRAME
 		 * should not be set together.
 		 */
@@ -478,7 +482,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 #endif
 
 	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
-					  &tr->func.model, tr->flags, tlinks,
+					  &tr->func.model, tr->flags, tprogs,
 					  tr->func.addr);
 	if (err < 0)
 		goto out;
@@ -515,7 +519,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 	/* If any error happens, restore previous flags */
 	if (err)
 		tr->flags = orig_flags;
-	kfree(tlinks);
+	kfree(tprogs);
 	return err;
 }
 
@@ -983,7 +987,7 @@ void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr)
 int __weak arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
 				       const struct btf_func_model *m, u32 flags,
-				       struct bpf_tramp_links *tlinks,
+				       struct bpf_tramp_progs *tprogs,
 				       void *orig_call)
 {
 	return -ENOTSUPP;
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index e78dadfc5829..17add0bdf323 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -80,7 +80,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
 	const struct btf_type *func_proto;
 	struct bpf_dummy_ops_test_args *args;
-	struct bpf_tramp_links *tlinks;
+	struct bpf_tramp_progs *tprogs;
 	struct bpf_tramp_link *link = NULL;
 	void *image = NULL;
 	unsigned int op_idx;
@@ -95,8 +95,8 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	if (IS_ERR(args))
 		return PTR_ERR(args);
 
-	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
-	if (!tlinks) {
+	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
+	if (!tprogs) {
 		err = -ENOMEM;
 		goto out;
 	}
@@ -118,7 +118,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
 		      &bpf_struct_ops_link_lops, prog);
 
 	op_idx = prog->expected_attach_type;
-	err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+	err = bpf_struct_ops_prepare_trampoline(tprogs, link,
 						&st_ops->func_models[op_idx],
 						image, image + PAGE_SIZE);
 	if (err < 0)
@@ -138,7 +138,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	bpf_jit_free_exec(image);
 	if (link)
 		bpf_link_put(&link->link);
-	kfree(tlinks);
+	kfree(tprogs);
 	return err;
 }

From patchwork Mon Aug 8 14:06:12 2022
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 12938825
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 03/17] bpf: Store trampoline progs in arrays
Date: Mon, 8 Aug 2022 16:06:12 +0200
Message-Id: <20220808140626.422731-4-jolsa@kernel.org>
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>

Store trampoline programs in arrays instead of in link-based lists.
This way the same program coming from one link can be shared across
multiple trampolines.

Replace the list_head links arrays with an array of bpf_prog_array
objects that now stores all trampoline programs.
We already have bpf_trampoline_get_progs returning a bpf_tramp_progs
object, so this patch does the rest:

  - storing trampoline programs of a given type in bpf_prog_array objects

  - using the bpf_tramp_prog object as the program reference in the
    link/unlink functions:

      int bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr);
      int bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr);

  - changing all the callers of the above link/unlink functions to work
    with the new interface

  - removing the bpf_tramp_link struct, because it is no longer needed

Signed-off-by: Jiri Olsa
---
 include/linux/bpf.h            |  25 ++++----
 kernel/bpf/bpf_struct_ops.c    |  16 +++--
 kernel/bpf/syscall.c           |  19 +++---
 kernel/bpf/trampoline.c        | 105 ++++++++++++++++++++-------------
 net/bpf/bpf_dummy_struct_ops.c |   8 +--
 5 files changed, 96 insertions(+), 77 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 80b2c17da64d..0617982ca859 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -868,7 +868,7 @@ struct bpf_trampoline {
 	 */
 	struct bpf_prog *extension_prog;
 	/* list of BPF programs using this trampoline */
-	struct hlist_head progs_hlist[BPF_TRAMP_MAX];
+	struct bpf_prog_array *progs_array[BPF_TRAMP_MAX];
 	/* Number of attached programs. A counter per kind. */
 	int progs_cnt[BPF_TRAMP_MAX];
 	/* Executable image of trampoline */
@@ -912,9 +912,8 @@ static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func(
 }
 
 #ifdef CONFIG_BPF_JIT
-struct bpf_tramp_link;
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
+int bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr);
+int bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr);
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
 					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
@@ -963,12 +962,12 @@ int bpf_jit_charge_modmem(u32 size);
 void bpf_jit_uncharge_modmem(u32 size);
 bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
 #else
-static inline int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+static inline int bpf_trampoline_link_prog(struct bpf_tramp_prog *tp,
 					   struct bpf_trampoline *tr)
 {
 	return -ENOTSUPP;
 }
-static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp,
 					     struct bpf_trampoline *tr)
 {
 	return -ENOTSUPP;
@@ -1187,19 +1186,15 @@ struct bpf_link_ops {
 			    struct bpf_link_info *info);
 };
 
-struct bpf_tramp_link {
-	struct bpf_link link;
-	struct hlist_node tramp_hlist;
-	u64 cookie;
-};
-
 struct bpf_shim_tramp_link {
-	struct bpf_tramp_link link;
+	struct bpf_link link;
+	struct bpf_tramp_prog tp;
 	struct bpf_trampoline *trampoline;
 };
 
 struct bpf_tracing_link {
-	struct bpf_tramp_link link;
+	struct bpf_link link;
+	struct bpf_tramp_prog tp;
 	enum bpf_attach_type attach_type;
 	struct bpf_trampoline *trampoline;
 	struct bpf_prog *tgt_prog;
@@ -1243,7 +1238,7 @@ void bpf_struct_ops_put(const void *kdata);
 int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 				       void *value);
 int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
-				      struct bpf_tramp_link *link,
+				      struct bpf_prog *prog,
 				      const struct btf_func_model *model,
 				      void *image, void *image_end);
 static inline bool bpf_try_module_get(const void *data, struct module *owner)
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index d51dced406eb..910f1b7deb8f 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -322,9 +322,7 @@ static void bpf_struct_ops_link_release(struct bpf_link *link)
 
 static void bpf_struct_ops_link_dealloc(struct bpf_link *link)
 {
-	struct bpf_tramp_link *tlink = container_of(link, struct bpf_tramp_link, link);
-
-	kfree(tlink);
+	kfree(link);
 }
 
 const struct bpf_link_ops bpf_struct_ops_link_lops = {
@@ -333,13 +331,13 @@ const struct bpf_link_ops bpf_struct_ops_link_lops = {
 };
 
 int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
-				      struct bpf_tramp_link *link,
+				      struct bpf_prog *prog,
 				      const struct btf_func_model *model,
 				      void *image, void *image_end)
 {
 	u32 flags;
 
-	tprogs[BPF_TRAMP_FENTRY].progs[0].prog = link->link.prog;
+	tprogs[BPF_TRAMP_FENTRY].progs[0].prog = prog;
 	tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
 	/* BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
 	 * and it must be used alone.
@@ -405,7 +403,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	for_each_member(i, t, member) {
 		const struct btf_type *mtype, *ptype;
 		struct bpf_prog *prog;
-		struct bpf_tramp_link *link;
+		struct bpf_link *link;
 		u32 moff;
 
 		moff = __btf_member_bit_offset(t, member) / 8;
@@ -474,11 +472,11 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			err = -ENOMEM;
 			goto reset_unlock;
 		}
-		bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
+		bpf_link_init(link, BPF_LINK_TYPE_STRUCT_OPS,
 			      &bpf_struct_ops_link_lops, prog);
-		st_map->links[i] = &link->link;
+		st_map->links[i] = link;
 
-		err = bpf_struct_ops_prepare_trampoline(tprogs, link,
+		err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
 							&st_ops->func_models[i],
 							image, image_end);
 		if (err < 0)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 83c7136c5788..42272909ac08 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2887,9 +2887,9 @@ EXPORT_SYMBOL(bpf_link_get_from_fd);
 static void bpf_tracing_link_release(struct bpf_link *link)
 {
 	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+		container_of(link, struct bpf_tracing_link, link);
 
-	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link,
+	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->tp,
 						tr_link->trampoline));
 
 	bpf_trampoline_put(tr_link->trampoline);
@@ -2902,7 +2902,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
 static void bpf_tracing_link_dealloc(struct bpf_link *link)
 {
 	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+		container_of(link, struct bpf_tracing_link, link);
 
 	kfree(tr_link);
 }
@@ -2911,7 +2911,7 @@ static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
 					 struct seq_file *seq)
 {
 	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+		container_of(link, struct bpf_tracing_link, link);
 
 	seq_printf(seq,
 		   "attach_type:\t%d\n",
@@ -2922,7 +2922,7 @@ static int bpf_tracing_link_fill_link_info(const struct bpf_link *link,
 					   struct bpf_link_info *info)
 {
 	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+		container_of(link, struct bpf_tracing_link, link);
 
 	info->tracing.attach_type = tr_link->attach_type;
 	bpf_trampoline_unpack_key(tr_link->trampoline->key,
@@ -3004,10 +3004,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 		err = -ENOMEM;
 		goto out_put_prog;
 	}
-	bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING,
+	bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING,
 		      &bpf_tracing_link_lops, prog);
 	link->attach_type = prog->expected_attach_type;
-	link->link.cookie = bpf_cookie;
+	link->tp.cookie = bpf_cookie;
+	link->tp.prog = prog;
 
 	mutex_lock(&prog->aux->dst_mutex);
 
@@ -3075,11 +3076,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 		tgt_prog = prog->aux->dst_prog;
 	}
 
-	err = bpf_link_prime(&link->link.link, &link_primer);
+	err = bpf_link_prime(&link->link, &link_primer);
 	if (err)
 		goto out_unlock;
 
-	err = bpf_trampoline_link_prog(&link->link, tr);
+	err = bpf_trampoline_link_prog(&link->tp, tr);
 	if (err) {
 		bpf_link_cleanup(&link_primer);
 		link = NULL;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index f41fb1af9f0e..854d0a3b9b31 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -152,7 +152,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
 	struct hlist_head *head;
-	int i;
 
 	mutex_lock(&trampoline_mutex);
 	head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
@@ -181,8 +180,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	hlist_add_head(&tr->hlist, head);
 	refcount_set(&tr->refcnt, 1);
 	mutex_init(&tr->mutex);
-	for (i = 0; i < BPF_TRAMP_MAX; i++)
-		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 out:
 	mutex_unlock(&trampoline_mutex);
 	return tr;
@@ -272,9 +269,11 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 static struct bpf_tramp_progs *
 bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
+	const struct bpf_prog_array_item *item;
+	struct bpf_prog_array *prog_array;
 	struct bpf_tramp_progs *tprogs;
-	struct bpf_tramp_link *link;
 	struct bpf_tramp_prog *tp;
+	struct bpf_prog *prog;
 	int kind;
 
 	*total = 0;
@@ -287,13 +286,16 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_a
 		*total += tr->progs_cnt[kind];
 		tp = &tprogs[kind].progs[0];
 
-		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			struct bpf_prog *prog = link->link.prog;
+		prog_array = tr->progs_array[kind];
+		if (!prog_array)
+			continue;
 
+		item = &prog_array->items[0];
+		while ((prog = READ_ONCE(item->prog))) {
 			*ip_arg |= prog->call_get_func_ip;
 			tp->prog = prog;
-			tp->cookie = link->cookie;
-			tp++;
+			tp->cookie = item->bpf_cookie;
+			tp++; item++;
 		}
 	}
 	return tprogs;
@@ -545,14 +547,16 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
 	}
 }
 
-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
+static int __bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr)
 {
+	struct bpf_prog_array *old_array, *new_array;
+	const struct bpf_prog_array_item *item;
 	enum bpf_tramp_prog_type kind;
-	struct bpf_tramp_link *link_exiting;
+	struct bpf_prog *prog;
 	int err = 0;
 	int cnt = 0, i;
 
-	kind = bpf_attach_type_to_tramp(link->link.prog);
+	kind = bpf_attach_type_to_tramp(tp->prog);
 	if (tr->extension_prog)
 		/* cannot attach fentry/fexit if extension prog is attached.
 		 * cannot overwrite extension prog either.
@@ -566,48 +570,57 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
 		/* Cannot attach extension if fentry/fexit are in use. */
 		if (cnt)
 			return -EBUSY;
-		tr->extension_prog = link->link.prog;
+		tr->extension_prog = tp->prog;
 		return bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, NULL,
-					  link->link.prog->bpf_func);
+					  tp->prog->bpf_func);
 	}
 	if (cnt >= BPF_MAX_TRAMP_LINKS)
 		return -E2BIG;
-	if (!hlist_unhashed(&link->tramp_hlist))
-		/* prog already linked */
-		return -EBUSY;
-	hlist_for_each_entry(link_exiting, &tr->progs_hlist[kind], tramp_hlist) {
-		if (link_exiting->link.prog != link->link.prog)
-			continue;
-		/* prog already linked */
-		return -EBUSY;
+
+	old_array = tr->progs_array[kind];
+	if (old_array) {
+		item = &old_array->items[0];
+
+		while ((prog = READ_ONCE(item->prog))) {
+			/* prog already linked */
+			if (prog == tp->prog)
+				return -EBUSY;
+			item++;
+		}
 	}
 
-	hlist_add_head(&link->tramp_hlist, &tr->progs_hlist[kind]);
+	err = bpf_prog_array_copy(old_array, NULL, tp->prog, tp->cookie, &new_array);
+	if (err < 0)
+		return -ENOMEM;
+
+	tr->progs_array[kind] = new_array;
 	tr->progs_cnt[kind]++;
 	err = bpf_trampoline_update(tr, true /* lock_direct_mutex */);
 	if (err) {
-		hlist_del_init(&link->tramp_hlist);
+		tr->progs_array[kind] = old_array;
 		tr->progs_cnt[kind]--;
+		bpf_prog_array_free(new_array);
+	} else {
+		bpf_prog_array_free(old_array);
 	}
 	return err;
 }
 
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
+int bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr)
 {
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_link_prog(link, tr);
+	err = __bpf_trampoline_link_prog(tp, tr);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
 
-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
+static int __bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr)
 {
+	struct bpf_prog_array *old_array, *new_array;
 	enum bpf_tramp_prog_type kind;
 	int err;
 
-	kind = bpf_attach_type_to_tramp(link->link.prog);
+	kind = bpf_attach_type_to_tramp(tp->prog);
 	if (kind == BPF_TRAMP_REPLACE) {
WARN_ON_ONCE(!tr->extension_prog); err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, @@ -615,18 +628,26 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_ tr->extension_prog = NULL; return err; } - hlist_del_init(&link->tramp_hlist); + + old_array = tr->progs_array[kind]; + + err = bpf_prog_array_copy(old_array, tp->prog, NULL, 0, &new_array); + if (err < 0) + return err; + tr->progs_cnt[kind]--; + tr->progs_array[kind] = new_array; + bpf_prog_array_free(old_array); return bpf_trampoline_update(tr, true /* lock_direct_mutex */); } /* bpf_trampoline_unlink_prog() should never fail. */ -int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr) +int bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr) { int err; mutex_lock(&tr->mutex); - err = __bpf_trampoline_unlink_prog(link, tr); + err = __bpf_trampoline_unlink_prog(tp, tr); mutex_unlock(&tr->mutex); return err; } @@ -635,20 +656,20 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampolin static void bpf_shim_tramp_link_release(struct bpf_link *link) { struct bpf_shim_tramp_link *shim_link = - container_of(link, struct bpf_shim_tramp_link, link.link); + container_of(link, struct bpf_shim_tramp_link, link); /* paired with 'shim_link->trampoline = tr' in bpf_trampoline_link_cgroup_shim */ if (!shim_link->trampoline) return; - WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline)); + WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->tp, shim_link->trampoline)); bpf_trampoline_put(shim_link->trampoline); } static void bpf_shim_tramp_link_dealloc(struct bpf_link *link) { struct bpf_shim_tramp_link *shim_link = - container_of(link, struct bpf_shim_tramp_link, link.link); + container_of(link, struct bpf_shim_tramp_link, link); kfree(shim_link); } @@ -686,9 +707,10 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog p->type = BPF_PROG_TYPE_LSM; 
p->expected_attach_type = BPF_LSM_MAC; bpf_prog_inc(p); - bpf_link_init(&shim_link->link.link, BPF_LINK_TYPE_UNSPEC, + bpf_link_init(&shim_link->link, BPF_LINK_TYPE_UNSPEC, &bpf_shim_tramp_link_lops, p); bpf_cgroup_atype_get(p->aux->attach_btf_id, cgroup_atype); + shim_link->tp.prog = p; return shim_link; } @@ -722,7 +744,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog, shim_link = tr->shim_link; if (shim_link) { /* Reusing existing shim attached by the other program. */ - bpf_link_inc(&shim_link->link.link); + bpf_link_inc(&shim_link->link); mutex_unlock(&tr->mutex); bpf_trampoline_put(tr); /* bpf_trampoline_get above */ @@ -737,7 +759,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog, goto err; } - err = __bpf_trampoline_link_prog(&shim_link->link, tr); + err = __bpf_trampoline_link_prog(&shim_link->tp, tr); if (err) goto err; @@ -752,7 +774,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog, mutex_unlock(&tr->mutex); if (shim_link) - bpf_link_put(&shim_link->link.link); + bpf_link_put(&shim_link->link); /* have to release tr while _not_ holding its mutex */ bpf_trampoline_put(tr); /* bpf_trampoline_get above */ @@ -780,7 +802,7 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog) mutex_unlock(&tr->mutex); if (shim_link) - bpf_link_put(&shim_link->link.link); + bpf_link_put(&shim_link->link); bpf_trampoline_put(tr); /* bpf_trampoline_lookup above */ } @@ -817,9 +839,12 @@ void bpf_trampoline_put(struct bpf_trampoline *tr) goto out; WARN_ON_ONCE(mutex_is_locked(&tr->mutex)); - for (i = 0; i < BPF_TRAMP_MAX; i++) - if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[i]))) + for (i = 0; i < BPF_TRAMP_MAX; i++) { + if (!tr->progs_array[i]) + continue; + if (WARN_ON_ONCE(!bpf_prog_array_is_empty(tr->progs_array[i]))) goto out; + } /* This code will be executed even when the last bpf_tramp_image * is alive. 
All progs are detached from the trampoline and the diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c index 17add0bdf323..5a771cc74edf 100644 --- a/net/bpf/bpf_dummy_struct_ops.c +++ b/net/bpf/bpf_dummy_struct_ops.c @@ -81,7 +81,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr, const struct btf_type *func_proto; struct bpf_dummy_ops_test_args *args; struct bpf_tramp_progs *tprogs; - struct bpf_tramp_link *link = NULL; + struct bpf_link *link = NULL; void *image = NULL; unsigned int op_idx; int prog_ret; @@ -115,10 +115,10 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr, } /* prog doesn't take the ownership of the reference from caller */ bpf_prog_inc(prog); - bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog); + bpf_link_init(link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog); op_idx = prog->expected_attach_type; - err = bpf_struct_ops_prepare_trampoline(tprogs, link, + err = bpf_struct_ops_prepare_trampoline(tprogs, prog, &st_ops->func_models[op_idx], image, image + PAGE_SIZE); if (err < 0) @@ -137,7 +137,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr, kfree(args); bpf_jit_free_exec(image); if (link) - bpf_link_put(&link->link); + bpf_link_put(link); kfree(tprogs); return err; } From patchwork Mon Aug 8 14:06:13 2022 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938826 From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 04/17] bpf: Add multi tracing attach types Date: Mon, 8 Aug 2022 16:06:13 +0200 Message-Id: <20220808140626.422731-5-jolsa@kernel.org> In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> X-Patchwork-State:
RFC Adding new attach type to identify multi tracing attachment: BPF_TRACE_FENTRY_MULTI BPF_TRACE_FEXIT_MULTI Programs with such attach type will use specific link attachment interface coming in following changes. This was suggested by Andrii some (long) time ago and turned out to be easier than having special program flag for that. Suggested-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 5 +++++ include/uapi/linux/bpf.h | 2 ++ kernel/bpf/syscall.c | 35 ++++++++++++++++++++++++++++++---- kernel/bpf/trampoline.c | 3 +++ kernel/bpf/verifier.c | 8 +++++++- net/bpf/test_run.c | 2 ++ tools/include/uapi/linux/bpf.h | 2 ++ 7 files changed, 52 insertions(+), 5 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 0617982ca859..32168ea92551 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1256,6 +1256,11 @@ static inline void bpf_module_put(const void *data, struct module *owner) module_put(owner); } +static inline bool is_tracing_multi(enum bpf_attach_type type) +{ + return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI; +} + #ifdef CONFIG_NET /* Define it here to avoid the use of forward declaration */ struct bpf_dummy_ops_state { diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 7bf9ba1329be..fb6bc2c5e9e8 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -999,6 +999,8 @@ enum bpf_attach_type { BPF_PERF_EVENT, BPF_TRACE_KPROBE_MULTI, BPF_LSM_CGROUP, + BPF_TRACE_FENTRY_MULTI, + BPF_TRACE_FEXIT_MULTI, __MAX_BPF_ATTACH_TYPE }; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 42272909ac08..2e4765c7e6d4 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -35,6 +35,7 @@ #include #include #include +#include #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \ (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \ @@ -2304,7 +2305,8 @@ static int bpf_prog_load_check_attach(enum bpf_prog_type prog_type, enum 
bpf_attach_type expected_attach_type, struct btf *attach_btf, u32 btf_id, - struct bpf_prog *dst_prog) + struct bpf_prog *dst_prog, + bool multi_func) { if (btf_id) { if (btf_id > BTF_MAX_TYPE) @@ -2324,6 +2326,14 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type, } } + if (multi_func) { + if (prog_type != BPF_PROG_TYPE_TRACING) + return -EINVAL; + if (!attach_btf || btf_id) + return -EINVAL; + return 0; + } + if (attach_btf && (!btf_id || dst_prog)) return -EINVAL; @@ -2447,6 +2457,16 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) } } +#define DEFINE_BPF_MULTI_FUNC(args...) \ + extern int bpf_multi_func(args); \ + int __init bpf_multi_func(args) { return 0; } + +DEFINE_BPF_MULTI_FUNC(unsigned long a1, unsigned long a2, + unsigned long a3, unsigned long a4, + unsigned long a5, unsigned long a6) + +BTF_ID_LIST_SINGLE(bpf_multi_func_btf_id, func, bpf_multi_func) + /* last field in 'union bpf_attr' used by this command */ #define BPF_PROG_LOAD_LAST_FIELD core_relo_rec_size @@ -2457,6 +2477,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) struct btf *attach_btf = NULL; int err; char license[128]; + bool multi_func; bool is_gpl; if (CHECK_ATTR(BPF_PROG_LOAD)) @@ -2498,6 +2519,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) if (is_perfmon_prog_type(type) && !perfmon_capable()) return -EPERM; + multi_func = is_tracing_multi(attr->expected_attach_type); + /* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog * or btf, we need to check which one it is */ @@ -2516,7 +2539,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) return -ENOTSUPP; } } - } else if (attr->attach_btf_id) { + } else if (attr->attach_btf_id || multi_func) { /* fall back to vmlinux BTF, if BTF type ID is specified */ attach_btf = bpf_get_btf_vmlinux(); if (IS_ERR(attach_btf)) @@ -2529,7 +2552,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) bpf_prog_load_fixup_attach_type(attr); if 
(bpf_prog_load_check_attach(type, attr->expected_attach_type, attach_btf, attr->attach_btf_id, - dst_prog)) { + dst_prog, multi_func)) { if (dst_prog) bpf_prog_put(dst_prog); if (attach_btf) @@ -2549,7 +2572,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) prog->expected_attach_type = attr->expected_attach_type; prog->aux->attach_btf = attach_btf; - prog->aux->attach_btf_id = attr->attach_btf_id; + prog->aux->attach_btf_id = multi_func ? bpf_multi_func_btf_id[0] : attr->attach_btf_id; prog->aux->dst_prog = dst_prog; prog->aux->offload_requested = !!attr->prog_ifindex; prog->aux->sleepable = attr->prog_flags & BPF_F_SLEEPABLE; @@ -2955,6 +2978,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, case BPF_PROG_TYPE_TRACING: if (prog->expected_attach_type != BPF_TRACE_FENTRY && prog->expected_attach_type != BPF_TRACE_FEXIT && + prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI && + prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI && prog->expected_attach_type != BPF_MODIFY_RETURN) { err = -EINVAL; goto out_put_prog; @@ -3429,6 +3454,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type) case BPF_TRACE_RAW_TP: case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: case BPF_MODIFY_RETURN: return BPF_PROG_TYPE_TRACING; case BPF_LSM_MAC: diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 854d0a3b9b31..56899d63c08c 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -112,6 +112,7 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog) return (ptype == BPF_PROG_TYPE_TRACING && (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT || + eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI || eatype == BPF_MODIFY_RETURN)) || (ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC); } @@ -529,10 +530,12 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog) { switch (prog->expected_attach_type) { case 
BPF_TRACE_FENTRY: + case BPF_TRACE_FENTRY_MULTI: return BPF_TRAMP_FENTRY; case BPF_MODIFY_RETURN: return BPF_TRAMP_MODIFY_RETURN; case BPF_TRACE_FEXIT: + case BPF_TRACE_FEXIT_MULTI: return BPF_TRAMP_FEXIT; case BPF_LSM_MAC: if (!prog->aux->attach_func_proto->type) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 096fdac70165..5bc2e4183b58 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -10526,6 +10526,8 @@ static int check_return_code(struct bpf_verifier_env *env) switch (env->prog->expected_attach_type) { case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: range = tnum_const(0); break; case BPF_TRACE_RAW_TP: @@ -14289,6 +14291,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env) if (prog_type == BPF_PROG_TYPE_TRACING && insn->imm == BPF_FUNC_get_func_ret) { if (eatype == BPF_TRACE_FEXIT || + eatype == BPF_TRACE_FEXIT_MULTI || eatype == BPF_MODIFY_RETURN) { /* Load nr_args from ctx - 8 */ insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8); @@ -14937,6 +14940,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, case BPF_LSM_CGROUP: case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: if (!btf_type_is_func(t)) { bpf_log(log, "attach_btf_id %u is not a function\n", btf_id); @@ -15093,7 +15098,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) if (!bpf_iter_prog_supported(prog)) return -EINVAL; return 0; - } + } else if (is_tracing_multi(prog->expected_attach_type)) + return prog->type == BPF_PROG_TYPE_TRACING ? 
0 : -EINVAL; if (prog->type == BPF_PROG_TYPE_LSM) { ret = bpf_lsm_verify_prog(&env->log, prog); diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index cbc9cd5058cb..2cb78d4b0d32 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -760,6 +760,8 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog, switch (prog->expected_attach_type) { case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: if (bpf_fentry_test1(1) != 2 || bpf_fentry_test2(2, 3) != 5 || bpf_fentry_test3(4, 5, 6) != 15 || diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 59a217ca2dfd..94d623affd5f 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -999,6 +999,8 @@ enum bpf_attach_type { BPF_PERF_EVENT, BPF_TRACE_KPROBE_MULTI, BPF_LSM_CGROUP, + BPF_TRACE_FENTRY_MULTI, + BPF_TRACE_FEXIT_MULTI, __MAX_BPF_ATTACH_TYPE }; From patchwork Mon Aug 8 14:06:14 2022 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938827 From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 05/17] bpf: Add bpf_tramp_id object Date: Mon, 8 Aug 2022 16:06:14 +0200 Message-Id: <20220808140626.422731-6-jolsa@kernel.org> In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> X-Patchwork-State: RFC Adding a bpf_tramp_id object that allows storing multiple BTF function ids together with their resolved addresses. It will be used in following changes to identify and attach multiple functions to trampolines. The bpf_tramp_id object will be shared between trampoline and link in following changes, so it keeps a refcount for that.
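The bpf_tramp_id described in the commit message can be sketched as a small userspace model. This is a hypothetical simplification, not the kernel code: the `tramp_id_*` names are made up for illustration, and the kernel uses `refcount_t`, `kcalloc()` and `kfree()` where the model uses a plain `int` and libc allocation.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Userspace model of bpf_tramp_id: a fixed-capacity set of BTF
 * function ids plus their resolved addresses, shared by refcount. */
struct tramp_id {
	uint32_t max;   /* capacity */
	uint32_t cnt;   /* ids currently stored */
	uint32_t *id;   /* BTF function ids */
	void **addr;    /* resolved addresses */
	int refcnt;     /* kernel: refcount_t */
};

struct tramp_id *tramp_id_alloc(uint32_t max)
{
	struct tramp_id *tid = calloc(1, sizeof(*tid));

	if (!tid)
		return NULL;
	tid->id = calloc(max, sizeof(uint32_t));
	tid->addr = calloc(max, sizeof(void *));
	if (!tid->id || !tid->addr) {
		free(tid->id);
		free(tid->addr);
		free(tid);
		return NULL;
	}
	tid->max = max;
	tid->refcnt = 1;
	return tid;
}

struct tramp_id *tramp_id_get(struct tramp_id *tid)
{
	tid->refcnt++;
	return tid;
}

void tramp_id_put(struct tramp_id *tid)
{
	if (!tid || --tid->refcnt > 0)
		return;         /* last reference frees everything */
	free(tid->addr);
	free(tid->id);
	free(tid);
}
```

The get/put pairing mirrors how the patch lets a trampoline and a link hold the same id object: whichever side drops the last reference releases the arrays.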
Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 12 ++++++++++++ kernel/bpf/trampoline.c | 38 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 50 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 32168ea92551..a5738d57f6bd 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -27,6 +27,7 @@ #include #include #include +#include struct bpf_verifier_env; struct bpf_verifier_log; @@ -846,6 +847,15 @@ struct bpf_tramp_image { }; }; +struct bpf_tramp_id { + u32 max; + u32 cnt; + u32 obj_id; + u32 *id; + void **addr; + refcount_t refcnt; +}; + struct bpf_shim_tramp_link; struct bpf_trampoline { @@ -917,6 +927,8 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline struct bpf_trampoline *bpf_trampoline_get(u64 key, struct bpf_attach_target_info *tgt_info); void bpf_trampoline_put(struct bpf_trampoline *tr); +struct bpf_tramp_id *bpf_tramp_id_alloc(u32 cnt); +void bpf_tramp_id_put(struct bpf_tramp_id *id); int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs); #define BPF_DISPATCHER_INIT(_name) { \ .mutex = __MUTEX_INITIALIZER(_name.mutex), \ diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 56899d63c08c..c0983ff5aa3a 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -149,6 +149,44 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym) PAGE_SIZE, true, ksym->name); } +struct bpf_tramp_id *bpf_tramp_id_alloc(u32 max) +{ + struct bpf_tramp_id *id; + + id = kzalloc(sizeof(*id), GFP_KERNEL); + if (id) { + id->id = kcalloc(max, sizeof(u32), GFP_KERNEL); + id->addr = kcalloc(max, sizeof(*id->addr), GFP_KERNEL); + if (!id->id || !id->addr) { + kfree(id->id); + kfree(id->addr); + kfree(id); + return NULL; + } + id->max = max; + refcount_set(&id->refcnt, 1); + } + return id; +} + +__maybe_unused +static struct bpf_tramp_id *bpf_tramp_id_get(struct bpf_tramp_id *id) +{ + refcount_inc(&id->refcnt); + return id; +} + +void bpf_tramp_id_put(struct bpf_tramp_id *id) 
+{ + if (!id) + return; + if (!refcount_dec_and_test(&id->refcnt)) + return; + kfree(id->addr); + kfree(id->id); + kfree(id); +} + static struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { struct bpf_trampoline *tr; From patchwork Mon Aug 8 14:06:15 2022 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938828 From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 06/17] bpf: Pass image struct to reg/unreg/modify fentry functions Date: Mon, 8 Aug 2022 16:06:15 +0200 Message-Id: <20220808140626.422731-7-jolsa@kernel.org> In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> X-Patchwork-State: RFC Passing the image struct to the reg/unreg/modify fentry functions in preparation for the following change, where we need to use the whole image in the update logic.
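A minimal sketch of this refactor, using hypothetical stand-in types rather than the kernel structs: the helper now receives the image object and derives the old/new addresses itself, instead of the caller passing raw pointers (the real `modify_fentry()` also takes `lock_direct_mutex` and patches the call site via `bpf_arch_text_poke()`; updating `cur_image` here is a simplification of what `bpf_trampoline_update()` does).

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for bpf_tramp_image / bpf_trampoline */
struct tramp_image { void *image; };
struct trampoline {
	struct tramp_image *cur_image;
	void *func_addr;
};

/* Before the patch the caller passed old/new addresses; after it the
 * helper takes the image object and derives the addresses itself, so
 * later changes can use other image fields too. */
int modify_fentry(struct trampoline *tr, struct tramp_image *im)
{
	void *old_addr = tr->cur_image->image;  /* derived, not passed in */
	void *new_addr = im->image;

	if (old_addr == new_addr)
		return -1;      /* nothing to change; kernel would text-poke here */
	tr->cur_image = im;     /* trampoline now runs from the new image */
	return 0;
}
```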
Signed-off-by: Jiri Olsa --- kernel/bpf/trampoline.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index c0983ff5aa3a..d28070247fa3 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -244,8 +244,9 @@ static void bpf_trampoline_module_put(struct bpf_trampoline *tr) tr->mod = NULL; } -static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr) +static int unregister_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im) { + void *old_addr = im->image; void *ip = tr->func.addr; int ret; @@ -259,9 +260,11 @@ static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr) return ret; } -static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_addr, +static int modify_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im, bool lock_direct_mutex) { + void *old_addr = tr->cur_image->image; + void *new_addr = im->image; void *ip = tr->func.addr; int ret; @@ -277,8 +280,9 @@ static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_ad } /* first time registering */ -static int register_fentry(struct bpf_trampoline *tr, void *new_addr) +static int register_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im) { + void *new_addr = im->image; void *ip = tr->func.addr; unsigned long faddr; int ret; @@ -486,7 +490,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut return PTR_ERR(tprogs); if (total == 0) { - err = unregister_fentry(tr, tr->cur_image->image); + err = unregister_fentry(tr, tr->cur_image); bpf_tramp_image_put(tr->cur_image); tr->cur_image = NULL; tr->selector = 0; @@ -532,10 +536,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut WARN_ON(!tr->cur_image && tr->selector); if (tr->cur_image) /* progs already running at this address */ - err = modify_fentry(tr, tr->cur_image->image, im->image, lock_direct_mutex); + 
err = modify_fentry(tr, im, lock_direct_mutex); else /* first time registering */ - err = register_fentry(tr, im->image); + err = register_fentry(tr, im); #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS if (err == -EAGAIN) { From patchwork Mon Aug 8 14:06:16 2022 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938829 X-Patchwork-Delegate: bpf@iogearbox.net From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 07/17] bpf: Add support to postpone trampoline update Date: Mon, 8 Aug 2022 16:06:16 +0200 Message-Id: <20220808140626.422731-8-jolsa@kernel.org> In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> X-Patchwork-State: RFC Adding support to postpone the trampoline update and record it to an update list. If an update list is provided, the reg/unreg/modify functions only add the trampoline to that list and store the requested update information/data in the trampoline. This will be used in following changes, where the actual trampoline update needs to happen at the end of the attachment.
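The record-then-commit pattern this patch introduces (if an update list is passed, only queue the requested action on the object; apply all queued actions later in one pass) can be sketched in standalone user-space C. All names and types below (`node`, `tramp`, `update`, `commit`) are simplified stand-ins for illustration, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal intrusive singly linked list, standing in for list_head. */
struct node { struct node *next; };

enum action { ACT_REG, ACT_UNREG, ACT_MODIFY };

struct tramp {
	enum action pending;	/* requested update, recorded for later */
	int applied;		/* set once the update really happens */
	struct node link;
};

/* If 'upd' is given, only record the request; otherwise apply now. */
static void update(struct tramp *t, enum action a, struct node **upd)
{
	if (upd) {
		t->pending = a;
		t->link.next = *upd;	/* push onto the update list */
		*upd = &t->link;
		return;
	}
	t->applied = 1;			/* immediate path */
}

/* Later: walk the update list and commit every postponed request. */
static void commit(struct node *upd)
{
	struct node *n;

	for (n = upd; n; n = n->next) {
		/* container_of-style recovery of the enclosing object */
		struct tramp *t = (struct tramp *)
			((char *)n - offsetof(struct tramp, link));
		t->applied = 1;
	}
}
```

Passing `NULL` for the list keeps the old immediate behavior, which is why the kernel patch threads a `struct list_head *upd` argument through the existing call chain with `NULL` at the unchanged call sites.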
Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 31 ++++++++++++----- kernel/bpf/trampoline.c | 76 +++++++++++++++++++++++++++++------------ 2 files changed, 78 insertions(+), 29 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index a5738d57f6bd..a23ff5b8d14c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -768,6 +768,28 @@ struct btf_func_model { */ #define BPF_MAX_TRAMP_LINKS 38 +enum bpf_tramp_update_action { + BPF_TRAMP_UPDATE_REG, + BPF_TRAMP_UPDATE_UNREG, + BPF_TRAMP_UPDATE_MODIFY, +}; + +enum bpf_tramp_prog_type { + BPF_TRAMP_FENTRY, + BPF_TRAMP_FEXIT, + BPF_TRAMP_MODIFY_RETURN, + BPF_TRAMP_MAX, + BPF_TRAMP_REPLACE, /* more than MAX */ +}; + +struct bpf_tramp_update { + enum bpf_tramp_update_action action; + struct bpf_tramp_image *im; + enum bpf_tramp_prog_type kind; + struct bpf_prog_array *old_array; + struct list_head list; +}; + struct bpf_tramp_prog { struct bpf_prog *prog; u64 cookie; @@ -827,14 +849,6 @@ struct bpf_ksym { bool prog; }; -enum bpf_tramp_prog_type { - BPF_TRAMP_FENTRY, - BPF_TRAMP_FEXIT, - BPF_TRAMP_MODIFY_RETURN, - BPF_TRAMP_MAX, - BPF_TRAMP_REPLACE, /* more than MAX */ -}; - struct bpf_tramp_image { void *image; struct bpf_ksym ksym; @@ -886,6 +900,7 @@ struct bpf_trampoline { u64 selector; struct module *mod; struct bpf_shim_tramp_link *shim_link; + struct bpf_tramp_update update; }; struct bpf_attach_target_info { diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index d28070247fa3..e926692ded85 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -31,7 +31,8 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE]; static DEFINE_MUTEX(trampoline_mutex); #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS -static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex); +static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex, + struct list_head *upd); static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, 
enum ftrace_ops_cmd cmd) { @@ -87,13 +88,13 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, enum ftrace_ops_cmd if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) && !(tr->flags & BPF_TRAMP_F_ORIG_STACK)) - ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */); + ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */, NULL); break; case FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER: tr->flags &= ~BPF_TRAMP_F_SHARE_IPMODIFY; if (tr->flags & BPF_TRAMP_F_ORIG_STACK) - ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */); + ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */, NULL); break; default: ret = -EINVAL; @@ -244,12 +245,20 @@ static void bpf_trampoline_module_put(struct bpf_trampoline *tr) tr->mod = NULL; } -static int unregister_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im) +static int unregister_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im, + struct list_head *upd) { void *old_addr = im->image; void *ip = tr->func.addr; int ret; + if (upd) { + tr->update.action = BPF_TRAMP_UPDATE_UNREG; + tr->update.im = NULL; + list_add_tail(&tr->update.list, upd); + return 0; + } + if (tr->func.ftrace_managed) ret = unregister_ftrace_direct_multi(tr->fops, (long)old_addr); else @@ -257,17 +266,24 @@ static int unregister_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image * if (!ret) bpf_trampoline_module_put(tr); + return ret; } static int modify_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im, - bool lock_direct_mutex) + bool lock_direct_mutex, struct list_head *upd) { void *old_addr = tr->cur_image->image; void *new_addr = im->image; void *ip = tr->func.addr; int ret; + if (upd) { + tr->update.action = BPF_TRAMP_UPDATE_MODIFY; + tr->update.im = im; + list_add_tail(&tr->update.list, upd); + return 0; + } if (tr->func.ftrace_managed) { if (lock_direct_mutex) ret = modify_ftrace_direct_multi(tr->fops, (long)new_addr); @@ -280,7 +296,8 @@ static int modify_fentry(struct bpf_trampoline 
*tr, struct bpf_tramp_image *im, } /* first time registering */ -static int register_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im) +static int register_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im, + struct list_head *upd) { void *new_addr = im->image; void *ip = tr->func.addr; @@ -294,6 +311,15 @@ static int register_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im tr->func.ftrace_managed = true; } + if (upd) { + if (ip && !tr->func.ftrace_managed) + return -ENOTSUPP; + tr->update.action = BPF_TRAMP_UPDATE_REG; + tr->update.im = im; + list_add_tail(&tr->update.list, upd); + return 0; + } + if (bpf_trampoline_module_get(tr)) return -ENOENT; @@ -477,7 +503,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx) return ERR_PTR(err); } -static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex) +static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex, + struct list_head *upd) { struct bpf_tramp_image *im; struct bpf_tramp_progs *tprogs; @@ -490,10 +517,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut return PTR_ERR(tprogs); if (total == 0) { - err = unregister_fentry(tr, tr->cur_image); - bpf_tramp_image_put(tr->cur_image); - tr->cur_image = NULL; - tr->selector = 0; + err = unregister_fentry(tr, tr->cur_image, upd); + if (!upd) { + bpf_tramp_image_put(tr->cur_image); + tr->cur_image = NULL; + tr->selector = 0; + } goto out; } @@ -536,10 +565,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut WARN_ON(!tr->cur_image && tr->selector); if (tr->cur_image) /* progs already running at this address */ - err = modify_fentry(tr, im, lock_direct_mutex); + err = modify_fentry(tr, im, lock_direct_mutex, upd); else /* first time registering */ - err = register_fentry(tr, im); + err = register_fentry(tr, im, upd); #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS if (err == -EAGAIN) { @@ -553,7 
+582,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut goto again; } #endif - if (err) + if (err || upd) goto out; if (tr->cur_image) @@ -592,7 +621,8 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog) } } -static int __bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr) +static int __bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr, + struct list_head *upd) { struct bpf_prog_array *old_array, *new_array; const struct bpf_prog_array_item *item; @@ -638,11 +668,14 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_tram return -ENOMEM; tr->progs_array[kind] = new_array; tr->progs_cnt[kind]++; - err = bpf_trampoline_update(tr, true /* lock_direct_mutex */); + err = bpf_trampoline_update(tr, true /* lock_direct_mutex */, upd); if (err) { tr->progs_array[kind] = old_array; tr->progs_cnt[kind]--; bpf_prog_array_free(new_array); + } else if (upd) { + tr->update.kind = kind; + tr->update.old_array = old_array; } else { bpf_prog_array_free(old_array); } @@ -654,12 +687,13 @@ int bpf_trampoline_link_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *t int err; mutex_lock(&tr->mutex); - err = __bpf_trampoline_link_prog(tp, tr); + err = __bpf_trampoline_link_prog(tp, tr, NULL); mutex_unlock(&tr->mutex); return err; } -static int __bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr) +static int __bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline *tr, + struct list_head *upd) { struct bpf_prog_array *old_array, *new_array; enum bpf_tramp_prog_type kind; @@ -683,7 +717,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_tr tr->progs_cnt[kind]--; tr->progs_array[kind] = new_array; bpf_prog_array_free(old_array); - return bpf_trampoline_update(tr, true /* lock_direct_mutex */); + return bpf_trampoline_update(tr, true /* lock_direct_mutex */, 
upd); } /* bpf_trampoline_unlink_prog() should never fail. */ @@ -692,7 +726,7 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_prog *tp, struct bpf_trampoline int err; mutex_lock(&tr->mutex); - err = __bpf_trampoline_unlink_prog(tp, tr); + err = __bpf_trampoline_unlink_prog(tp, tr, NULL); mutex_unlock(&tr->mutex); return err; } @@ -804,7 +838,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog, goto err; } - err = __bpf_trampoline_link_prog(&shim_link->tp, tr); + err = __bpf_trampoline_link_prog(&shim_link->tp, tr, NULL); if (err) goto err; From patchwork Mon Aug 8 14:06:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938830 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07AA5C00140 for ; Mon, 8 Aug 2022 14:08:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235514AbiHHOIE (ORCPT ); Mon, 8 Aug 2022 10:08:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40210 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237227AbiHHOIE (ORCPT ); Mon, 8 Aug 2022 10:08:04 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E4706E2E for ; Mon, 8 Aug 2022 07:08:02 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A3FD2B8049B for ; Mon, 8 Aug 2022 14:08:01 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 637B6C433D6; Mon, 8 Aug 2022 14:07:57 +0000 (UTC) 
From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 08/17] bpf: Factor bpf_trampoline_lookup function Date: Mon, 8 Aug 2022 16:06:17 +0200 Message-Id: <20220808140626.422731-9-jolsa@kernel.org> In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> X-Patchwork-State: RFC Factoring the bpf_trampoline_lookup function and adding two new functions, which will be needed in following changes: struct bpf_trampoline *__bpf_trampoline_lookup(u64 key) - returns the trampoline without locking trampoline_mutex static struct bpf_trampoline *bpf_trampoline_alloc(void) - allocates a trampoline object The bpf_trampoline_lookup logic stays the same.
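The lookup/alloc split above is a common refactoring: an unlocked lookup helper plus a pure allocation helper, recombined under the lock so the combined function's semantics are unchanged. A minimal user-space C sketch of the pattern; `obj`, `table` and the helpers are hypothetical stand-ins, and the lock and hash-collision handling of the real code are deliberately omitted:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy keyed, refcounted object; the "table" is a tiny direct-mapped array. */
struct obj { unsigned long key; int refcnt; };

#define TABLE_SIZE 8
static struct obj *table[TABLE_SIZE];

/* Unlocked lookup: in real code the caller must hold the table lock. */
static struct obj *__lookup(unsigned long key)
{
	struct obj *o = table[key % TABLE_SIZE];

	return (o && o->key == key) ? o : NULL;
}

/* Allocation only: no table insertion; refcount starts at 1. */
static struct obj *obj_alloc(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (o)
		o->refcnt = 1;
	return o;
}

/* Lookup-or-create keeps the original fused function's semantics. */
static struct obj *lookup(unsigned long key)
{
	struct obj *o;

	/* lock would be taken here */
	o = __lookup(key);
	if (o) {
		o->refcnt++;
		return o;
	}
	o = obj_alloc();
	if (!o)
		return NULL;
	o->key = key;
	table[key % TABLE_SIZE] = o;
	/* lock would be released here */
	return o;
}
```

The point of the split is that later patches can call `__lookup()` or the allocator on their own while already holding the mutex, without re-entering the lock.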
Signed-off-by: Jiri Olsa --- kernel/bpf/trampoline.c | 45 ++++++++++++++++++++++++++++++----------- 1 file changed, 33 insertions(+), 12 deletions(-) diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index e926692ded85..15801f6c5071 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -188,38 +188,59 @@ void bpf_tramp_id_put(struct bpf_tramp_id *id) kfree(id); } -static struct bpf_trampoline *bpf_trampoline_lookup(u64 key) +static struct bpf_trampoline *__bpf_trampoline_lookup(u64 key) { struct bpf_trampoline *tr; struct hlist_head *head; - mutex_lock(&trampoline_mutex); head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)]; hlist_for_each_entry(tr, head, hlist) { - if (tr->key == key) { - refcount_inc(&tr->refcnt); - goto out; - } + if (tr->key == key) + return tr; } + return NULL; +} + +static struct bpf_trampoline *bpf_trampoline_alloc(void) +{ + struct bpf_trampoline *tr; + tr = kzalloc(sizeof(*tr), GFP_KERNEL); if (!tr) - goto out; + return NULL; + + INIT_HLIST_NODE(&tr->hlist); + refcount_set(&tr->refcnt, 1); + mutex_init(&tr->mutex); #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS tr->fops = kzalloc(sizeof(struct ftrace_ops), GFP_KERNEL); if (!tr->fops) { kfree(tr); - tr = NULL; - goto out; + return NULL; } tr->fops->private = tr; tr->fops->ops_func = bpf_tramp_ftrace_ops_func; #endif + return tr; +} + +static struct bpf_trampoline *bpf_trampoline_lookup(u64 key) +{ + struct bpf_trampoline *tr; + struct hlist_head *head; + mutex_lock(&trampoline_mutex); + tr = __bpf_trampoline_lookup(key); + if (tr) { + refcount_inc(&tr->refcnt); + goto out; + } + tr = bpf_trampoline_alloc(); + if (!tr) + goto out; tr->key = key; - INIT_HLIST_NODE(&tr->hlist); + head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)]; hlist_add_head(&tr->hlist, head); - refcount_set(&tr->refcnt, 1); - mutex_init(&tr->mutex); out: mutex_unlock(&trampoline_mutex); return tr; From patchwork Mon Aug 8 14:06:18 2022 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938831 X-Patchwork-Delegate: bpf@iogearbox.net From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc:
bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 09/17] bpf: Factor bpf_trampoline_put function Date: Mon, 8 Aug 2022 16:06:18 +0200 Message-Id: <20220808140626.422731-10-jolsa@kernel.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Factoring bpf_trampoline_put function and adding new __bpf_trampoline_put function without locking trampoline_mutex. It will be needed in following changes. Signed-off-by: Jiri Olsa --- kernel/bpf/trampoline.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 15801f6c5071..b6b57aa09364 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -928,22 +928,19 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key, return tr; } -void bpf_trampoline_put(struct bpf_trampoline *tr) +static void __bpf_trampoline_put(struct bpf_trampoline *tr) { int i; - if (!tr) - return; - mutex_lock(&trampoline_mutex); if (!refcount_dec_and_test(&tr->refcnt)) - goto out; + return; WARN_ON_ONCE(mutex_is_locked(&tr->mutex)); for (i = 0; i < BPF_TRAMP_MAX; i++) { if (!tr->progs_array[i]) continue; if (WARN_ON_ONCE(!bpf_prog_array_is_empty(tr->progs_array[i]))) - goto out; + return; } /* This code will be executed even when the last bpf_tramp_image @@ -958,7 +955,14 @@ void bpf_trampoline_put(struct bpf_trampoline *tr) kfree(tr->fops); } kfree(tr); -out: +} + +void bpf_trampoline_put(struct bpf_trampoline *tr) +{ + if (!tr) + return; + mutex_lock(&trampoline_mutex); + __bpf_trampoline_put(tr); mutex_unlock(&trampoline_mutex); } From patchwork Mon Aug 8 14:06:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 
1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938832 X-Patchwork-Delegate: bpf@iogearbox.net From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin
KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 10/17] bpf: Add support to attach program to multiple trampolines Date: Mon, 8 Aug 2022 16:06:19 +0200 Message-Id: <20220808140626.422731-11-jolsa@kernel.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Adding support to attach a program to multiple trampolines with a new attach/detach interface: int bpf_trampoline_multi_attach(struct bpf_tramp_prog *tp, struct bpf_tramp_id *id) int bpf_trampoline_multi_detach(struct bpf_tramp_prog *tp, struct bpf_tramp_id *id) The program is passed as a bpf_tramp_prog object and the trampolines to attach it to are passed as a bpf_tramp_id object. The interface creates a new bpf_trampoline object, which is initialized as a 'multi' trampoline and stored separately from the standard trampolines. The following rules describe how standard and multi trampolines coexist: - a multi trampoline can attach on top of existing single trampolines, which creates 2 types of function IDs: 1) single-IDs - functions that are attached within existing single trampolines 2) multi-IDs - functions that were 'free' and are now taken by the new 'multi' trampoline - overlapping of 2 'multi' trampolines is allowed only if they are attached to the same IDs - any other overlapping of 2 'multi' trampolines is not allowed - a new 'single' trampoline cannot attach to existing multi-IDs
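The ID checks behind these rules (exact match allowed, partial overlap rejected) reduce to set comparisons over sorted BTF ID arrays, which is the same bsearch-based approach the patch's `id_match`/`id_cross` helpers take. A simplified user-space sketch; the signatures here are illustrative stand-ins, not the kernel helpers:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Three-way comparator for unsigned BTF IDs, usable with bsearch(). */
static int btf_id_cmp(const void *a, const void *b)
{
	const unsigned int *x = a, *y = b;

	if (*x == *y)
		return 0;
	return *x < *y ? -1 : 1;
}

/* Exact match: same length and same (sorted) ID contents. */
static int id_match(const unsigned int *a, size_t an,
		    const unsigned int *b, size_t bn)
{
	return an == bn && memcmp(a, b, an * sizeof(*a)) == 0;
}

/* Any shared ID at all: bsearch each element of a in sorted b. */
static int id_cross(const unsigned int *a, size_t an,
		    const unsigned int *b, size_t bn)
{
	size_t i;

	for (i = 0; i < an; i++)
		if (bsearch(&a[i], b, bn, sizeof(*b), btf_id_cmp))
			return 1;
	return 0;
}
```

With these two predicates, a new multi attachment is accepted against an existing one when `id_match` holds (identical sets) and rejected when only `id_cross` holds (partial overlap), matching the rules listed above.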
This is maybe better explained with the following example: - you want to attach program P to functions A,B,C,D,E,F via bpf_trampoline_multi_attach - D,E,F already have a standard trampoline attached - bpf_trampoline_multi_attach will create a new 'multi' trampoline which spans the A,B,C functions and attach program P to the single trampolines of D,E,F - the A,B,C functions are now 'not attachable' by any trampoline until the above 'multi' trampoline is released - the D,E,F functions are still attachable by any new trampoline If the bpf_tramp_id object contains an ID that already exists as a standard trampoline, the interface will attach the program to that existing trampoline. These limitations are there to avoid the extra complexity of splitting existing trampolines, which turned out to be a nightmare. However, the current code does not prevent such a feature improvement in the future. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 8 + kernel/bpf/trampoline.c | 393 +++++++++++++++++++++++++++++++++++++++- 2 files changed, 394 insertions(+), 7 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index a23ff5b8d14c..9842554e4fa4 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -901,6 +901,12 @@ struct bpf_trampoline { struct module *mod; struct bpf_shim_tramp_link *shim_link; struct bpf_tramp_update update; + struct { + struct list_head list; + struct bpf_tramp_id *id; + struct bpf_tramp_id *id_multi; + struct bpf_tramp_id *id_singles; + } multi; }; struct bpf_attach_target_info { @@ -945,6 +951,8 @@ void bpf_trampoline_put(struct bpf_trampoline *tr); struct bpf_tramp_id *bpf_tramp_id_alloc(u32 cnt); void bpf_tramp_id_put(struct bpf_tramp_id *id); int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs); +int bpf_trampoline_multi_detach(struct bpf_tramp_prog *tp, struct bpf_tramp_id *id); +int bpf_trampoline_multi_attach(struct bpf_tramp_prog *tp, struct bpf_tramp_id *id); #define BPF_DISPATCHER_INIT(_name) { \ .mutex = __MUTEX_INITIALIZER(_name.mutex), \ .func = &_name##_func, \ diff --git
a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index b6b57aa09364..9be6e5737eba 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -14,6 +14,7 @@ #include #include #include +#include /* dummy _ops. The verifier will operate on target program's ops. */ const struct bpf_verifier_ops bpf_extension_verifier_ops = { @@ -33,6 +34,7 @@ static DEFINE_MUTEX(trampoline_mutex); #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex, struct list_head *upd); +static bool key_in_multi_trampoline(u64 key); static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, enum ftrace_ops_cmd cmd) { @@ -170,7 +172,6 @@ struct bpf_tramp_id *bpf_tramp_id_alloc(u32 max) return id; } -__maybe_unused static struct bpf_tramp_id *bpf_tramp_id_get(struct bpf_tramp_id *id) { refcount_inc(&id->refcnt); @@ -226,10 +227,12 @@ static struct bpf_trampoline *bpf_trampoline_alloc(void) static struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { - struct bpf_trampoline *tr; + struct bpf_trampoline *tr = NULL; struct hlist_head *head; mutex_lock(&trampoline_mutex); + if (key_in_multi_trampoline(key)) + goto out; tr = __bpf_trampoline_lookup(key); if (tr) { refcount_inc(&tr->refcnt); @@ -357,7 +360,7 @@ static int register_fentry(struct bpf_trampoline *tr, struct bpf_tramp_image *im } static struct bpf_tramp_progs * -bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg) +bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg, bool *multi) { const struct bpf_prog_array_item *item; struct bpf_prog_array *prog_array; @@ -383,6 +386,7 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_a while ((prog = READ_ONCE(item->prog))) { *ip_arg |= prog->call_get_func_ip; + *multi |= is_tracing_multi(prog->expected_attach_type); tp->prog = prog; tp->cookie = item->bpf_cookie; tp++; item++; @@ -524,16 +528,23 @@ static struct 
bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx) return ERR_PTR(err); } +static struct btf_func_model btf_multi_func_model = { + .ret_size = 0, + .nr_args = 6, + .arg_size = { 8, 8, 8, 8, 8, 8, }, +}; + static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex, struct list_head *upd) { + struct btf_func_model *model = &tr->func.model; struct bpf_tramp_image *im; struct bpf_tramp_progs *tprogs; u32 orig_flags = tr->flags; - bool ip_arg = false; + bool ip_arg = false, multi = false; int err, total; - tprogs = bpf_trampoline_get_progs(tr, &total, &ip_arg); + tprogs = bpf_trampoline_get_progs(tr, &total, &ip_arg, &multi); if (IS_ERR(tprogs)) return PTR_ERR(tprogs); @@ -568,6 +579,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut if (ip_arg) tr->flags |= BPF_TRAMP_F_IP_ARG; + if (multi) { + tr->flags |= BPF_TRAMP_F_ORIG_STACK; + model = &btf_multi_func_model; + } #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS again: @@ -577,7 +592,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut #endif err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE, - &tr->func.model, tr->flags, tprogs, + model, tr->flags, tprogs, tr->func.addr); if (err < 0) goto out; @@ -949,11 +964,18 @@ static void __bpf_trampoline_put(struct bpf_trampoline *tr) * fexit progs. The fentry-only trampoline will be freed via * multiple rcu callbacks. 
*/ - hlist_del(&tr->hlist); if (tr->fops) { ftrace_free_filter(tr->fops); kfree(tr->fops); } + if (tr->multi.id) { + bpf_tramp_id_put(tr->multi.id); + bpf_tramp_id_put(tr->multi.id_multi); + bpf_tramp_id_put(tr->multi.id_singles); + list_del(&tr->multi.list); + } else { + hlist_del(&tr->hlist); + } kfree(tr); } @@ -966,6 +988,363 @@ void bpf_trampoline_put(struct bpf_trampoline *tr) mutex_unlock(&trampoline_mutex); } +static LIST_HEAD(multi_trampolines); + +static int btf_id_cmp(const void *a, const void *b) +{ + const u32 *x = a; + const u32 *y = b; + + if (*x == *y) + return 0; + return *x < *y ? -1 : 1; +} + +static bool id_match(struct bpf_tramp_id *a, struct bpf_tramp_id *b) +{ + if (a->obj_id != b->obj_id) + return false; + if (a->cnt != b->cnt) + return false; + return memcmp(a->id, b->id, sizeof(u32) * a->cnt) == 0; +} + +static bool id_cross(struct bpf_tramp_id *a, struct bpf_tramp_id *b) +{ + u32 i, id; + + for (i = 0; i < a->cnt; i++) { + id = a->id[i]; + if (bsearch(&id, b->id, b->cnt, sizeof(u32), btf_id_cmp)) + return true; + } + return false; +} + +static void id_add(struct bpf_tramp_id *id, int btf_id, void *addr) +{ + if (WARN_ON_ONCE(id->cnt >= id->max)) + return; + id->id[id->cnt] = btf_id; + id->addr[id->cnt] = addr; + id->cnt++; +} + +static bool key_in_multi_trampoline(u64 key) +{ + struct bpf_trampoline *tr; + struct bpf_tramp_id *id; + u32 obj_id, btf_id; + + bpf_trampoline_unpack_key(key, &obj_id, &btf_id); + + list_for_each_entry(tr, &multi_trampolines, multi.list) { + id = tr->multi.id_multi; + + if (obj_id != id->obj_id) + continue; + if (bsearch(&btf_id, id->id, id->cnt, sizeof(u32), btf_id_cmp)) + return true; + } + return false; +} + +static void bpf_trampoline_rollback(struct bpf_trampoline *tr) +{ + struct bpf_tramp_update *upd = &tr->update; + + tr->progs_array[upd->kind] = upd->old_array; + tr->progs_cnt[upd->kind]--; + if (tr->update.im) + bpf_tramp_image_put(tr->update.im); +} + +static void bpf_trampoline_commit(struct 
bpf_trampoline *tr) +{ + if (tr->cur_image) + bpf_tramp_image_put(tr->cur_image); + tr->cur_image = tr->update.im; + if (tr->update.action == BPF_TRAMP_UPDATE_UNREG) + tr->selector = 0; + else + tr->selector++; +} + +static int bpf_tramp_update_set(struct list_head *upd) +{ + struct bpf_trampoline *tr, *trm = NULL; + int i, rollback_cnt = 0, err = -EINVAL; + unsigned long ip, image_new, image_old; + + list_for_each_entry(tr, upd, update.list) { + if (tr->multi.id_multi) { + trm = tr; + continue; + } + + ip = (unsigned long) tr->func.addr; + image_new = tr->update.im ? (unsigned long) tr->update.im->image : 0; + image_old = tr->cur_image ? (unsigned long) tr->cur_image->image : 0; + + switch (tr->update.action) { + case BPF_TRAMP_UPDATE_REG: + err = register_ftrace_direct_multi(tr->fops, image_new); + break; + case BPF_TRAMP_UPDATE_MODIFY: + err = modify_ftrace_direct_multi(tr->fops, image_new); + break; + case BPF_TRAMP_UPDATE_UNREG: + err = unregister_ftrace_direct_multi(tr->fops, image_old); + break; + } + if (err) + goto out_rollback; + rollback_cnt++; + } + + if (!trm) + return 0; + + if (trm->update.action == BPF_TRAMP_UPDATE_REG) { + for (i = 0; i < trm->multi.id_multi->cnt; i++) { + ip = (unsigned long) trm->multi.id_multi->addr[i]; + err = ftrace_set_filter_ip(trm->fops, ip, 0, 0); + if (err) + goto out_rollback; + } + } + + image_new = trm->update.im ? (unsigned long) trm->update.im->image : 0; + image_old = trm->cur_image ? 
(unsigned long) trm->cur_image->image : 0; + + switch (trm->update.action) { + case BPF_TRAMP_UPDATE_REG: + err = register_ftrace_direct_multi(trm->fops, image_new); + break; + case BPF_TRAMP_UPDATE_MODIFY: + err = modify_ftrace_direct_multi(trm->fops, image_new); + break; + case BPF_TRAMP_UPDATE_UNREG: + err = unregister_ftrace_direct_multi(trm->fops, image_old); + break; + default: + break; + } + + if (!err) + return 0; + +out_rollback: + i = 0; + list_for_each_entry(tr, upd, update.list) { + if (tr->multi.id_multi) + continue; + + ip = (unsigned long) tr->func.addr; + image_new = tr->update.im ? (unsigned long) tr->update.im->image : 0; + image_old = tr->cur_image ? (unsigned long) tr->cur_image->image : 0; + + switch (tr->update.action) { + case BPF_TRAMP_UPDATE_REG: + err = unregister_ftrace_direct_multi(tr->fops, image_new); + break; + case BPF_TRAMP_UPDATE_MODIFY: + err = modify_ftrace_direct_multi(tr->fops, image_old); + break; + case BPF_TRAMP_UPDATE_UNREG: + err = register_ftrace_direct_multi(tr->fops, image_old); + break; + } + if (rollback_cnt == ++i) + break; + } + + return err; +} + +static struct bpf_trampoline* +multi_trampoline_alloc(struct bpf_tramp_id *id, struct bpf_tramp_prog *tp) +{ + struct bpf_tramp_id *id_multi = NULL, *id_singles = NULL; + struct bpf_trampoline *tr, *trm; + u64 key; + int i; + + trm = bpf_trampoline_alloc(); + if (!trm) + return NULL; + + id_multi = bpf_tramp_id_alloc(id->cnt); + id_singles = bpf_tramp_id_alloc(id->cnt); + if (!id_multi || !id_singles) + goto error; + + for (i = 0; i < id->cnt; i++) { + key = bpf_trampoline_compute_key(NULL, tp->prog->aux->attach_btf, + id->id[i]); + tr = __bpf_trampoline_lookup(key); + if (!tr) { + id_add(id_multi, id->id[i], id->addr[i]); + continue; + } + id_add(id_singles, id->id[i], id->addr[i]); + } + + trm->multi.id = bpf_tramp_id_get(id); + trm->multi.id_multi = id_multi; + trm->multi.id_singles = id_singles; + + list_add_tail(&trm->multi.list, &multi_trampolines); + return trm; 
+error: + bpf_tramp_id_put(id_multi); + bpf_tramp_id_put(id_singles); + kfree(trm); + return NULL; +} + +int bpf_trampoline_multi_attach(struct bpf_tramp_prog *tp, struct bpf_tramp_id *id) +{ + struct bpf_trampoline *tr, *trm = NULL, *n; + struct bpf_tramp_id *id_singles = NULL; + int i, err = -EBUSY; + LIST_HEAD(upd); + u64 key; + + mutex_lock(&trampoline_mutex); + + list_for_each_entry(tr, &multi_trampolines, multi.list) { + if (id_match(id, tr->multi.id)) { + trm = tr; + break; + } + if (id_cross(id, tr->multi.id)) + goto out_unlock; + } + + if (trm) { + id_singles = tr->multi.id_singles; + refcount_inc(&tr->refcnt); + } else { + trm = multi_trampoline_alloc(id, tp); + if (!trm) + goto out_rollback; + id_singles = trm->multi.id_singles; + } + + mutex_lock(&trm->mutex); + err = __bpf_trampoline_link_prog(tp, trm, &upd); + if (err) { + mutex_unlock(&trm->mutex); + __bpf_trampoline_put(trm); + goto out_unlock; + } + + for (i = 0; i < id_singles->cnt; i++) { + key = bpf_trampoline_compute_key(NULL, tp->prog->aux->attach_btf, + id_singles->id[i]); + tr = __bpf_trampoline_lookup(key); + if (!tr) + continue; + mutex_lock(&tr->mutex); + err = __bpf_trampoline_link_prog(tp, tr, &upd); + if (err) { + mutex_unlock(&tr->mutex); + goto out_rollback; + } + refcount_inc(&tr->refcnt); + } + + err = bpf_tramp_update_set(&upd); + if (err) + goto out_rollback; + + list_for_each_entry_safe(tr, n, &upd, update.list) { + bpf_trampoline_commit(tr); + list_del_init(&tr->update.list); + mutex_unlock(&tr->mutex); + } + +out_unlock: + mutex_unlock(&trampoline_mutex); + return err; + +out_rollback: + list_for_each_entry_safe(tr, n, &upd, update.list) { + bpf_trampoline_rollback(tr); + list_del_init(&tr->update.list); + mutex_unlock(&tr->mutex); + __bpf_trampoline_put(tr); + } + mutex_unlock(&trampoline_mutex); + return err; +} + +int bpf_trampoline_multi_detach(struct bpf_tramp_prog *tp, struct bpf_tramp_id *id) +{ + struct bpf_trampoline *tr, *trm = NULL, *n; + LIST_HEAD(upd); + int i, 
err; + u64 key; + + mutex_lock(&trampoline_mutex); + list_for_each_entry(tr, &multi_trampolines, multi.list) { + if (id_match(tr->multi.id, id)) { + trm = tr; + break; + } + } + if (!trm) { + err = -EINVAL; + goto fail_unlock; + } + + mutex_lock(&trm->mutex); + err = __bpf_trampoline_unlink_prog(tp, trm, &upd); + if (err) { + mutex_unlock(&trm->mutex); + goto fail_unlock; + } + + for (i = 0; i < trm->multi.id_singles->cnt; i++) { + key = bpf_trampoline_compute_key(NULL, tp->prog->aux->attach_btf, + trm->multi.id_singles->id[i]); + tr = __bpf_trampoline_lookup(key); + if (!tr) { + err = -EINVAL; + goto fail_unlock; + } + mutex_lock(&tr->mutex); + err = __bpf_trampoline_unlink_prog(tp, tr, &upd); + if (err) { + mutex_unlock(&tr->mutex); + goto fail_unlock; + } + } + + err = bpf_tramp_update_set(&upd); + if (err) + goto fail_unlock; + + list_for_each_entry_safe(tr, n, &upd, update.list) { + bpf_trampoline_commit(tr); + list_del_init(&tr->update.list); + mutex_unlock(&tr->mutex); + __bpf_trampoline_put(tr); + } + + mutex_unlock(&trampoline_mutex); + return 0; +fail_unlock: + list_for_each_entry_safe(tr, n, &upd, update.list) { + bpf_trampoline_rollback(tr); + list_del_init(&tr->update.list); + mutex_unlock(&tr->mutex); + } + mutex_unlock(&trampoline_mutex); + return err; +} + #define NO_START_TIME 1 static __always_inline u64 notrace bpf_prog_start_time(void) { From patchwork Mon Aug 8 14:06:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938833 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4692AC00140 for ; Mon, 8 Aug 2022 14:08:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243177AbiHHOIs (ORCPT ); Mon, 
8 Aug 2022 10:08:48 -0400 From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 11/17] bpf: Add support to create tracing multi link Date: Mon, 8 Aug 2022 16:06:20 +0200 Message-Id: <20220808140626.422731-12-jolsa@kernel.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC Adding a new link type that allows attaching a program to multiple function BTF IDs. New fields are added to bpf_attr::link_create to pass the array of BTF IDs: struct { __aligned_u64 btf_ids; /* addresses to attach */ __u32 btf_ids_cnt; /* addresses count */ } tracing_multi; The new link code loads these IDs into a bpf_tramp_id object and resolves their IPs. The resolution is done as per Andrii's suggestion: - look up all names for the given IDs - store and sort them by name - go through all kallsyms symbols and use bsearch to find each one in the provided names - if a name is found, store the address for that name - re-sort the array based on ID If there are multiple symbols of the same name, the first one is used to resolve the address. The link uses the bpf_trampoline_multi_attach interface to attach these IDs to the program. Signed-off-by: Jiri Olsa --- include/linux/trace_events.h | 5 + include/uapi/linux/bpf.h | 5 + kernel/bpf/syscall.c | 2 + kernel/trace/bpf_trace.c | 240 +++++++++++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 5 + 5 files changed, 257 insertions(+) diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h index e6e95a9f07a5..7825f9b0d9c3 100644 --- a/include/linux/trace_events.h +++ b/include/linux/trace_events.h @@ -747,6 +747,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id, u32 *fd_type, const char **buf, u64 *probe_offset, u64 *probe_addr); int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); +int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr); #else static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx) { @@ -793,6 +794,10 @@ bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; } +static inline int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr) +{ + return -EOPNOTSUPP; +} #endif enum { diff --git a/include/uapi/linux/bpf.h
b/include/uapi/linux/bpf.h index fb6bc2c5e9e8..b4b3a47d5324 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1017,6 +1017,7 @@ enum bpf_link_type { BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, + BPF_LINK_TYPE_TRACING_MULTI = 10, MAX_BPF_LINK_TYPE, }; @@ -1503,6 +1504,10 @@ union bpf_attr { */ __u64 cookie; } tracing; + struct { + __aligned_u64 btf_ids; /* addresses to attach */ + __u32 btf_ids_cnt; /* addresses count */ + } tracing_multi; }; } link_create; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 2e4765c7e6d4..2b51787e8899 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4604,6 +4604,8 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr) ret = bpf_iter_link_attach(attr, uattr, prog); else if (prog->expected_attach_type == BPF_LSM_CGROUP) ret = cgroup_bpf_link_attach(attr, prog); + else if (is_tracing_multi(prog->expected_attach_type)) + ret = bpf_tracing_multi_attach(prog, attr); else ret = bpf_tracing_prog_attach(prog, attr->link_create.target_fd, diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 68e5cdd24cef..07a91a32e566 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2593,3 +2593,243 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx) return 0; } #endif + +struct bpf_tracing_multi_link { + struct bpf_link link; + enum bpf_attach_type attach_type; + struct bpf_tramp_prog tp; + struct bpf_tramp_id *id; +}; + +static void bpf_tracing_multi_link_release(struct bpf_link *link) +{ + struct bpf_tracing_multi_link *tr_link = + container_of(link, struct bpf_tracing_multi_link, link); + + WARN_ON_ONCE(bpf_trampoline_multi_detach(&tr_link->tp, tr_link->id)); +} + +static void bpf_tracing_multi_link_dealloc(struct bpf_link *link) +{ + struct bpf_tracing_multi_link *tr_link = + container_of(link, struct bpf_tracing_multi_link, link); + + bpf_tramp_id_put(tr_link->id); + kfree(tr_link); +} + +static 
void bpf_tracing_multi_link_show_fdinfo(const struct bpf_link *link, + struct seq_file *seq) +{ + struct bpf_tracing_multi_link *tr_link = + container_of(link, struct bpf_tracing_multi_link, link); + + seq_printf(seq, "attach_type:\t%d\n", tr_link->attach_type); +} + +static int bpf_tracing_multi_link_fill_link_info(const struct bpf_link *link, + struct bpf_link_info *info) +{ + struct bpf_tracing_multi_link *tr_link = + container_of(link, struct bpf_tracing_multi_link, link); + + info->tracing.attach_type = tr_link->attach_type; + return 0; +} + +static const struct bpf_link_ops bpf_tracing_multi_link_lops = { + .release = bpf_tracing_multi_link_release, + .dealloc = bpf_tracing_multi_link_dealloc, + .show_fdinfo = bpf_tracing_multi_link_show_fdinfo, + .fill_link_info = bpf_tracing_multi_link_fill_link_info, +}; + +static int check_multi_prog_type(struct bpf_prog *prog) +{ + if (prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI && + prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI) + return -EINVAL; + return 0; +} + +static int btf_ids_cmp(const void *a, const void *b) +{ + const u32 *x = a; + const u32 *y = b; + + if (*x == *y) + return 0; + return *x < *y ? -1 : 1; +} + +struct resolve_id { + const char *name; + void *addr; + u32 id; +}; + +static int rid_name_cmp(const void *a, const void *b) +{ + const struct resolve_id *x = a; + const struct resolve_id *y = b; + + return strcmp(x->name, y->name); +} + +static int rid_id_cmp(const void *a, const void *b) +{ + const struct resolve_id *x = a; + const struct resolve_id *y = b; + + if (x->id == y->id) + return 0; + return x->id < y->id ? 
-1 : 1; +} + +struct kallsyms_data { + struct resolve_id *rid; + u32 cnt; + u32 found; +}; + +static int kallsyms_callback(void *data, const char *name, + struct module *mod, unsigned long addr) +{ + struct kallsyms_data *args = data; + struct resolve_id *rid, id = { + .name = name, + }; + + rid = bsearch(&id, args->rid, args->cnt, sizeof(*rid), rid_name_cmp); + if (rid && !rid->addr) { + rid->addr = (void *) addr; + args->found++; + } + return args->found == args->cnt ? 1 : 0; +} + +static int bpf_tramp_id_resolve(struct bpf_tramp_id *id, struct bpf_prog *prog) +{ + struct kallsyms_data args; + const struct btf_type *t; + struct resolve_id *rid; + const char *name; + struct btf *btf; + int err = 0; + u32 i; + + btf = prog->aux->attach_btf; + if (!btf) + return -EINVAL; + + rid = kcalloc(id->cnt, sizeof(*rid), GFP_KERNEL); + if (!rid) + return -ENOMEM; + + err = -EINVAL; + for (i = 0; i < id->cnt; i++) { + t = btf_type_by_id(btf, id->id[i]); + if (!t) + goto out_free; + + name = btf_name_by_offset(btf, t->name_off); + if (!name) + goto out_free; + + rid[i].name = name; + rid[i].id = id->id[i]; + } + + sort(rid, id->cnt, sizeof(*rid), rid_name_cmp, NULL); + + args.rid = rid; + args.cnt = id->cnt; + args.found = 0; + kallsyms_on_each_symbol(kallsyms_callback, &args); + + sort(rid, id->cnt, sizeof(*rid), rid_id_cmp, NULL); + + for (i = 0; i < id->cnt; i++) { + if (!rid[i].addr) { + err = -EINVAL; + goto out_free; + } + /* all the addresses must be ftrace managed */ + if (!ftrace_location((unsigned long) rid[i].addr)) { + err = -EINVAL; + goto out_free; + } + id->addr[i] = rid[i].addr; + } + err = 0; +out_free: + kfree(rid); + return err; +} + +int bpf_tracing_multi_attach(struct bpf_prog *prog, + const union bpf_attr *attr) +{ + void __user *uids = u64_to_user_ptr(attr->link_create.tracing_multi.btf_ids); + u32 cnt_size, cnt = attr->link_create.tracing_multi.btf_ids_cnt; + struct bpf_tracing_multi_link *link = NULL; + struct bpf_link_primer link_primer; + struct 
bpf_tramp_id *id = NULL; + int err = -EINVAL; + + if (check_multi_prog_type(prog)) + return -EINVAL; + if (!cnt || !uids) + return -EINVAL; + + id = bpf_tramp_id_alloc(cnt); + if (!id) + return -ENOMEM; + + err = -EFAULT; + cnt_size = cnt * sizeof(id->id[0]); + if (copy_from_user(id->id, uids, cnt_size)) + goto out_free_id; + + id->cnt = cnt; + id->obj_id = btf_obj_id(prog->aux->attach_btf); + + /* Sort user provided BTF ids, so we can use memcmp + * and bsearch on top of it later. + */ + sort(id->id, cnt, sizeof(u32), btf_ids_cmp, NULL); + + err = bpf_tramp_id_resolve(id, prog); + if (err) + goto out_free_id; + + link = kzalloc(sizeof(*link), GFP_KERNEL); + if (!link) { + err = -ENOMEM; + goto out_free_id; + } + + bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI, + &bpf_tracing_multi_link_lops, prog); + link->attach_type = prog->expected_attach_type; + + err = bpf_link_prime(&link->link, &link_primer); + if (err) { + kfree(link); + goto out_free_id; + } + + link->id = id; + link->tp.cookie = 0; + link->tp.prog = prog; + + err = bpf_trampoline_multi_attach(&link->tp, id); + if (err) { + bpf_link_cleanup(&link_primer); + return err; + } + return bpf_link_settle(&link_primer); +out_free_id: + bpf_tramp_id_put(id); + return err; +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 94d623affd5f..4969f31471d5 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1017,6 +1017,7 @@ enum bpf_link_type { BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, + BPF_LINK_TYPE_TRACING_MULTI = 10, MAX_BPF_LINK_TYPE, }; @@ -1503,6 +1504,10 @@ union bpf_attr { */ __u64 cookie; } tracing; + struct { + __aligned_u64 btf_ids; /* addresses to attach */ + __u32 btf_ids_cnt; /* addresses count */ + } tracing_multi; }; } link_create; From patchwork Mon Aug 8 14:06:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938834 X-Patchwork-Delegate: bpf@iogearbox.net From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song ,
John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 12/17] libbpf: Add btf__find_by_glob_kind function Date: Mon, 8 Aug 2022 16:06:21 +0200 Message-Id: <20220808140626.422731-13-jolsa@kernel.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Adding the btf__find_by_glob_kind function, which returns an array of BTF IDs that match the given kind and allow/deny patterns. int btf__find_by_glob_kind(const struct btf *btf, __u32 kind, const char *allow_pattern, const char *deny_pattern, __u32 **__ids); The __ids array is allocated and needs to be manually freed. The pattern check is done by the glob_match function. Signed-off-by: Jiri Olsa --- tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++ tools/lib/bpf/btf.h | 3 +++ tools/lib/bpf/libbpf.c | 2 +- tools/lib/bpf/libbpf_internal.h | 1 + 4 files changed, 46 insertions(+), 1 deletion(-) diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c index 2d14f1a52d7a..ffe08acb2f9b 100644 --- a/tools/lib/bpf/btf.c +++ b/tools/lib/bpf/btf.c @@ -770,6 +770,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name, return btf_find_by_name_kind(btf, 1, type_name, kind); } +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind, + const char *allow_pattern, const char *deny_pattern, + __u32 **__ids) +{ + __u32 i, nr_types = btf__type_cnt(btf); + int cnt = 0, alloc = 0; + __u32 *ids = NULL; + + for (i = 1; i < nr_types; i++) { + const struct btf_type *t = btf__type_by_id(btf, i); + const char *name; + __u32 *p; + + if (btf_kind(t) != kind) + continue; + name = btf__name_by_offset(btf, t->name_off); + if (!name) + continue; + + if (deny_pattern && glob_match(name, deny_pattern)) + continue; + if (allow_pattern && !glob_match(name, allow_pattern)) +
continue; + + if (cnt == alloc) { + alloc = max(16, alloc * 3 / 2); + p = libbpf_reallocarray(ids, alloc, sizeof(__u32)); + if (!p) { + free(ids); + return -ENOMEM; + } + ids = p; + } + ids[cnt] = i; + cnt++; + } + + *__ids = ids; + return cnt; +} + static bool btf_is_modifiable(const struct btf *btf) { return (void *)btf->hdr != btf->raw_data; diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h index 583760df83b4..4c05d2e77771 100644 --- a/tools/lib/bpf/btf.h +++ b/tools/lib/bpf/btf.h @@ -546,6 +546,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t) return (struct btf_decl_tag *)(t + 1); } +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind, + const char *allow_pattern, const char *deny_pattern, + __u32 **__ids); #ifdef __cplusplus } /* extern "C" */ #endif diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 77e3797cf75a..0952eac92eab 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -10153,7 +10153,7 @@ struct bpf_link *bpf_program__attach_ksyscall(const struct bpf_program *prog, } /* Adapted from perf/util/string.c */ -static bool glob_match(const char *str, const char *pat) +bool glob_match(const char *str, const char *pat) { while (*str && *pat && *pat != '*') { if (*pat == '?') { /* Matches any single character */ diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index 4135ae0a2bc3..9dfc11f26364 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -573,4 +573,5 @@ static inline bool is_pow_of_2(size_t x) return x && (x & (x - 1)) == 0; } +bool glob_match(const char *str, const char *pat); #endif /* __LIBBPF_LIBBPF_INTERNAL_H */ From patchwork Mon Aug 8 14:06:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938835 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo Subject: [RFC PATCH bpf-next 13/17] libbpf: Add support to create tracing multi link Date: Mon, 8 Aug 2022
16:06:22 +0200 Message-Id: <20220808140626.422731-14-jolsa@kernel.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org> References: <20220808140626.422731-1-jolsa@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Adding a new interface function to attach programs with the tracing multi link: bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern, const struct bpf_tracing_multi_opts *opts); The program is attached to functions specified by a pattern or by BTF IDs specified in the bpf_tracing_multi_opts object. Adding support for new sections to attach programs with the above function: fentry.multi/pattern fexit.multi/pattern Signed-off-by: Jiri Olsa --- tools/lib/bpf/bpf.c | 7 ++++ tools/lib/bpf/bpf.h | 4 ++ tools/lib/bpf/libbpf.c | 89 ++++++++++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.h | 14 +++++++ tools/lib/bpf/libbpf.map | 1 + 5 files changed, 115 insertions(+) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index efcc06dafbd9..06e6666ee90e 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -713,6 +713,13 @@ int bpf_link_create(int prog_fd, int target_fd, if (!OPTS_ZEROED(opts, kprobe_multi)) return libbpf_err(-EINVAL); break; + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: + attr.link_create.tracing_multi.btf_ids = (__u64) OPTS_GET(opts, tracing_multi.btf_ids, 0); + attr.link_create.tracing_multi.btf_ids_cnt = OPTS_GET(opts, tracing_multi.btf_ids_cnt, 0); + if (!OPTS_ZEROED(opts, tracing_multi)) + return libbpf_err(-EINVAL); + break; case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: case BPF_MODIFY_RETURN: diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index 9c50beabdd14..71676228f032 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -321,6 +321,10 @@ struct bpf_link_create_opts { struct { __u64 cookie; } tracing; + struct { + __u32 *btf_ids; + __u32
btf_ids_cnt; + } tracing_multi; }; size_t :0; }; diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 0952eac92eab..209a42bed165 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -116,6 +116,8 @@ static const char * const attach_type_name[] = { [BPF_SK_REUSEPORT_SELECT_OR_MIGRATE] = "sk_reuseport_select_or_migrate", [BPF_PERF_EVENT] = "perf_event", [BPF_TRACE_KPROBE_MULTI] = "trace_kprobe_multi", + [BPF_TRACE_FENTRY_MULTI] = "trace_fentry_multi", + [BPF_TRACE_FEXIT_MULTI] = "trace_fexit_multi", }; static const char * const link_type_name[] = { @@ -129,6 +131,7 @@ static const char * const link_type_name[] = { [BPF_LINK_TYPE_PERF_EVENT] = "perf_event", [BPF_LINK_TYPE_KPROBE_MULTI] = "kprobe_multi", [BPF_LINK_TYPE_STRUCT_OPS] = "struct_ops", + [BPF_LINK_TYPE_TRACING_MULTI] = "tracing_multi", }; static const char * const map_type_name[] = { @@ -8445,6 +8448,7 @@ static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link); static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link); static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link); +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link); static const struct bpf_sec_def section_defs[] = { SEC_DEF("socket", SOCKET_FILTER, 0, SEC_NONE), @@ -8477,6 +8481,8 @@ static const struct bpf_sec_def section_defs[] = { SEC_DEF("fentry.s+", TRACING, BPF_TRACE_FENTRY, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace), SEC_DEF("fmod_ret.s+", TRACING, BPF_MODIFY_RETURN, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace), SEC_DEF("fexit.s+", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace), + SEC_DEF("fentry.multi/", TRACING, BPF_TRACE_FENTRY_MULTI, 0, attach_tracing_multi), + SEC_DEF("fexit.multi/", TRACING, BPF_TRACE_FEXIT_MULTI, 0, 
attach_tracing_multi), SEC_DEF("freplace+", EXT, 0, SEC_ATTACH_BTF, attach_trace), SEC_DEF("lsm+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm), SEC_DEF("lsm.s+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm), @@ -10374,6 +10380,89 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru return libbpf_get_error(*link); } +struct bpf_link * +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern, + const struct bpf_tracing_multi_opts *opts) +{ + LIBBPF_OPTS(bpf_link_create_opts, lopts); + __u32 *btf_ids, cnt, *free_ids = NULL; + char errmsg[STRERR_BUFSIZE]; + int prog_fd, link_fd, err; + struct bpf_link *link; + + btf_ids = OPTS_GET(opts, btf_ids, false); + cnt = OPTS_GET(opts, cnt, false); + + if (!pattern && !btf_ids && !cnt) + return libbpf_err_ptr(-EINVAL); + if (pattern && (btf_ids || cnt)) + return libbpf_err_ptr(-EINVAL); + + if (pattern) { + err = bpf_object__load_vmlinux_btf(prog->obj, true); + if (err) + return libbpf_err_ptr(err); + + cnt = btf__find_by_glob_kind(prog->obj->btf_vmlinux, BTF_KIND_FUNC, + pattern, NULL, &btf_ids); + if (cnt <= 0) + return libbpf_err_ptr(-EINVAL); + free_ids = btf_ids; + } + + lopts.tracing_multi.btf_ids = btf_ids; + lopts.tracing_multi.btf_ids_cnt = cnt; + + link = calloc(1, sizeof(*link)); + if (!link) + return libbpf_err_ptr(-ENOMEM); + link->detach = &bpf_link__detach_fd; + + prog_fd = bpf_program__fd(prog); + link_fd = bpf_link_create(prog_fd, 0, prog->expected_attach_type, &lopts); + if (link_fd < 0) { + err = -errno; + pr_warn("prog '%s': failed to attach: %s\n", + prog->name, libbpf_strerror_r(err, errmsg, sizeof(errmsg))); + goto error; + } + link->fd = link_fd; + free(free_ids); + return link; +error: + free(link); + free(free_ids); + return libbpf_err_ptr(err); +} + +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link) +{ + const char *spec; + char *pattern; + bool is_fexit; + int n; + + /* no 
 auto-attach for SEC("fentry.multi") and SEC("fexit.multi") */ + if (strcmp(prog->sec_name, "fentry.multi") == 0 || + strcmp(prog->sec_name, "fexit.multi") == 0) + return 0; + + is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/"); + if (is_fexit) + spec = prog->sec_name + sizeof("fexit.multi/") - 1; + else + spec = prog->sec_name + sizeof("fentry.multi/") - 1; + + n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern); + if (n < 1) { + pr_warn("tracing multi pattern is invalid: %s\n", spec); + return -EINVAL; + } + + *link = bpf_program__attach_tracing_multi(prog, pattern, NULL); + return libbpf_get_error(*link); +} + static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz, const char *binary_path, uint64_t offset) { diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 61493c4cddac..07992d8ee06c 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -503,6 +503,20 @@ bpf_program__attach_ksyscall(const struct bpf_program *prog, const char *syscall_name, const struct bpf_ksyscall_opts *opts); +struct bpf_tracing_multi_opts { + /* size of this struct, for forward/backward compatibility */ + size_t sz; + __u32 *btf_ids; + size_t cnt; + size_t :0; +}; + +#define bpf_tracing_multi_opts__last_field cnt + +LIBBPF_API struct bpf_link * +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern, + const struct bpf_tracing_multi_opts *opts); + struct bpf_uprobe_opts { /* size of this struct, for forward/backward compatiblity */ size_t sz; diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 119e6e1ea7f1..2168516a75a5 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -358,6 +358,7 @@ LIBBPF_1.0.0 { bpf_obj_get_opts; bpf_prog_query_opts; bpf_program__attach_ksyscall; + bpf_program__attach_tracing_multi; btf__add_enum64; btf__add_enum64_value; libbpf_bpf_attach_type_str; From patchwork Mon Aug 8 14:06:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 12938836 X-Patchwork-Delegate: bpf@iogearbox.net From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu
, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 14/17] selftests/bpf: Add fentry tracing multi func test
Date: Mon, 8 Aug 2022 16:06:23 +0200
Message-Id: <20220808140626.422731-15-jolsa@kernel.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Adding selftest for fentry multi func support that attaches to
bpf_fentry_test* functions and checks argument values based on the
processed function. We need to cast to the real argument types in
multi_arg_check, because the checked value can be shorter than u64.

Signed-off-by: Jiri Olsa
---
 tools/testing/selftests/bpf/Makefile          |   3 +-
 .../selftests/bpf/prog_tests/tracing_multi.c  |  34 +++++
 .../selftests/bpf/progs/tracing_multi_check.c | 132 ++++++++++++++++++
 .../bpf/progs/tracing_multi_fentry.c          |  17 +++
 4 files changed, 185 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fentry.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 8d59ec7f4c2d..24f0ace18af2 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -348,7 +348,7 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
 		linked_vars.skel.h linked_maps.skel.h \
 		test_subskeleton.skel.h test_subskeleton_lib.skel.h \
-		test_usdt.skel.h
+		test_usdt.skel.h tracing_multi_fentry_test.skel.h

 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \
@@ -366,6 +366,7 @@
 linked_maps.skel.h-deps := linked_maps1.o linked_maps2.o
 test_subskeleton.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib.o test_subskeleton.o
 test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib.o
 test_usdt.skel.h-deps := test_usdt.o test_usdt_multispec.o
+tracing_multi_fentry_test.skel.h-deps := tracing_multi_fentry.o tracing_multi_check.o

 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))

diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
new file mode 100644
index 000000000000..fbf2108ebb8a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "tracing_multi_fentry_test.skel.h"
+#include "trace_helpers.h"
+
+static void multi_fentry_test(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct tracing_multi_fentry_test *skel = NULL;
+	int err, prog_fd;
+
+	skel = tracing_multi_fentry_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	err = tracing_multi_fentry_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test_result, 8, "test_result");
+
+cleanup:
+	tracing_multi_fentry_test__destroy(skel);
+}
+
+void test_tracing_multi_test(void)
+{
+	if (test__start_subtest("fentry"))
+		multi_fentry_test();
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
new file mode 100644
index 000000000000..945dc4d070e1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+
+void multi_arg_check(__u64 *ctx, __u64 *test_result)
+{
+	void *ip = (void *) bpf_get_func_ip(ctx);
+	__u64 value = 0;
+
+	if (ip == &bpf_fentry_test1) {
+		int a;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (int) value;
+
+		*test_result += a == 1;
+	} else if (ip == &bpf_fentry_test2) {
+		__u64 b;
+		int a;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (int) value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = value;
+
+		*test_result += a == 2 && b == 3;
+	} else if (ip == &bpf_fentry_test3) {
+		char a, b;
+		__u64 c;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (int) value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (int) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = value;
+
+		*test_result += a == 4 && b == 5 && c == 6;
+	} else if (ip == &bpf_fentry_test4) {
+		void *a;
+		char b;
+		int c;
+		__u64 d;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (void *) value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (char) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = (int) value;
+		if (bpf_get_func_arg(ctx, 3, &value))
+			return;
+		d = value;
+
+		*test_result += a == (void *) 7 && b == 8 && c == 9 && d == 10;
+	} else if (ip == &bpf_fentry_test5) {
+		__u64 a;
+		void *b;
+		short c;
+		int d;
+		__u64 e;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (void *) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = (short) value;
+		if (bpf_get_func_arg(ctx, 3, &value))
+			return;
+		d = (int) value;
+		if (bpf_get_func_arg(ctx, 4, &value))
+			return;
+		e = value;
+
+		*test_result += a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
+	} else if (ip == &bpf_fentry_test6) {
+		__u64 a;
+		void *b;
+		short c;
+		int d;
+		void *e;
+		__u64 f;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (void *) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = (short) value;
+		if (bpf_get_func_arg(ctx, 3, &value))
+			return;
+		d = (int) value;
+		if (bpf_get_func_arg(ctx, 4, &value))
+			return;
+		e = (void *) value;
+		if (bpf_get_func_arg(ctx, 5, &value))
+			return;
+		f = value;
+
+		*test_result += a == 16 && b == (void *) 17 && c == 18 && d == 19 &&
+				e == (void *) 20 && f == 21;
+	} else if (ip == &bpf_fentry_test7) {
+		*test_result += 1;
+	} else if (ip == &bpf_fentry_test8) {
+		*test_result += 1;
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
new file mode 100644
index 000000000000..b78d36772aa6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test_result = 0;
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test_result);
+	return 0;
+}

From patchwork Mon Aug 8 14:06:24 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 12938837
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 15/17] selftests/bpf: Add fexit tracing multi func test
Date: Mon, 8 Aug 2022 16:06:24 +0200
Message-Id: <20220808140626.422731-16-jolsa@kernel.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Adding selftest for fexit multi func support that attaches to
bpf_fentry_test* functions and checks argument values based on the
processed function.

Signed-off-by: Jiri Olsa
---
 tools/testing/selftests/bpf/Makefile          |  4 ++-
 .../selftests/bpf/prog_tests/tracing_multi.c  | 28 +++++++++++++++++++
 .../selftests/bpf/progs/tracing_multi_check.c | 26 +++++++++++++++++
 .../selftests/bpf/progs/tracing_multi_fexit.c | 20 +++++++++++++
 4 files changed, 77 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fexit.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 24f0ace18af2..9c0b26e6e645 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -348,7 +348,8 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
 		linked_vars.skel.h linked_maps.skel.h \
 		test_subskeleton.skel.h test_subskeleton_lib.skel.h \
-		test_usdt.skel.h tracing_multi_fentry_test.skel.h
+		test_usdt.skel.h tracing_multi_fentry_test.skel.h \
+		tracing_multi_fexit_test.skel.h

 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \
@@ -367,6 +368,7 @@ test_subskeleton.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib.o t
 test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib.o
 test_usdt.skel.h-deps := test_usdt.o test_usdt_multispec.o
 tracing_multi_fentry_test.skel.h-deps := tracing_multi_fentry.o tracing_multi_check.o
+tracing_multi_fexit_test.skel.h-deps := tracing_multi_fexit.o tracing_multi_check.o

 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))

diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index fbf2108ebb8a..b7b1e42754da 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
 #include "tracing_multi_fentry_test.skel.h"
+#include "tracing_multi_fexit_test.skel.h"
 #include "trace_helpers.h"

 static void multi_fentry_test(void)
@@ -27,8 +28,35 @@ static void multi_fentry_test(void)
 	tracing_multi_fentry_test__destroy(skel);
 }

+static void multi_fexit_test(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct tracing_multi_fexit_test *skel = NULL;
+	int err, prog_fd;
+
+	skel = tracing_multi_fexit_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "tracing_multi_fentry_test__open_and_load"))
+		goto cleanup;
+
+	err = tracing_multi_fexit_test__attach(skel);
+	if (!ASSERT_OK(err, "tracing_multi_fentry_test__attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test_arg_result, 8, "test_arg_result");
+	ASSERT_EQ(skel->bss->test_ret_result, 8, "test_ret_result");
+
+cleanup:
+	tracing_multi_fexit_test__destroy(skel);
+}
+
 void test_tracing_multi_test(void)
 {
 	if (test__start_subtest("fentry"))
 		multi_fentry_test();
+	if (test__start_subtest("fexit"))
+		multi_fexit_test();
 }
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
index 945dc4d070e1..7144df65607e 100644
--- a/tools/testing/selftests/bpf/progs/tracing_multi_check.c
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -130,3 +130,29 @@ void multi_arg_check(__u64 *ctx, __u64 *test_result)
 		*test_result += 1;
 	}
 }
+
+void multi_ret_check(void *ctx, __u64 *test_result)
+{
+	void *ip = (void *) bpf_get_func_ip(ctx);
+	__u64 ret = 0;
+	int err;
+
+	if (bpf_get_func_ret(ctx, &ret))
+		return;
+	if (ip == &bpf_fentry_test1)
+		*test_result += ret == 2;
+	else if (ip == &bpf_fentry_test2)
+		*test_result += ret == 5;
+	else if (ip == &bpf_fentry_test3)
+		*test_result += ret == 15;
+	else if (ip == &bpf_fentry_test4)
+		*test_result += ret == 34;
+	else if (ip == &bpf_fentry_test5)
+		*test_result += ret == 65;
+	else if (ip == &bpf_fentry_test6)
+		*test_result += ret == 111;
+	else if (ip == &bpf_fentry_test7)
+		*test_result += ret == 0;
+	else if (ip == &bpf_fentry_test8)
+		*test_result += ret == 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fexit.c b/tools/testing/selftests/bpf/progs/tracing_multi_fexit.c
new file mode 100644
index 000000000000..54624acc7071
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fexit.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+__hidden extern void multi_ret_check(void *ctx, __u64 *test_result);
+
+__u64 test_arg_result = 0;
+__u64 test_ret_result = 0;
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test)
+{
+	multi_arg_check(ctx, &test_arg_result);
+	multi_ret_check(ctx, &test_ret_result);
+	return 0;
+}

From patchwork Mon Aug 8 14:06:25 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 12938838
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 16/17] selftests/bpf: Add fentry/fexit tracing multi func test
Date: Mon, 8 Aug 2022 16:06:25 +0200
Message-Id: <20220808140626.422731-17-jolsa@kernel.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Adding selftest for fentry/fexit multi func tests that attaches to
bpf_fentry_test* functions
and checks argument values based on the processed function.

Signed-off-by: Jiri Olsa
---
 tools/testing/selftests/bpf/Makefile          |  4 ++-
 .../selftests/bpf/prog_tests/tracing_multi.c  | 29 +++++++++++++++++++
 .../bpf/progs/tracing_multi_fentry_fexit.c    | 28 ++++++++++++++++++
 3 files changed, 60 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fentry_fexit.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 9c0b26e6e645..a231cca9882f 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -349,7 +349,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
 		linked_vars.skel.h linked_maps.skel.h \
 		test_subskeleton.skel.h test_subskeleton_lib.skel.h \
 		test_usdt.skel.h tracing_multi_fentry_test.skel.h \
-		tracing_multi_fexit_test.skel.h
+		tracing_multi_fexit_test.skel.h \
+		tracing_multi_fentry_fexit_test.skel.h

 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \
@@ -369,6 +370,7 @@ test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib
 test_usdt.skel.h-deps := test_usdt.o test_usdt_multispec.o
 tracing_multi_fentry_test.skel.h-deps := tracing_multi_fentry.o tracing_multi_check.o
 tracing_multi_fexit_test.skel.h-deps := tracing_multi_fexit.o tracing_multi_check.o
+tracing_multi_fentry_fexit_test.skel.h-deps := tracing_multi_fentry_fexit.o tracing_multi_check.o

 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))

diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index b7b1e42754da..409bcf1e3653 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -2,6 +2,7 @@
 #include <test_progs.h>
 #include "tracing_multi_fentry_test.skel.h"
 #include "tracing_multi_fexit_test.skel.h"
+#include "tracing_multi_fentry_fexit_test.skel.h"
 #include "trace_helpers.h"

 static void multi_fentry_test(void)
@@ -53,10 +54,38 @@ static void multi_fexit_test(void)
 	tracing_multi_fexit_test__destroy(skel);
 }

+static void multi_fentry_fexit_test(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct tracing_multi_fentry_fexit_test *skel = NULL;
+	int err, prog_fd;
+
+	skel = tracing_multi_fentry_fexit_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "tracing_multi_fentry_fexit_test__open_and_load"))
+		goto cleanup;
+
+	err = tracing_multi_fentry_fexit_test__attach(skel);
+	if (!ASSERT_OK(err, "tracing_multi_fentry_fexit_test__attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test2);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test1_arg_result, 8, "test1_arg_result");
+	ASSERT_EQ(skel->bss->test2_arg_result, 8, "test2_arg_result");
+	ASSERT_EQ(skel->bss->test2_ret_result, 8, "test2_ret_result");
+
+cleanup:
+	tracing_multi_fentry_fexit_test__destroy(skel);
+}
+
 void test_tracing_multi_test(void)
 {
 	if (test__start_subtest("fentry"))
 		multi_fentry_test();
 	if (test__start_subtest("fexit"))
 		multi_fexit_test();
+	if (test__start_subtest("fentry_fexit"))
+		multi_fentry_fexit_test();
 }
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fentry_fexit.c b/tools/testing/selftests/bpf/progs/tracing_multi_fentry_fexit.c
new file mode 100644
index 000000000000..54ee94d060b2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fentry_fexit.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_arg_result = 0;
+__u64 test2_arg_result = 0;
+__u64 test2_ret_result = 0;
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+__hidden extern void multi_ret_check(void *ctx, __u64 *test_result);
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test1, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test1_arg_result);
+	return 0;
+}
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	multi_arg_check(ctx, &test2_arg_result);
+	multi_ret_check(ctx, &test2_ret_result);
+	return 0;
+}

From patchwork Mon Aug 8 14:06:26 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 12938839
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo
Subject: [RFC PATCH bpf-next 17/17] selftests/bpf: Add mixed tracing multi func test
Date: Mon, 8 Aug 2022 16:06:26 +0200
Message-Id: <20220808140626.422731-18-jolsa@kernel.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808140626.422731-1-jolsa@kernel.org>
References: <20220808140626.422731-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Adding selftest for fentry/fexit multi func tests that attach to
bpf_fentry_test* functions, where some of them already have a single
trampoline attached.
Signed-off-by: Jiri Olsa
---
 tools/testing/selftests/bpf/Makefile          |  4 +-
 .../selftests/bpf/prog_tests/tracing_multi.c  | 67 +++++++++++++++++++
 .../selftests/bpf/progs/tracing_multi_mixed.c | 43 ++++++++++++
 3 files changed, 113 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_mixed.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index a231cca9882f..9d507615383b 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -350,7 +350,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
 		test_subskeleton.skel.h test_subskeleton_lib.skel.h \
 		test_usdt.skel.h tracing_multi_fentry_test.skel.h \
 		tracing_multi_fexit_test.skel.h \
-		tracing_multi_fentry_fexit_test.skel.h
+		tracing_multi_fentry_fexit_test.skel.h \
+		tracing_multi_mixed_test.skel.h

 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \
@@ -371,6 +372,7 @@ test_usdt.skel.h-deps := test_usdt.o test_usdt_multispec.o
 tracing_multi_fentry_test.skel.h-deps := tracing_multi_fentry.o tracing_multi_check.o
 tracing_multi_fexit_test.skel.h-deps := tracing_multi_fexit.o tracing_multi_check.o
 tracing_multi_fentry_fexit_test.skel.h-deps := tracing_multi_fentry_fexit.o tracing_multi_check.o
+tracing_multi_mixed_test.skel.h-deps := tracing_multi_mixed.o tracing_multi_check.o

 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))

diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 409bcf1e3653..416940dcde48 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -3,6 +3,7 @@
 #include "tracing_multi_fentry_test.skel.h"
 #include "tracing_multi_fexit_test.skel.h"
 #include "tracing_multi_fentry_fexit_test.skel.h"
+#include "tracing_multi_mixed_test.skel.h"
 #include "trace_helpers.h"

 static void multi_fentry_test(void)
@@ -80,6 +81,70 @@ static void multi_fentry_fexit_test(void)
 	tracing_multi_fentry_fexit_test__destroy(skel);
 }

+static void multi_mixed_test(void)
+{
+	struct bpf_link *linkm1 = NULL, *linkm2 = NULL;
+	struct bpf_link *link1 = NULL, *link2 = NULL;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct tracing_multi_mixed_test *skel = NULL;
+	int err, prog_fd;
+
+	skel = tracing_multi_mixed_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "tracing_multi_mixed_test__open_and_load"))
+		goto cleanup;
+
+	link1 = bpf_program__attach_trace(skel->progs.test1);
+	if (!ASSERT_OK_PTR(link1, "attach_trace"))
+		goto cleanup;
+
+	link2 = bpf_program__attach_trace(skel->progs.test2);
+	if (!ASSERT_OK_PTR(link2, "attach_trace"))
+		goto cleanup;
+
+	linkm1 = bpf_program__attach_tracing_multi(skel->progs.test3,
+						   "bpf_fentry_test*", NULL);
+	if (!ASSERT_OK_PTR(linkm1, "attach_tracing_multi"))
+		goto cleanup;
+
+	linkm2 = bpf_program__attach_tracing_multi(skel->progs.test4,
+						   "bpf_fentry_test*", NULL);
+	if (!ASSERT_OK_PTR(linkm2, "attach_tracing_multi"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+	ASSERT_EQ(skel->bss->test3_arg_result, 8, "test3_arg_result");
+	ASSERT_EQ(skel->bss->test4_arg_result, 8, "test4_arg_result");
+	ASSERT_EQ(skel->bss->test4_ret_result, 8, "test4_ret_result");
+
+	bpf_link__destroy(link1);
+	bpf_link__destroy(link2);
+	bpf_link__destroy(linkm1);
+	bpf_link__destroy(linkm2);
+
+	linkm2 = link1 = link2 = NULL;
+
+	linkm1 = bpf_program__attach_tracing_multi(skel->progs.test3,
+						   "bpf_fentry_test*", NULL);
+	if (!ASSERT_OK_PTR(linkm1, "attach_tracing_multi"))
+		goto cleanup;
+
+	link1 = bpf_program__attach_trace(skel->progs.test1);
+	if (!ASSERT_ERR_PTR(link1, "attach_trace"))
+		goto cleanup;
+
+cleanup:
+	bpf_link__destroy(link1);
+	bpf_link__destroy(link2);
+	bpf_link__destroy(linkm1);
+	bpf_link__destroy(linkm2);
+	tracing_multi_mixed_test__destroy(skel);
+}
+
 void test_tracing_multi_test(void)
 {
 	if (test__start_subtest("fentry"))
@@ -88,4 +153,6 @@ void test_tracing_multi_test(void)
 		multi_fexit_test();
 	if (test__start_subtest("fentry_fexit"))
 		multi_fentry_fexit_test();
+	if (test__start_subtest("mixed"))
+		multi_mixed_test();
 }
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_mixed.c b/tools/testing/selftests/bpf/progs/tracing_multi_mixed.c
new file mode 100644
index 000000000000..468a044753e9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_mixed.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+__hidden extern void multi_ret_check(void *ctx, __u64 *test_result);
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	test1_result += a == 1;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2, int a, __u64 b, int ret)
+{
+	test2_result += a == 2 && b == 3;
+	return 0;
+}
+
+__u64 test3_arg_result = 0;
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test3, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test3_arg_result);
+	return 0;
+}
+
+__u64 test4_arg_result = 0;
+__u64 test4_ret_result = 0;
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test4, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	multi_arg_check(ctx, &test4_arg_result);
+	multi_ret_check(ctx, &test4_ret_result);
+	return 0;
+}