From patchwork Sun Jul 30 13:41:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333415 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5C08120FF for ; Sun, 30 Jul 2023 13:42:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5F7D9C433C8; Sun, 30 Jul 2023 13:42:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724559; bh=utKDWO49AuejMEatoqjlcZuGpD8iQDlQ5/Hvs1h8uJU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dYYkhuW9HVQUJK1QfWk7w4X6X4SoqhHEFqGmfE+ElGNAvVBdup7CGpPRMWrPgDRZf aE5GlAB7ReOaJZyFA890H4odSWfkBf2aF2yQBERtP/+uyDq4UFIzqRLXIen9XV01Ev DvAUyVP8sIf6CuH5Szi3+x10Kx3AHSjJFYvt2CRASX0jOSAl6iLhMPlvCs52KhomAh 4PvAnQTwX5j6224o9lfmWchE6eUFanreU/8gXuErIFWBZQtf+Rw8os+L8g9tXMqLrC O0eIJndEBzXwLnRofmmuL9HFvmuzpTlSdecUkrNoFgSQ9x7LFHmHubTywpgabi+7KL TOaDSDr3hKUig== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 01/28] bpf: Switch BPF_F_KPROBE_MULTI_RETURN macro to enum Date: Sun, 30 Jul 2023 15:41:56 +0200 Message-ID: <20230730134223.94496-2-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Switching BPF_F_KPROBE_MULTI_RETURN macro to anonymous enum, so it'd show up in vmlinux.h. There's not functional change compared to having this as macro. Suggested-by: Andrii Nakryiko Signed-off-by: Jiri Olsa Acked-by: Yafang Shao --- include/uapi/linux/bpf.h | 4 +++- tools/include/uapi/linux/bpf.h | 4 +++- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 70da85200695..7abb382dc6c1 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1186,7 +1186,9 @@ enum bpf_perf_event_type { /* link_create.kprobe_multi.flags used in LINK_CREATE command for * BPF_TRACE_KPROBE_MULTI attach type to create return probe. */ -#define BPF_F_KPROBE_MULTI_RETURN (1U << 0) +enum { + BPF_F_KPROBE_MULTI_RETURN = (1U << 0) +}; /* link_create.netfilter.flags used in LINK_CREATE command for * BPF_PROG_TYPE_NETFILTER to enable IP packet defragmentation. diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 70da85200695..7abb382dc6c1 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1186,7 +1186,9 @@ enum bpf_perf_event_type { /* link_create.kprobe_multi.flags used in LINK_CREATE command for * BPF_TRACE_KPROBE_MULTI attach type to create return probe. */ -#define BPF_F_KPROBE_MULTI_RETURN (1U << 0) +enum { + BPF_F_KPROBE_MULTI_RETURN = (1U << 0) +}; /* link_create.netfilter.flags used in LINK_CREATE command for * BPF_PROG_TYPE_NETFILTER to enable IP packet defragmentation. 
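For illustration only, a minimal userspace sketch of how this flag is consumed when creating a kprobe_multi return probe link (assumes a libbpf version with kprobe_multi support; the local helper name and arguments are made up for the example):

  #include <bpf/bpf.h>
  #include <linux/bpf.h>

  /* Attach prog_fd as a return probe to the given symbols, returns a link fd. */
  static int attach_kretprobe_multi(int prog_fd, const char **syms, __u32 cnt)
  {
          LIBBPF_OPTS(bpf_link_create_opts, opts,
                  .kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN,
                  .kprobe_multi.syms  = syms,
                  .kprobe_multi.cnt   = cnt,
          );

          return bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &opts);
  }

With the value exposed through the anonymous enum, BPF programs built against a BTF-generated vmlinux.h can reference BPF_F_KPROBE_MULTI_RETURN directly instead of re-defining the macro.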
From patchwork Sun Jul 30 13:41:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333416 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BA466363 for ; Sun, 30 Jul 2023 13:42:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 88CA3C433C8; Sun, 30 Jul 2023 13:42:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724569; bh=+ayR/AaHlRtsV8VdyCrt32TrScBKTQSPwGrIYn2El5A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RTYk8lZ7bY7yRNGAYy3YbWOrhFMvfqJCVTRcrKaXo9nrCKDPVXLsz0J+po0X1dwKS DmxwqztyC+CHS6XOXlEr8tgMwH8D+n4yAzQmkN5bXsfLh/ITZe7vXvyJKdvS7j9lqf BAYZ/n+fdljb9b91F9DyMRGu7x73LezDKuESbYYk+iq2d0aS8IKP91qn4mV2EkYOW7 DtwLDDRVmFJ0VtwV74GG7qO+UjPUqjiCoGDRWRUZAS2mfid2n48OzzjBgBREYQRBX9 x/sXqTBxopxeikdoPsJhJDiES2hkAj7+Fo6Wl8g6bu4yacx1haZ3M1ywQP0qh7O4pF GtVUrUQaK0PJA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 02/28] bpf: Add attach_type checks under bpf_prog_attach_check_attach_type Date: Sun, 30 Jul 2023 15:41:57 +0200 Message-ID: <20230730134223.94496-3-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Add extra attach_type checks from link_create under bpf_prog_attach_check_attach_type. Suggested-by: Andrii Nakryiko Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa Acked-by: Yafang Shao --- kernel/bpf/syscall.c | 120 +++++++++++++++++++------------------------ 1 file changed, 52 insertions(+), 68 deletions(-) diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 7f4e8c357a6a..7c01186d4078 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3656,34 +3656,6 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr) return fd; } -static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, - enum bpf_attach_type attach_type) -{ - switch (prog->type) { - case BPF_PROG_TYPE_CGROUP_SOCK: - case BPF_PROG_TYPE_CGROUP_SOCK_ADDR: - case BPF_PROG_TYPE_CGROUP_SOCKOPT: - case BPF_PROG_TYPE_SK_LOOKUP: - return attach_type == prog->expected_attach_type ? 0 : -EINVAL; - case BPF_PROG_TYPE_CGROUP_SKB: - if (!capable(CAP_NET_ADMIN)) - /* cg-skb progs can be loaded by unpriv user. - * check permissions at attach time. - */ - return -EPERM; - return prog->enforce_expected_attach_type && - prog->expected_attach_type != attach_type ? 
- -EINVAL : 0; - case BPF_PROG_TYPE_KPROBE: - if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI && - attach_type != BPF_TRACE_KPROBE_MULTI) - return -EINVAL; - return 0; - default: - return 0; - } -} - static enum bpf_prog_type attach_type_to_prog_type(enum bpf_attach_type attach_type) { @@ -3750,6 +3722,58 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type) } } +static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, + enum bpf_attach_type attach_type) +{ + enum bpf_prog_type ptype; + + switch (prog->type) { + case BPF_PROG_TYPE_CGROUP_SOCK: + case BPF_PROG_TYPE_CGROUP_SOCK_ADDR: + case BPF_PROG_TYPE_CGROUP_SOCKOPT: + case BPF_PROG_TYPE_SK_LOOKUP: + return attach_type == prog->expected_attach_type ? 0 : -EINVAL; + case BPF_PROG_TYPE_CGROUP_SKB: + if (!capable(CAP_NET_ADMIN)) + /* cg-skb progs can be loaded by unpriv user. + * check permissions at attach time. + */ + return -EPERM; + return prog->enforce_expected_attach_type && + prog->expected_attach_type != attach_type ? + -EINVAL : 0; + case BPF_PROG_TYPE_EXT: + return 0; + case BPF_PROG_TYPE_NETFILTER: + if (attach_type != BPF_NETFILTER) + return -EINVAL; + return 0; + case BPF_PROG_TYPE_PERF_EVENT: + case BPF_PROG_TYPE_TRACEPOINT: + if (attach_type != BPF_PERF_EVENT) + return -EINVAL; + return 0; + case BPF_PROG_TYPE_KPROBE: + if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI && + attach_type != BPF_TRACE_KPROBE_MULTI) + return -EINVAL; + if (attach_type != BPF_PERF_EVENT && + attach_type != BPF_TRACE_KPROBE_MULTI) + return -EINVAL; + return 0; + case BPF_PROG_TYPE_SCHED_CLS: + if (attach_type != BPF_TCX_INGRESS && + attach_type != BPF_TCX_EGRESS) + return -EINVAL; + return 0; + default: + ptype = attach_type_to_prog_type(attach_type); + if (ptype == BPF_PROG_TYPE_UNSPEC || ptype != prog->type) + return -EINVAL; + return 0; + } +} + #define BPF_PROG_ATTACH_LAST_FIELD expected_revision #define BPF_F_ATTACH_MASK_BASE \ @@ -4856,7 +4880,6 @@ static int bpf_map_do_batch(const union bpf_attr *attr, #define BPF_LINK_CREATE_LAST_FIELD link_create.kprobe_multi.cookies static int link_create(union bpf_attr *attr, bpfptr_t uattr) { - enum bpf_prog_type ptype; struct bpf_prog *prog; int ret; @@ -4875,45 +4898,6 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr) if (ret) goto out; - switch (prog->type) { - case BPF_PROG_TYPE_EXT: - break; - case BPF_PROG_TYPE_NETFILTER: - if (attr->link_create.attach_type != BPF_NETFILTER) { - ret = -EINVAL; - goto out; - } - break; - case BPF_PROG_TYPE_PERF_EVENT: - case BPF_PROG_TYPE_TRACEPOINT: - if (attr->link_create.attach_type != BPF_PERF_EVENT) { - ret = -EINVAL; - goto out; - } - break; - case BPF_PROG_TYPE_KPROBE: - if (attr->link_create.attach_type != BPF_PERF_EVENT && - attr->link_create.attach_type != BPF_TRACE_KPROBE_MULTI) { - ret = -EINVAL; - goto out; - } - break; - case BPF_PROG_TYPE_SCHED_CLS: - if (attr->link_create.attach_type != BPF_TCX_INGRESS && - attr->link_create.attach_type != BPF_TCX_EGRESS) { - ret = -EINVAL; - goto out; - } - break; - default: - ptype = attach_type_to_prog_type(attr->link_create.attach_type); - if (ptype == BPF_PROG_TYPE_UNSPEC || ptype != prog->type) { - ret = -EINVAL; - goto out; - } - break; - } - switch (prog->type) { case BPF_PROG_TYPE_CGROUP_SKB: case BPF_PROG_TYPE_CGROUP_SOCK: From patchwork Sun Jul 30 13:41:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333417 X-Patchwork-Delegate: bpf@iogearbox.net 
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5D5D520FF for ; Sun, 30 Jul 2023 13:42:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7276CC433C7; Sun, 30 Jul 2023 13:42:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724579; bh=os+fIg9VaPtQcJxkcjnZXUDl3h/oKx1ZdEndq3GKDgo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DsoB1Jd2YZ2+7Jw18wXNE7jrKx1bS8OF0pCpIGfLtIuwsv+LA3Q4s6PTFPBqH5wFC KLEeTmHMc/e2quNH9aTXTqH4OKTUghdGnHZ2w5jBftABuTQB4hjW1We6RQU9v/ty9v Sb8A7SYuad1Vj7v5AN/SKjmWO+6jwH7PYOvHqO9TpeLA3BXDMO88KNdANqlTiQz5zd 8tJv7KEAbs3IypuF1zvnsxq/DpxqsyeXeJZFN6+WZJJ1lpgw8xJmEQSHucrPx4qmZj igYfCz2EwRlyJImCMd3425JI/DhDB5wsOVndUnFNvGzm/UuJ+1+V3FPtFRJdf4jdJl 96Ym4bguav+Qw== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 03/28] bpf: Add multi uprobe link Date: Sun, 30 Jul 2023 15:41:58 +0200 Message-ID: <20230730134223.94496-4-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding new multi uprobe link that allows to attach bpf program to multiple uprobes. Uprobes to attach are specified via new link_create uprobe_multi union: struct { __aligned_u64 path; __aligned_u64 offsets; __aligned_u64 ref_ctr_offsets; __u32 cnt; __u32 flags; } uprobe_multi; Uprobes are defined for single binary specified in path and multiple calling sites specified in offsets array with optional reference counters specified in ref_ctr_offsets array. All specified arrays have length of 'cnt'. The 'flags' supports single bit for now that marks the uprobe as return probe. 
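For illustration, a raw LINK_CREATE sketch of how userspace fills the new uprobe_multi fields (assumes a kernel with this patch and matching uapi headers; the wrapper function itself is hypothetical):

  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>

  /* Create a uprobe_multi link for 'cnt' offsets within the binary at 'path'. */
  static int uprobe_multi_link_create(int prog_fd, const char *path,
                                      const unsigned long *offsets,
                                      __u32 cnt, __u32 flags)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.link_create.prog_fd = prog_fd;
          attr.link_create.attach_type = BPF_TRACE_UPROBE_MULTI;
          attr.link_create.uprobe_multi.path    = (__u64)(unsigned long)path;
          attr.link_create.uprobe_multi.offsets = (__u64)(unsigned long)offsets;
          attr.link_create.uprobe_multi.cnt     = cnt;
          /* flags is 0 or BPF_F_UPROBE_MULTI_RETURN for a return probe */
          attr.link_create.uprobe_multi.flags   = flags;

          /* returns a link fd on success, -1 with errno set on failure */
          return syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));
  }

The link is supported only on 64bit archs for now, so the pointer-to-u64 casts above are safe.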
Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa Acked-by: Yafang Shao --- include/linux/trace_events.h | 6 + include/uapi/linux/bpf.h | 16 +++ kernel/bpf/syscall.c | 14 +- kernel/trace/bpf_trace.c | 237 +++++++++++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 16 +++ 5 files changed, 286 insertions(+), 3 deletions(-) diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h index e66d04dbe56a..5b85cf18c350 100644 --- a/include/linux/trace_events.h +++ b/include/linux/trace_events.h @@ -752,6 +752,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id, u32 *fd_type, const char **buf, u64 *probe_offset, u64 *probe_addr); int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); +int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); #else static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx) { @@ -798,6 +799,11 @@ bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; } +static inline int +bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) +{ + return -EOPNOTSUPP; +} #endif enum { diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 7abb382dc6c1..f112a0b948f3 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1039,6 +1039,7 @@ enum bpf_attach_type { BPF_NETFILTER, BPF_TCX_INGRESS, BPF_TCX_EGRESS, + BPF_TRACE_UPROBE_MULTI, __MAX_BPF_ATTACH_TYPE }; @@ -1057,6 +1058,7 @@ enum bpf_link_type { BPF_LINK_TYPE_STRUCT_OPS = 9, BPF_LINK_TYPE_NETFILTER = 10, BPF_LINK_TYPE_TCX = 11, + BPF_LINK_TYPE_UPROBE_MULTI = 12, MAX_BPF_LINK_TYPE, }; @@ -1190,6 +1192,13 @@ enum { BPF_F_KPROBE_MULTI_RETURN = (1U << 0) }; +/* link_create.uprobe_multi.flags used in LINK_CREATE command for + * BPF_TRACE_UPROBE_MULTI attach type to create return probe. + */ +enum { + BPF_F_UPROBE_MULTI_RETURN = (1U << 0) +}; + /* link_create.netfilter.flags used in LINK_CREATE command for * BPF_PROG_TYPE_NETFILTER to enable IP packet defragmentation. */ @@ -1626,6 +1635,13 @@ union bpf_attr { }; __u64 expected_revision; } tcx; + struct { + __aligned_u64 path; + __aligned_u64 offsets; + __aligned_u64 ref_ctr_offsets; + __u32 cnt; + __u32 flags; + } uprobe_multi; }; } link_create; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 7c01186d4078..75c83300339e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2815,10 +2815,12 @@ static void bpf_link_free_id(int id) /* Clean up bpf_link and corresponding anon_inode file and FD. After * anon_inode is created, bpf_link can't be just kfree()'d due to deferred - * anon_inode's release() call. This helper marksbpf_link as + * anon_inode's release() call. This helper marks bpf_link as * defunct, releases anon_inode file and puts reserved FD. bpf_prog's refcnt * is not decremented, it's the responsibility of a calling code that failed * to complete bpf_link initialization. + * This helper eventually calls link's dealloc callback, but does not call + * link's release callback. 
*/ void bpf_link_cleanup(struct bpf_link_primer *primer) { @@ -3757,8 +3759,12 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI && attach_type != BPF_TRACE_KPROBE_MULTI) return -EINVAL; + if (prog->expected_attach_type == BPF_TRACE_UPROBE_MULTI && + attach_type != BPF_TRACE_UPROBE_MULTI) + return -EINVAL; if (attach_type != BPF_PERF_EVENT && - attach_type != BPF_TRACE_KPROBE_MULTI) + attach_type != BPF_TRACE_KPROBE_MULTI && + attach_type != BPF_TRACE_UPROBE_MULTI) return -EINVAL; return 0; case BPF_PROG_TYPE_SCHED_CLS: @@ -4954,8 +4960,10 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr) case BPF_PROG_TYPE_KPROBE: if (attr->link_create.attach_type == BPF_PERF_EVENT) ret = bpf_perf_link_attach(attr, prog); - else + else if (attr->link_create.attach_type == BPF_TRACE_KPROBE_MULTI) ret = bpf_kprobe_multi_link_attach(attr, prog); + else if (attr->link_create.attach_type == BPF_TRACE_UPROBE_MULTI) + ret = bpf_uprobe_multi_link_attach(attr, prog); break; default: ret = -EINVAL; diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index c92eb8c6ff08..10284fd46f98 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -23,6 +23,7 @@ #include #include #include +#include #include @@ -2965,3 +2966,239 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx) return 0; } #endif + +#ifdef CONFIG_UPROBES +struct bpf_uprobe_multi_link; + +struct bpf_uprobe { + struct bpf_uprobe_multi_link *link; + loff_t offset; + struct uprobe_consumer consumer; +}; + +struct bpf_uprobe_multi_link { + struct path path; + struct bpf_link link; + u32 cnt; + struct bpf_uprobe *uprobes; +}; + +struct bpf_uprobe_multi_run_ctx { + struct bpf_run_ctx run_ctx; + unsigned long entry_ip; +}; + +static void bpf_uprobe_unregister(struct path *path, struct bpf_uprobe *uprobes, + u32 cnt) +{ + u32 i; + + for (i = 0; i < cnt; i++) { + uprobe_unregister(d_real_inode(path->dentry), uprobes[i].offset, + &uprobes[i].consumer); + } +} + +static void bpf_uprobe_multi_link_release(struct bpf_link *link) +{ + struct bpf_uprobe_multi_link *umulti_link; + + umulti_link = container_of(link, struct bpf_uprobe_multi_link, link); + bpf_uprobe_unregister(&umulti_link->path, umulti_link->uprobes, umulti_link->cnt); +} + +static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link) +{ + struct bpf_uprobe_multi_link *umulti_link; + + umulti_link = container_of(link, struct bpf_uprobe_multi_link, link); + path_put(&umulti_link->path); + kvfree(umulti_link->uprobes); + kfree(umulti_link); +} + +static const struct bpf_link_ops bpf_uprobe_multi_link_lops = { + .release = bpf_uprobe_multi_link_release, + .dealloc = bpf_uprobe_multi_link_dealloc, +}; + +static int uprobe_prog_run(struct bpf_uprobe *uprobe, + unsigned long entry_ip, + struct pt_regs *regs) +{ + struct bpf_uprobe_multi_link *link = uprobe->link; + struct bpf_uprobe_multi_run_ctx run_ctx = { + .entry_ip = entry_ip, + }; + struct bpf_prog *prog = link->link.prog; + bool sleepable = prog->aux->sleepable; + struct bpf_run_ctx *old_run_ctx; + int err = 0; + + might_fault(); + + migrate_disable(); + + if (sleepable) + rcu_read_lock_trace(); + else + rcu_read_lock(); + + old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx); + err = bpf_prog_run(link->link.prog, regs); + bpf_reset_run_ctx(old_run_ctx); + + if (sleepable) + rcu_read_unlock_trace(); + else + rcu_read_unlock(); + + migrate_enable(); + return err; +} + +static int +uprobe_multi_link_handler(struct 
uprobe_consumer *con, struct pt_regs *regs) +{ + struct bpf_uprobe *uprobe; + + uprobe = container_of(con, struct bpf_uprobe, consumer); + return uprobe_prog_run(uprobe, instruction_pointer(regs), regs); +} + +static int +uprobe_multi_link_ret_handler(struct uprobe_consumer *con, unsigned long func, struct pt_regs *regs) +{ + struct bpf_uprobe *uprobe; + + uprobe = container_of(con, struct bpf_uprobe, consumer); + return uprobe_prog_run(uprobe, func, regs); +} + +int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) +{ + struct bpf_uprobe_multi_link *link = NULL; + unsigned long __user *uref_ctr_offsets; + unsigned long *ref_ctr_offsets = NULL; + struct bpf_link_primer link_primer; + struct bpf_uprobe *uprobes = NULL; + unsigned long __user *uoffsets; + void __user *upath; + u32 flags, cnt, i; + struct path path; + char *name; + int err; + + /* no support for 32bit archs yet */ + if (sizeof(u64) != sizeof(void *)) + return -EOPNOTSUPP; + + if (prog->expected_attach_type != BPF_TRACE_UPROBE_MULTI) + return -EINVAL; + + flags = attr->link_create.uprobe_multi.flags; + if (flags & ~BPF_F_UPROBE_MULTI_RETURN) + return -EINVAL; + + /* + * path, offsets and cnt are mandatory, + * ref_ctr_offsets is optional + */ + upath = u64_to_user_ptr(attr->link_create.uprobe_multi.path); + uoffsets = u64_to_user_ptr(attr->link_create.uprobe_multi.offsets); + cnt = attr->link_create.uprobe_multi.cnt; + + if (!upath || !uoffsets || !cnt) + return -EINVAL; + + uref_ctr_offsets = u64_to_user_ptr(attr->link_create.uprobe_multi.ref_ctr_offsets); + + name = strndup_user(upath, PATH_MAX); + if (IS_ERR(name)) { + err = PTR_ERR(name); + return err; + } + + err = kern_path(name, LOOKUP_FOLLOW, &path); + kfree(name); + if (err) + return err; + + if (!d_is_reg(path.dentry)) { + err = -EBADF; + goto error_path_put; + } + + err = -ENOMEM; + + link = kzalloc(sizeof(*link), GFP_KERNEL); + uprobes = kvcalloc(cnt, sizeof(*uprobes), GFP_KERNEL); + + if (!uprobes || !link) + goto error_free; + + if (uref_ctr_offsets) { + ref_ctr_offsets = kvcalloc(cnt, sizeof(*ref_ctr_offsets), GFP_KERNEL); + if (!ref_ctr_offsets) + goto error_free; + } + + for (i = 0; i < cnt; i++) { + if (uref_ctr_offsets && __get_user(ref_ctr_offsets[i], uref_ctr_offsets + i)) { + err = -EFAULT; + goto error_free; + } + if (__get_user(uprobes[i].offset, uoffsets + i)) { + err = -EFAULT; + goto error_free; + } + + uprobes[i].link = link; + + if (flags & BPF_F_UPROBE_MULTI_RETURN) + uprobes[i].consumer.ret_handler = uprobe_multi_link_ret_handler; + else + uprobes[i].consumer.handler = uprobe_multi_link_handler; + } + + link->cnt = cnt; + link->uprobes = uprobes; + link->path = path; + + bpf_link_init(&link->link, BPF_LINK_TYPE_UPROBE_MULTI, + &bpf_uprobe_multi_link_lops, prog); + + err = bpf_link_prime(&link->link, &link_primer); + if (err) + goto error_free; + + for (i = 0; i < cnt; i++) { + err = uprobe_register_refctr(d_real_inode(link->path.dentry), + uprobes[i].offset, + ref_ctr_offsets ? 
ref_ctr_offsets[i] : 0, + &uprobes[i].consumer); + if (err) { + bpf_uprobe_unregister(&path, uprobes, i); + bpf_link_cleanup(&link_primer); + kvfree(ref_ctr_offsets); + return err; + } + } + + kvfree(ref_ctr_offsets); + return bpf_link_settle(&link_primer); + +error_free: + kvfree(ref_ctr_offsets); + kvfree(uprobes); + kfree(link); +error_path_put: + path_put(&path); + return err; +} +#else /* !CONFIG_UPROBES */ +int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) +{ + return -EOPNOTSUPP; +} +#endif /* CONFIG_UPROBES */ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 7abb382dc6c1..f112a0b948f3 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1039,6 +1039,7 @@ enum bpf_attach_type { BPF_NETFILTER, BPF_TCX_INGRESS, BPF_TCX_EGRESS, + BPF_TRACE_UPROBE_MULTI, __MAX_BPF_ATTACH_TYPE }; @@ -1057,6 +1058,7 @@ enum bpf_link_type { BPF_LINK_TYPE_STRUCT_OPS = 9, BPF_LINK_TYPE_NETFILTER = 10, BPF_LINK_TYPE_TCX = 11, + BPF_LINK_TYPE_UPROBE_MULTI = 12, MAX_BPF_LINK_TYPE, }; @@ -1190,6 +1192,13 @@ enum { BPF_F_KPROBE_MULTI_RETURN = (1U << 0) }; +/* link_create.uprobe_multi.flags used in LINK_CREATE command for + * BPF_TRACE_UPROBE_MULTI attach type to create return probe. + */ +enum { + BPF_F_UPROBE_MULTI_RETURN = (1U << 0) +}; + /* link_create.netfilter.flags used in LINK_CREATE command for * BPF_PROG_TYPE_NETFILTER to enable IP packet defragmentation. */ @@ -1626,6 +1635,13 @@ union bpf_attr { }; __u64 expected_revision; } tcx; + struct { + __aligned_u64 path; + __aligned_u64 offsets; + __aligned_u64 ref_ctr_offsets; + __u32 cnt; + __u32 flags; + } uprobe_multi; }; } link_create; From patchwork Sun Jul 30 13:41:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333418 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B9E37363 for ; Sun, 30 Jul 2023 13:43:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7E6B0C433C7; Sun, 30 Jul 2023 13:43:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724589; bh=e+EK2Q98JrhSsz3BORKtpTTOScJ6D/7ZZliGU3056ao=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=g7oU4Hn8f8M3TIJ/fNdNsRxFePU37z59Zc3uURyJJfGbTgEX+YwbbzlyTIIIFKSAd D6K3mtu56NKeFtdmNDkxsV7AVEJUqzkY5qlm15js1jiorNJApj2Nphtwb5gK6lhnfj W6rdKS8d3fC3NOeARjiJhA/+hJMaeBidiWHg/ZQJPlotJTDMFvxa9hE/zs4Jl0ns90 /06McplspkCZj0+CMlfwoWkBQffiUM3MEYYP156Gz8RcSnH5+txSnpm5CAU5m9m1qg nf0hIQ5SmVPR7LPMuJFY1gscVbppCkOayQTVMQF5oKJ4g+1x/vouDVkNswXpI44ecj 6DIhOeDhxWwjw== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 04/28] bpf: Add cookies support for uprobe_multi link Date: Sun, 30 Jul 2023 15:41:59 +0200 Message-ID: <20230730134223.94496-5-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: 
bpf@iogearbox.net Adding support to specify cookies array for uprobe_multi link. The cookies array share indexes and length with other uprobe_multi arrays (offsets/ref_ctr_offsets). The cookies[i] value defines cookie for i-the uprobe and will be returned by bpf_get_attach_cookie helper when called from ebpf program hooked to that specific uprobe. Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa Acked-by: Yafang Shao --- include/uapi/linux/bpf.h | 1 + kernel/bpf/syscall.c | 2 +- kernel/trace/bpf_trace.c | 45 +++++++++++++++++++++++++++++++--- tools/include/uapi/linux/bpf.h | 1 + 4 files changed, 44 insertions(+), 5 deletions(-) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index f112a0b948f3..2b6cd7d1c165 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1639,6 +1639,7 @@ union bpf_attr { __aligned_u64 path; __aligned_u64 offsets; __aligned_u64 ref_ctr_offsets; + __aligned_u64 cookies; __u32 cnt; __u32 flags; } uprobe_multi; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 75c83300339e..98459fb34ff7 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4883,7 +4883,7 @@ static int bpf_map_do_batch(const union bpf_attr *attr, return err; } -#define BPF_LINK_CREATE_LAST_FIELD link_create.kprobe_multi.cookies +#define BPF_LINK_CREATE_LAST_FIELD link_create.uprobe_multi.flags static int link_create(union bpf_attr *attr, bpfptr_t uattr) { struct bpf_prog *prog; diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 10284fd46f98..d73a47bd2bbd 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -87,6 +87,8 @@ static int bpf_btf_printf_prepare(struct btf_ptr *ptr, u32 btf_ptr_size, static u64 bpf_kprobe_multi_cookie(struct bpf_run_ctx *ctx); static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx); +static u64 bpf_uprobe_multi_cookie(struct bpf_run_ctx *ctx); + /** * trace_call_bpf - invoke BPF program * @call: tracepoint event @@ -1099,6 +1101,18 @@ static const struct bpf_func_proto bpf_get_attach_cookie_proto_kmulti = { .arg1_type = ARG_PTR_TO_CTX, }; +BPF_CALL_1(bpf_get_attach_cookie_uprobe_multi, struct pt_regs *, regs) +{ + return bpf_uprobe_multi_cookie(current->bpf_ctx); +} + +static const struct bpf_func_proto bpf_get_attach_cookie_proto_umulti = { + .func = bpf_get_attach_cookie_uprobe_multi, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, +}; + BPF_CALL_1(bpf_get_attach_cookie_trace, void *, ctx) { struct bpf_trace_run_ctx *run_ctx; @@ -1545,9 +1559,11 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) &bpf_get_func_ip_proto_kprobe_multi : &bpf_get_func_ip_proto_kprobe; case BPF_FUNC_get_attach_cookie: - return prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI ? 
- &bpf_get_attach_cookie_proto_kmulti : - &bpf_get_attach_cookie_proto_trace; + if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI) + return &bpf_get_attach_cookie_proto_kmulti; + if (prog->expected_attach_type == BPF_TRACE_UPROBE_MULTI) + return &bpf_get_attach_cookie_proto_umulti; + return &bpf_get_attach_cookie_proto_trace; default: return bpf_tracing_func_proto(func_id, prog); } @@ -2973,6 +2989,7 @@ struct bpf_uprobe_multi_link; struct bpf_uprobe { struct bpf_uprobe_multi_link *link; loff_t offset; + u64 cookie; struct uprobe_consumer consumer; }; @@ -2986,6 +3003,7 @@ struct bpf_uprobe_multi_link { struct bpf_uprobe_multi_run_ctx { struct bpf_run_ctx run_ctx; unsigned long entry_ip; + struct bpf_uprobe *uprobe; }; static void bpf_uprobe_unregister(struct path *path, struct bpf_uprobe *uprobes, @@ -3029,6 +3047,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, struct bpf_uprobe_multi_link *link = uprobe->link; struct bpf_uprobe_multi_run_ctx run_ctx = { .entry_ip = entry_ip, + .uprobe = uprobe, }; struct bpf_prog *prog = link->link.prog; bool sleepable = prog->aux->sleepable; @@ -3075,6 +3094,14 @@ uprobe_multi_link_ret_handler(struct uprobe_consumer *con, unsigned long func, s return uprobe_prog_run(uprobe, func, regs); } +static u64 bpf_uprobe_multi_cookie(struct bpf_run_ctx *ctx) +{ + struct bpf_uprobe_multi_run_ctx *run_ctx; + + run_ctx = container_of(current->bpf_ctx, struct bpf_uprobe_multi_run_ctx, run_ctx); + return run_ctx->uprobe->cookie; +} + int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { struct bpf_uprobe_multi_link *link = NULL; @@ -3083,6 +3110,7 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr struct bpf_link_primer link_primer; struct bpf_uprobe *uprobes = NULL; unsigned long __user *uoffsets; + u64 __user *ucookies; void __user *upath; u32 flags, cnt, i; struct path path; @@ -3102,7 +3130,7 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr /* * path, offsets and cnt are mandatory, - * ref_ctr_offsets is optional + * ref_ctr_offsets and cookies are optional */ upath = u64_to_user_ptr(attr->link_create.uprobe_multi.path); uoffsets = u64_to_user_ptr(attr->link_create.uprobe_multi.offsets); @@ -3112,6 +3140,7 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr return -EINVAL; uref_ctr_offsets = u64_to_user_ptr(attr->link_create.uprobe_multi.ref_ctr_offsets); + ucookies = u64_to_user_ptr(attr->link_create.uprobe_multi.cookies); name = strndup_user(upath, PATH_MAX); if (IS_ERR(name)) { @@ -3144,6 +3173,10 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr } for (i = 0; i < cnt; i++) { + if (ucookies && __get_user(uprobes[i].cookie, ucookies + i)) { + err = -EFAULT; + goto error_free; + } if (uref_ctr_offsets && __get_user(ref_ctr_offsets[i], uref_ctr_offsets + i)) { err = -EFAULT; goto error_free; @@ -3201,4 +3234,8 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr { return -EOPNOTSUPP; } +static u64 bpf_uprobe_multi_cookie(struct bpf_run_ctx *ctx) +{ + return 0; +} #endif /* CONFIG_UPROBES */ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index f112a0b948f3..2b6cd7d1c165 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1639,6 +1639,7 @@ union bpf_attr { __aligned_u64 path; __aligned_u64 offsets; __aligned_u64 ref_ctr_offsets; + __aligned_u64 cookies; __u32 cnt; __u32 flags; } uprobe_multi; 
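For illustration, a sketch of the BPF program side reading its per-uprobe cookie (the uprobe.multi section naming follows the libbpf changes later in this series; the program body is just an example):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("uprobe.multi")
  int trace_entry(struct pt_regs *ctx)
  {
          /* returns the cookies[i] value configured for the uprobe that fired */
          __u64 cookie = bpf_get_attach_cookie(ctx);

          bpf_printk("uprobe cookie %llu", cookie);
          return 0;
  }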
From patchwork Sun Jul 30 13:42:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333419 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E713363 for ; Sun, 30 Jul 2023 13:43:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5F749C433C7; Sun, 30 Jul 2023 13:43:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724599; bh=cc6/LJfyIujGDdOPfY68II/+tjL8Y0P4OlTnqBoCOMw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SRwdsVVfNafB9xJqK6td0UWKeRZdZ1zymvzLiO5d0CA6y4osmuIcGO5eS7SPnnDSP 4criYj1jM3euVK6bkSGX5CvJRghNLnE+AFCoZ0UIT5sMXWEI2gk3SFa0YnSx/0TQkR dQy7nlUpjwti1H2CyAYHnHHOqc7gI/sNIz8CgOUi5yL6Dh6PLh15lMtmG8Zpsv94ok qBkyuG7qOoBp+ppLKQU5aCbYsqUF1J561+QE8Hwglgm0wAq14n3tQtfv9l9erMMYD5 IpqB4jj60bDdBVzrsCY/lbs5rfzGhY20KQmvjdfq8D+G5yrkzaaXIZfvEA5UpF03zm c9RNeJlSAeqFQ== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: Oleg Nesterov , bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 05/28] bpf: Add pid filter support for uprobe_multi link Date: Sun, 30 Jul 2023 15:42:00 +0200 Message-ID: <20230730134223.94496-6-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding support to specify pid for uprobe_multi link and the uprobes are created only for task with given pid value. Using the consumer.filter filter callback for that, so the task gets filtered during the uprobe installation. We still need to check the task during runtime in the uprobe handler, because the handler could get executed if there's another system wide consumer on the same uprobe (thanks Oleg for the insight). 
Cc: Oleg Nesterov Reviewed-by: Oleg Nesterov Signed-off-by: Jiri Olsa --- include/uapi/linux/bpf.h | 1 + kernel/bpf/syscall.c | 2 +- kernel/trace/bpf_trace.c | 33 +++++++++++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + 4 files changed, 36 insertions(+), 1 deletion(-) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 2b6cd7d1c165..558902237338 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1642,6 +1642,7 @@ union bpf_attr { __aligned_u64 cookies; __u32 cnt; __u32 flags; + __u32 pid; } uprobe_multi; }; } link_create; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 98459fb34ff7..54df18b3fc4e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4883,7 +4883,7 @@ static int bpf_map_do_batch(const union bpf_attr *attr, return err; } -#define BPF_LINK_CREATE_LAST_FIELD link_create.uprobe_multi.flags +#define BPF_LINK_CREATE_LAST_FIELD link_create.uprobe_multi.pid static int link_create(union bpf_attr *attr, bpfptr_t uattr) { struct bpf_prog *prog; diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index d73a47bd2bbd..d5f30747378a 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2998,6 +2998,7 @@ struct bpf_uprobe_multi_link { struct bpf_link link; u32 cnt; struct bpf_uprobe *uprobes; + struct task_struct *task; }; struct bpf_uprobe_multi_run_ctx { @@ -3023,6 +3024,8 @@ static void bpf_uprobe_multi_link_release(struct bpf_link *link) umulti_link = container_of(link, struct bpf_uprobe_multi_link, link); bpf_uprobe_unregister(&umulti_link->path, umulti_link->uprobes, umulti_link->cnt); + if (umulti_link->task) + put_task_struct(umulti_link->task); } static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link) @@ -3054,6 +3057,9 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, struct bpf_run_ctx *old_run_ctx; int err = 0; + if (link->task && current != link->task) + return 0; + might_fault(); migrate_disable(); @@ -3076,6 +3082,16 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, return err; } +static bool +uprobe_multi_link_filter(struct uprobe_consumer *con, enum uprobe_filter_ctx ctx, + struct mm_struct *mm) +{ + struct bpf_uprobe *uprobe; + + uprobe = container_of(con, struct bpf_uprobe, consumer); + return uprobe->link->task->mm == mm; +} + static int uprobe_multi_link_handler(struct uprobe_consumer *con, struct pt_regs *regs) { @@ -3109,12 +3125,14 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr unsigned long *ref_ctr_offsets = NULL; struct bpf_link_primer link_primer; struct bpf_uprobe *uprobes = NULL; + struct task_struct *task = NULL; unsigned long __user *uoffsets; u64 __user *ucookies; void __user *upath; u32 flags, cnt, i; struct path path; char *name; + pid_t pid; int err; /* no support for 32bit archs yet */ @@ -3158,6 +3176,15 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr goto error_path_put; } + pid = attr->link_create.uprobe_multi.pid; + if (pid) { + rcu_read_lock(); + task = get_pid_task(find_vpid(pid), PIDTYPE_PID); + rcu_read_unlock(); + if (!task) + goto error_path_put; + } + err = -ENOMEM; link = kzalloc(sizeof(*link), GFP_KERNEL); @@ -3192,11 +3219,15 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr uprobes[i].consumer.ret_handler = uprobe_multi_link_ret_handler; else uprobes[i].consumer.handler = uprobe_multi_link_handler; + + if (pid) + uprobes[i].consumer.filter = uprobe_multi_link_filter; } link->cnt = cnt; link->uprobes = 
uprobes; link->path = path; + link->task = task; bpf_link_init(&link->link, BPF_LINK_TYPE_UPROBE_MULTI, &bpf_uprobe_multi_link_lops, prog); @@ -3225,6 +3256,8 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(ref_ctr_offsets); kvfree(uprobes); kfree(link); + if (task) + put_task_struct(task); error_path_put: path_put(&path); return err; diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 2b6cd7d1c165..558902237338 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1642,6 +1642,7 @@ union bpf_attr { __aligned_u64 cookies; __u32 cnt; __u32 flags; + __u32 pid; } uprobe_multi; }; } link_create; From patchwork Sun Jul 30 13:42:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333420 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6626C20FF for ; Sun, 30 Jul 2023 13:43:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 73C41C433C8; Sun, 30 Jul 2023 13:43:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724609; bh=GlUrnE4DR8dOqOUYw41o7qMLNx1RMy/hwmpDx4wG5zA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Um+IJJxUZbLYwicrl52VrpNA5lExK70GuRVXkCTrfmD42J4GxdrTws9IJgd6N2wMi l/naYEtaETzzIAB7swuvkhgCRt97U4YqOSQgj8WtLijYddkKbSabFnUS4S1JJLvOET qNIs065ZKt8Y8ESgGZ2iRlP+p3bEehhwy144DkdJkKh+14MdzbVFLwzqB/5Tq+JHRw 63vudfQtSR+YEXyT961gyr2MvY/bBwIY3LFzgt4X/5jfpSGdQ2O42QIe4Mu11AwS17 TP4Ll91lN8LIKEVjfGWu19S3vT3fx8GIHxp7b0KLPCHUhNHf9wiMyhpk2sj3NGVIPP XRMjTyyi46JXg== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 06/28] bpf: Add bpf_get_func_ip helper support for uprobe link Date: Sun, 30 Jul 2023 15:42:01 +0200 Message-ID: <20230730134223.94496-7-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding support for bpf_get_func_ip helper being called from ebpf program attached by uprobe_multi link. It returns the ip of the uprobe. 
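For illustration, a short sketch of the helper from the BPF program side (assumes the uprobe.multi program section naming added by the libbpf patches later in this series):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("uprobe.multi")
  int trace_ip(struct pt_regs *ctx)
  {
          /* with this change the helper returns the address of the attached uprobe */
          __u64 ip = bpf_get_func_ip(ctx);

          bpf_printk("hit uprobe at 0x%llx", ip);
          return 0;
  }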
Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- kernel/trace/bpf_trace.c | 33 ++++++++++++++++++++++++++++++--- 1 file changed, 30 insertions(+), 3 deletions(-) diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index d5f30747378a..6554555d84be 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -88,6 +88,7 @@ static u64 bpf_kprobe_multi_cookie(struct bpf_run_ctx *ctx); static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx); static u64 bpf_uprobe_multi_cookie(struct bpf_run_ctx *ctx); +static u64 bpf_uprobe_multi_entry_ip(struct bpf_run_ctx *ctx); /** * trace_call_bpf - invoke BPF program @@ -1101,6 +1102,18 @@ static const struct bpf_func_proto bpf_get_attach_cookie_proto_kmulti = { .arg1_type = ARG_PTR_TO_CTX, }; +BPF_CALL_1(bpf_get_func_ip_uprobe_multi, struct pt_regs *, regs) +{ + return bpf_uprobe_multi_entry_ip(current->bpf_ctx); +} + +static const struct bpf_func_proto bpf_get_func_ip_proto_uprobe_multi = { + .func = bpf_get_func_ip_uprobe_multi, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, +}; + BPF_CALL_1(bpf_get_attach_cookie_uprobe_multi, struct pt_regs *, regs) { return bpf_uprobe_multi_cookie(current->bpf_ctx); @@ -1555,9 +1568,11 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_override_return_proto; #endif case BPF_FUNC_get_func_ip: - return prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI ? - &bpf_get_func_ip_proto_kprobe_multi : - &bpf_get_func_ip_proto_kprobe; + if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI) + return &bpf_get_func_ip_proto_kprobe_multi; + if (prog->expected_attach_type == BPF_TRACE_UPROBE_MULTI) + return &bpf_get_func_ip_proto_uprobe_multi; + return &bpf_get_func_ip_proto_kprobe; case BPF_FUNC_get_attach_cookie: if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI) return &bpf_get_attach_cookie_proto_kmulti; @@ -3110,6 +3125,14 @@ uprobe_multi_link_ret_handler(struct uprobe_consumer *con, unsigned long func, s return uprobe_prog_run(uprobe, func, regs); } +static u64 bpf_uprobe_multi_entry_ip(struct bpf_run_ctx *ctx) +{ + struct bpf_uprobe_multi_run_ctx *run_ctx; + + run_ctx = container_of(current->bpf_ctx, struct bpf_uprobe_multi_run_ctx, run_ctx); + return run_ctx->entry_ip; +} + static u64 bpf_uprobe_multi_cookie(struct bpf_run_ctx *ctx) { struct bpf_uprobe_multi_run_ctx *run_ctx; @@ -3271,4 +3294,8 @@ static u64 bpf_uprobe_multi_cookie(struct bpf_run_ctx *ctx) { return 0; } +static u64 bpf_uprobe_multi_entry_ip(struct bpf_run_ctx *ctx) +{ + return 0; +} #endif /* CONFIG_UPROBES */ From patchwork Sun Jul 30 13:42:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333421 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 836A220FF for ; Sun, 30 Jul 2023 13:43:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 962FFC433C8; Sun, 30 Jul 2023 13:43:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724619; bh=r6c2A5lJbrpkgWz+ZJ8Q0j5JelsZXiEoA5aRhiukCb4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fLubojn8BMkInS/RDkohGhTXny6PXdLDlK6NMibPdekprJuXeCKrSakB9hdPCF21F 
gzOqjndunaodVJHiJtCaTiiNOsfdWVCbusg9EVI3dlakdf9ndF/mGMNon8Ti0Gde13 x8njN7lvGVqF8E8GrDTZgTWP03bVyUhk17rxxZRQ4WM2Jfp/CBtUuxBXdA46k2dTJw kMFea3jkvHA7eufXDqtC6OZiYsB3JLmIhAgGNvl8ClsoEgA6ka/oKtBY9TTkerUFxf 1AeQsQskC/KWOKm5m8VV9APnURbWRqi05jaC8O2CcQjrcnFD7udFSNSKi+o/ct3nUt Q80hh2BihmVyQ== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 07/28] libbpf: Add uprobe_multi attach type and link names Date: Sun, 30 Jul 2023 15:42:02 +0200 Message-ID: <20230730134223.94496-8-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding new uprobe_multi attach type and link names, so the functions can resolve the new values. Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- tools/lib/bpf/libbpf.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 17883f5a44b9..2fc98d857142 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -120,6 +120,7 @@ static const char * const attach_type_name[] = { [BPF_NETFILTER] = "netfilter", [BPF_TCX_INGRESS] = "tcx_ingress", [BPF_TCX_EGRESS] = "tcx_egress", + [BPF_TRACE_UPROBE_MULTI] = "trace_uprobe_multi", }; static const char * const link_type_name[] = { @@ -135,6 +136,7 @@ static const char * const link_type_name[] = { [BPF_LINK_TYPE_STRUCT_OPS] = "struct_ops", [BPF_LINK_TYPE_NETFILTER] = "netfilter", [BPF_LINK_TYPE_TCX] = "tcx", + [BPF_LINK_TYPE_UPROBE_MULTI] = "uprobe_multi", }; static const char * const map_type_name[] = { From patchwork Sun Jul 30 13:42:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333422 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ABACF20FF for ; Sun, 30 Jul 2023 13:43:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BB4E0C433C8; Sun, 30 Jul 2023 13:43:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724629; bh=Kh1yerpYg7R1zz5owwtW1QZoVH4SNkbcGPyc6YQNfyg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ERvZXc+sEJ1QgIhd4EhfDwxKD/VrZBQZN/kxXqIODk9HbzQhkdzd+zw/tWHVpmUf2 fdGfh7qJPOR9bJoFWEe0iiK4428jtCd0VbBF3Y56xe0t9GzPhhUWMQtNgjPlrd6u6j 1UjhVL37e8BnbPFZJFxpkoiQU6q20T8l4RllXrWrrGm+zlr8fsX8xA5/fKEx8wBw0g PtOp6BUkkwKR2ZMP9IyDtFbp5kkKB23zp2ZyP/TD95x5Fav06UVGBiQAVuVRmn/0zp df9EfDfFwcBADkSRcVnTiDTS0LxI8TmI8V0chFWcge9WE4rzrNC7HleTGvXmQw9UL9 etmJ8s5yUzqYA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 08/28] libbpf: Move elf_find_func_offset* functions to elf object Date: Sun, 30 Jul 2023 15:42:03 +0200 Message-ID: <20230730134223.94496-9-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: 
<20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding new elf object that will contain elf related functions. There's no functional change. Suggested-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- tools/lib/bpf/Build | 2 +- tools/lib/bpf/elf.c | 197 ++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.c | 185 ------------------------------ tools/lib/bpf/libbpf_internal.h | 4 + 4 files changed, 202 insertions(+), 186 deletions(-) create mode 100644 tools/lib/bpf/elf.c diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build index b8b0a6369363..2d0c282c8588 100644 --- a/tools/lib/bpf/Build +++ b/tools/lib/bpf/Build @@ -1,4 +1,4 @@ libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \ netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \ btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \ - usdt.o zip.o + usdt.o zip.o elf.o diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c new file mode 100644 index 000000000000..735ef10093ac --- /dev/null +++ b/tools/lib/bpf/elf.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) + +#include +#include +#include +#include + +#include "libbpf_internal.h" +#include "str_error.h" + +#define STRERR_BUFSIZE 128 + +/* Return next ELF section of sh_type after scn, or first of that type if scn is NULL. */ +static Elf_Scn *elf_find_next_scn_by_type(Elf *elf, int sh_type, Elf_Scn *scn) +{ + while ((scn = elf_nextscn(elf, scn)) != NULL) { + GElf_Shdr sh; + + if (!gelf_getshdr(scn, &sh)) + continue; + if (sh.sh_type == sh_type) + return scn; + } + return NULL; +} + +/* Find offset of function name in the provided ELF object. "binary_path" is + * the path to the ELF binary represented by "elf", and only used for error + * reporting matters. "name" matches symbol name or name@@LIB for library + * functions. + */ +long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name) +{ + int i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB }; + bool is_shared_lib, is_name_qualified; + long ret = -ENOENT; + size_t name_len; + GElf_Ehdr ehdr; + + if (!gelf_getehdr(elf, &ehdr)) { + pr_warn("elf: failed to get ehdr from %s: %s\n", binary_path, elf_errmsg(-1)); + ret = -LIBBPF_ERRNO__FORMAT; + goto out; + } + /* for shared lib case, we do not need to calculate relative offset */ + is_shared_lib = ehdr.e_type == ET_DYN; + + name_len = strlen(name); + /* Does name specify "@@LIB"? */ + is_name_qualified = strstr(name, "@@") != NULL; + + /* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if + * a binary is stripped, it may only have SHT_DYNSYM, and a fully-statically + * linked binary may not have SHT_DYMSYM, so absence of a section should not be + * reported as a warning/error. 
+ */ + for (i = 0; i < ARRAY_SIZE(sh_types); i++) { + size_t nr_syms, strtabidx, idx; + Elf_Data *symbols = NULL; + Elf_Scn *scn = NULL; + int last_bind = -1; + const char *sname; + GElf_Shdr sh; + + scn = elf_find_next_scn_by_type(elf, sh_types[i], NULL); + if (!scn) { + pr_debug("elf: failed to find symbol table ELF sections in '%s'\n", + binary_path); + continue; + } + if (!gelf_getshdr(scn, &sh)) + continue; + strtabidx = sh.sh_link; + symbols = elf_getdata(scn, 0); + if (!symbols) { + pr_warn("elf: failed to get symbols for symtab section in '%s': %s\n", + binary_path, elf_errmsg(-1)); + ret = -LIBBPF_ERRNO__FORMAT; + goto out; + } + nr_syms = symbols->d_size / sh.sh_entsize; + + for (idx = 0; idx < nr_syms; idx++) { + int curr_bind; + GElf_Sym sym; + Elf_Scn *sym_scn; + GElf_Shdr sym_sh; + + if (!gelf_getsym(symbols, idx, &sym)) + continue; + + if (GELF_ST_TYPE(sym.st_info) != STT_FUNC) + continue; + + sname = elf_strptr(elf, strtabidx, sym.st_name); + if (!sname) + continue; + + curr_bind = GELF_ST_BIND(sym.st_info); + + /* User can specify func, func@@LIB or func@@LIB_VERSION. */ + if (strncmp(sname, name, name_len) != 0) + continue; + /* ...but we don't want a search for "foo" to match 'foo2" also, so any + * additional characters in sname should be of the form "@@LIB". + */ + if (!is_name_qualified && sname[name_len] != '\0' && sname[name_len] != '@') + continue; + + if (ret >= 0) { + /* handle multiple matches */ + if (last_bind != STB_WEAK && curr_bind != STB_WEAK) { + /* Only accept one non-weak bind. */ + pr_warn("elf: ambiguous match for '%s', '%s' in '%s'\n", + sname, name, binary_path); + ret = -LIBBPF_ERRNO__FORMAT; + goto out; + } else if (curr_bind == STB_WEAK) { + /* already have a non-weak bind, and + * this is a weak bind, so ignore. + */ + continue; + } + } + + /* Transform symbol's virtual address (absolute for + * binaries and relative for shared libs) into file + * offset, which is what kernel is expecting for + * uprobe/uretprobe attachment. + * See Documentation/trace/uprobetracer.rst for more + * details. + * This is done by looking up symbol's containing + * section's header and using it's virtual address + * (sh_addr) and corresponding file offset (sh_offset) + * to transform sym.st_value (virtual address) into + * desired final file offset. + */ + sym_scn = elf_getscn(elf, sym.st_shndx); + if (!sym_scn) + continue; + if (!gelf_getshdr(sym_scn, &sym_sh)) + continue; + + ret = sym.st_value - sym_sh.sh_addr + sym_sh.sh_offset; + last_bind = curr_bind; + } + if (ret > 0) + break; + } + + if (ret > 0) { + pr_debug("elf: symbol address match for '%s' in '%s': 0x%lx\n", name, binary_path, + ret); + } else { + if (ret == 0) { + pr_warn("elf: '%s' is 0 in symtab for '%s': %s\n", name, binary_path, + is_shared_lib ? "should not be 0 in a shared library" : + "try using shared library path instead"); + ret = -ENOENT; + } else { + pr_warn("elf: failed to find symbol '%s' in '%s'\n", name, binary_path); + } + } +out: + return ret; +} + +/* Find offset of function name in ELF object specified by path. "name" matches + * symbol name or name@@LIB for library functions. 
+ */ +long elf_find_func_offset_from_file(const char *binary_path, const char *name) +{ + char errmsg[STRERR_BUFSIZE]; + long ret = -ENOENT; + Elf *elf; + int fd; + + fd = open(binary_path, O_RDONLY | O_CLOEXEC); + if (fd < 0) { + ret = -errno; + pr_warn("failed to open %s: %s\n", binary_path, + libbpf_strerror_r(ret, errmsg, sizeof(errmsg))); + return ret; + } + elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); + if (!elf) { + pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1)); + close(fd); + return -LIBBPF_ERRNO__FORMAT; + } + + ret = elf_find_func_offset(elf, binary_path, name); + elf_end(elf); + close(fd); + return ret; +} + diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 2fc98d857142..1bdda3f8c865 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -10986,191 +10986,6 @@ static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe, return err; } -/* Return next ELF section of sh_type after scn, or first of that type if scn is NULL. */ -static Elf_Scn *elf_find_next_scn_by_type(Elf *elf, int sh_type, Elf_Scn *scn) -{ - while ((scn = elf_nextscn(elf, scn)) != NULL) { - GElf_Shdr sh; - - if (!gelf_getshdr(scn, &sh)) - continue; - if (sh.sh_type == sh_type) - return scn; - } - return NULL; -} - -/* Find offset of function name in the provided ELF object. "binary_path" is - * the path to the ELF binary represented by "elf", and only used for error - * reporting matters. "name" matches symbol name or name@@LIB for library - * functions. - */ -static long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name) -{ - int i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB }; - bool is_shared_lib, is_name_qualified; - long ret = -ENOENT; - size_t name_len; - GElf_Ehdr ehdr; - - if (!gelf_getehdr(elf, &ehdr)) { - pr_warn("elf: failed to get ehdr from %s: %s\n", binary_path, elf_errmsg(-1)); - ret = -LIBBPF_ERRNO__FORMAT; - goto out; - } - /* for shared lib case, we do not need to calculate relative offset */ - is_shared_lib = ehdr.e_type == ET_DYN; - - name_len = strlen(name); - /* Does name specify "@@LIB"? */ - is_name_qualified = strstr(name, "@@") != NULL; - - /* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if - * a binary is stripped, it may only have SHT_DYNSYM, and a fully-statically - * linked binary may not have SHT_DYMSYM, so absence of a section should not be - * reported as a warning/error. - */ - for (i = 0; i < ARRAY_SIZE(sh_types); i++) { - size_t nr_syms, strtabidx, idx; - Elf_Data *symbols = NULL; - Elf_Scn *scn = NULL; - int last_bind = -1; - const char *sname; - GElf_Shdr sh; - - scn = elf_find_next_scn_by_type(elf, sh_types[i], NULL); - if (!scn) { - pr_debug("elf: failed to find symbol table ELF sections in '%s'\n", - binary_path); - continue; - } - if (!gelf_getshdr(scn, &sh)) - continue; - strtabidx = sh.sh_link; - symbols = elf_getdata(scn, 0); - if (!symbols) { - pr_warn("elf: failed to get symbols for symtab section in '%s': %s\n", - binary_path, elf_errmsg(-1)); - ret = -LIBBPF_ERRNO__FORMAT; - goto out; - } - nr_syms = symbols->d_size / sh.sh_entsize; - - for (idx = 0; idx < nr_syms; idx++) { - int curr_bind; - GElf_Sym sym; - Elf_Scn *sym_scn; - GElf_Shdr sym_sh; - - if (!gelf_getsym(symbols, idx, &sym)) - continue; - - if (GELF_ST_TYPE(sym.st_info) != STT_FUNC) - continue; - - sname = elf_strptr(elf, strtabidx, sym.st_name); - if (!sname) - continue; - - curr_bind = GELF_ST_BIND(sym.st_info); - - /* User can specify func, func@@LIB or func@@LIB_VERSION. 
*/ - if (strncmp(sname, name, name_len) != 0) - continue; - /* ...but we don't want a search for "foo" to match 'foo2" also, so any - * additional characters in sname should be of the form "@@LIB". - */ - if (!is_name_qualified && sname[name_len] != '\0' && sname[name_len] != '@') - continue; - - if (ret >= 0) { - /* handle multiple matches */ - if (last_bind != STB_WEAK && curr_bind != STB_WEAK) { - /* Only accept one non-weak bind. */ - pr_warn("elf: ambiguous match for '%s', '%s' in '%s'\n", - sname, name, binary_path); - ret = -LIBBPF_ERRNO__FORMAT; - goto out; - } else if (curr_bind == STB_WEAK) { - /* already have a non-weak bind, and - * this is a weak bind, so ignore. - */ - continue; - } - } - - /* Transform symbol's virtual address (absolute for - * binaries and relative for shared libs) into file - * offset, which is what kernel is expecting for - * uprobe/uretprobe attachment. - * See Documentation/trace/uprobetracer.rst for more - * details. - * This is done by looking up symbol's containing - * section's header and using it's virtual address - * (sh_addr) and corresponding file offset (sh_offset) - * to transform sym.st_value (virtual address) into - * desired final file offset. - */ - sym_scn = elf_getscn(elf, sym.st_shndx); - if (!sym_scn) - continue; - if (!gelf_getshdr(sym_scn, &sym_sh)) - continue; - - ret = sym.st_value - sym_sh.sh_addr + sym_sh.sh_offset; - last_bind = curr_bind; - } - if (ret > 0) - break; - } - - if (ret > 0) { - pr_debug("elf: symbol address match for '%s' in '%s': 0x%lx\n", name, binary_path, - ret); - } else { - if (ret == 0) { - pr_warn("elf: '%s' is 0 in symtab for '%s': %s\n", name, binary_path, - is_shared_lib ? "should not be 0 in a shared library" : - "try using shared library path instead"); - ret = -ENOENT; - } else { - pr_warn("elf: failed to find symbol '%s' in '%s'\n", name, binary_path); - } - } -out: - return ret; -} - -/* Find offset of function name in ELF object specified by path. "name" matches - * symbol name or name@@LIB for library functions. - */ -static long elf_find_func_offset_from_file(const char *binary_path, const char *name) -{ - char errmsg[STRERR_BUFSIZE]; - long ret = -ENOENT; - Elf *elf; - int fd; - - fd = open(binary_path, O_RDONLY | O_CLOEXEC); - if (fd < 0) { - ret = -errno; - pr_warn("failed to open %s: %s\n", binary_path, - libbpf_strerror_r(ret, errmsg, sizeof(errmsg))); - return ret; - } - elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); - if (!elf) { - pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1)); - close(fd); - return -LIBBPF_ERRNO__FORMAT; - } - - ret = elf_find_func_offset(elf, binary_path, name); - elf_end(elf); - close(fd); - return ret; -} - /* Find offset of function name in archive specified by path. Currently * supported are .zip files that do not compress their contents, as used on * Android in the form of APKs, for example. 
"file_name" is the name of the ELF diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index e4d05662a96c..44eb63541507 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -15,6 +15,7 @@ #include #include #include +#include #include "relo_core.h" /* make sure libbpf doesn't use kernel-only integer typedefs */ @@ -577,4 +578,7 @@ static inline bool is_pow_of_2(size_t x) #define PROG_LOAD_ATTEMPTS 5 int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts); +long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name); +long elf_find_func_offset_from_file(const char *binary_path, const char *name); + #endif /* __LIBBPF_LIBBPF_INTERNAL_H */ From patchwork Sun Jul 30 13:42:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333423 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C2D1A20FF for ; Sun, 30 Jul 2023 13:43:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D1FE6C433C7; Sun, 30 Jul 2023 13:43:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724639; bh=cVpXUOUa44HCMe2aXpOuG9UNn1fqrMdKWU3G3AeFE2o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Vk6BTFv0Y+sphcT0292CTPlzm/tng1avxB/MGwlNaNLIZXVhwg2UtnKFOJEE2L2WJ NqEXVDjuJ6kB7kFtwA4VdfollEdv81BI5xsOOtMz+2o1V5Llppyz+0V5gk+ssH2I1h 5s9O1XQL95cx3PYRS4ZFG8yV0EfxhPleQjNYBFf18cTzG9IDC+7UH6TXYjgBYbSMa5 MkQV8a34RoJDH5crXnPiYnXlPebG3xRZ76D44dh5yqxLHAEgPE2b7gQ5ugksmjmi9z w11RA/50Y2Q/41hcU6sSZ/V1si83nI0Ic/4Bn5fsp4sF9LSmVxOyJrj93GObMjEPa6 j3Yg9xLH7bPRg== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 09/28] libbpf: Add elf_open/elf_close functions Date: Sun, 30 Jul 2023 15:42:04 +0200 Message-ID: <20230730134223.94496-10-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding elf_open/elf_close functions and using it in elf_find_func_offset_from_file function. It will be used in following changes to save some common code. 
Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- tools/lib/bpf/elf.c | 61 ++++++++++++++++++++++----------- tools/lib/bpf/libbpf_internal.h | 8 +++++ tools/lib/bpf/usdt.c | 30 +++++----------- 3 files changed, 57 insertions(+), 42 deletions(-) diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c index 735ef10093ac..71363acdeb67 100644 --- a/tools/lib/bpf/elf.c +++ b/tools/lib/bpf/elf.c @@ -10,6 +10,42 @@ #define STRERR_BUFSIZE 128 +int elf_open(const char *binary_path, struct elf_fd *elf_fd) +{ + char errmsg[STRERR_BUFSIZE]; + int fd, ret; + Elf *elf; + + if (elf_version(EV_CURRENT) == EV_NONE) { + pr_warn("elf: failed to init libelf for %s\n", binary_path); + return -LIBBPF_ERRNO__LIBELF; + } + fd = open(binary_path, O_RDONLY | O_CLOEXEC); + if (fd < 0) { + ret = -errno; + pr_warn("elf: failed to open %s: %s\n", binary_path, + libbpf_strerror_r(ret, errmsg, sizeof(errmsg))); + return ret; + } + elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); + if (!elf) { + pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1)); + close(fd); + return -LIBBPF_ERRNO__FORMAT; + } + elf_fd->fd = fd; + elf_fd->elf = elf; + return 0; +} + +void elf_close(struct elf_fd *elf_fd) +{ + if (!elf_fd) + return; + elf_end(elf_fd->elf); + close(elf_fd->fd); +} + /* Return next ELF section of sh_type after scn, or first of that type if scn is NULL. */ static Elf_Scn *elf_find_next_scn_by_type(Elf *elf, int sh_type, Elf_Scn *scn) { @@ -170,28 +206,13 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name) */ long elf_find_func_offset_from_file(const char *binary_path, const char *name) { - char errmsg[STRERR_BUFSIZE]; + struct elf_fd elf_fd; long ret = -ENOENT; - Elf *elf; - int fd; - fd = open(binary_path, O_RDONLY | O_CLOEXEC); - if (fd < 0) { - ret = -errno; - pr_warn("failed to open %s: %s\n", binary_path, - libbpf_strerror_r(ret, errmsg, sizeof(errmsg))); + ret = elf_open(binary_path, &elf_fd); + if (ret) return ret; - } - elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); - if (!elf) { - pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1)); - close(fd); - return -LIBBPF_ERRNO__FORMAT; - } - - ret = elf_find_func_offset(elf, binary_path, name); - elf_end(elf); - close(fd); + ret = elf_find_func_offset(elf_fd.elf, binary_path, name); + elf_close(&elf_fd); return ret; } - diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index 44eb63541507..0bbcd8e6fdc5 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -581,4 +581,12 @@ int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts); long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name); long elf_find_func_offset_from_file(const char *binary_path, const char *name); +struct elf_fd { + Elf *elf; + int fd; +}; + +int elf_open(const char *binary_path, struct elf_fd *elf_fd); +void elf_close(struct elf_fd *elf_fd); + #endif /* __LIBBPF_LIBBPF_INTERNAL_H */ diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c index 37455d00b239..8322337ab65b 100644 --- a/tools/lib/bpf/usdt.c +++ b/tools/lib/bpf/usdt.c @@ -946,32 +946,22 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct const char *usdt_provider, const char *usdt_name, __u64 usdt_cookie) { - int i, fd, err, spec_map_fd, ip_map_fd; + int i, err, spec_map_fd, ip_map_fd; LIBBPF_OPTS(bpf_uprobe_opts, opts); struct hashmap *specs_hash = NULL; struct bpf_link_usdt *link = NULL; struct usdt_target *targets = NULL; + struct 
elf_fd elf_fd; size_t target_cnt; - Elf *elf; spec_map_fd = bpf_map__fd(man->specs_map); ip_map_fd = bpf_map__fd(man->ip_to_spec_id_map); - fd = open(path, O_RDONLY | O_CLOEXEC); - if (fd < 0) { - err = -errno; - pr_warn("usdt: failed to open ELF binary '%s': %d\n", path, err); + err = elf_open(path, &elf_fd); + if (err) return libbpf_err_ptr(err); - } - elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); - if (!elf) { - err = -EBADF; - pr_warn("usdt: failed to parse ELF binary '%s': %s\n", path, elf_errmsg(-1)); - goto err_out; - } - - err = sanity_check_usdt_elf(elf, path); + err = sanity_check_usdt_elf(elf_fd.elf, path); if (err) goto err_out; @@ -984,7 +974,7 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct /* discover USDT in given binary, optionally limiting * activations to a given PID, if pid > 0 */ - err = collect_usdt_targets(man, elf, path, pid, usdt_provider, usdt_name, + err = collect_usdt_targets(man, elf_fd.elf, path, pid, usdt_provider, usdt_name, usdt_cookie, &targets, &target_cnt); if (err <= 0) { err = (err == 0) ? -ENOENT : err; @@ -1069,9 +1059,7 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct free(targets); hashmap__free(specs_hash); - elf_end(elf); - close(fd); - + elf_close(&elf_fd); return &link->link; err_out: @@ -1079,9 +1067,7 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct bpf_link__destroy(&link->link); free(targets); hashmap__free(specs_hash); - if (elf) - elf_end(elf); - close(fd); + elf_close(&elf_fd); return libbpf_err_ptr(err); } From patchwork Sun Jul 30 13:42:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333424 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E08ED363 for ; Sun, 30 Jul 2023 13:44:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 00D94C433C7; Sun, 30 Jul 2023 13:44:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724649; bh=nWABmTYAaW1iin4pc2IXwBpaMyBNOk+m8/amficAyYE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=q2LiCbDZYbVC8uLYPCYZfts2uptfD7842HmIs1rcbxszcx9a6PQQbsheH729MkNy8 dtzOx9v9YUrUS13PfTEmV3Ki5W8BSqhwjTy7HmtkoIoiE5w5G6VGs7zU/CXdj6BhLa 75uzeis85buDbnmsXTeKeeYDH9OYx7llcqZbCu8jpVdSzAwbpq7gtJSUD4/iWL2ICN G4l2D2oN6KDsOjgRsaUsFcKm5dPP1D3Ir90UKuTOsOWVd9ud8VWO4ofAeeCUpJjeNB RgzPkngXdlS5Bt7jQtGu3xwAaPwZqP5jmzoRQ/fumhI/zPKmSKH3Agds6VBLnLVW1D 27KD6+OjcN3iA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 10/28] libbpf: Add elf symbol iterator Date: Sun, 30 Jul 2023 15:42:05 +0200 Message-ID: <20230730134223.94496-11-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding elf symbol iterator object (and some functions) that follow open-coded iterator pattern and some 
functions to ease up iterating elf object symbols. The idea is to iterate single symbol section with: struct elf_sym_iter iter; struct elf_sym *sym; if (elf_sym_iter_new(&iter, elf, binary_path, SHT_DYNSYM)) goto error; while ((sym = elf_sym_iter_next(&iter))) { ... } I considered opening the elf inside the iterator and iterate all symbol sections, but then it gets more complicated wrt user checks for when the next section is processed. Plus side is the we don't need 'exit' function, because caller/user is in charge of that. The returned iterated symbol object from elf_sym_iter_next function is placed inside the struct elf_sym_iter, so no extra allocation or argument is needed. Suggested-by: Andrii Nakryiko Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- tools/lib/bpf/elf.c | 179 ++++++++++++++++++++++++++++---------------- 1 file changed, 115 insertions(+), 64 deletions(-) diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c index 71363acdeb67..8a3f8a725981 100644 --- a/tools/lib/bpf/elf.c +++ b/tools/lib/bpf/elf.c @@ -60,6 +60,104 @@ static Elf_Scn *elf_find_next_scn_by_type(Elf *elf, int sh_type, Elf_Scn *scn) return NULL; } +struct elf_sym { + const char *name; + GElf_Sym sym; + GElf_Shdr sh; +}; + +struct elf_sym_iter { + Elf *elf; + Elf_Data *syms; + size_t nr_syms; + size_t strtabidx; + size_t next_sym_idx; + struct elf_sym sym; + int st_type; +}; + +static int elf_sym_iter_new(struct elf_sym_iter *iter, + Elf *elf, const char *binary_path, + int sh_type, int st_type) +{ + Elf_Scn *scn = NULL; + GElf_Ehdr ehdr; + GElf_Shdr sh; + + memset(iter, 0, sizeof(*iter)); + + if (!gelf_getehdr(elf, &ehdr)) { + pr_warn("elf: failed to get ehdr from %s: %s\n", binary_path, elf_errmsg(-1)); + return -EINVAL; + } + + scn = elf_find_next_scn_by_type(elf, sh_type, NULL); + if (!scn) { + pr_debug("elf: failed to find symbol table ELF sections in '%s'\n", + binary_path); + return -ENOENT; + } + + if (!gelf_getshdr(scn, &sh)) + return -EINVAL; + + iter->strtabidx = sh.sh_link; + iter->syms = elf_getdata(scn, 0); + if (!iter->syms) { + pr_warn("elf: failed to get symbols for symtab section in '%s': %s\n", + binary_path, elf_errmsg(-1)); + return -EINVAL; + } + iter->nr_syms = iter->syms->d_size / sh.sh_entsize; + iter->elf = elf; + iter->st_type = st_type; + return 0; +} + +static struct elf_sym *elf_sym_iter_next(struct elf_sym_iter *iter) +{ + struct elf_sym *ret = &iter->sym; + GElf_Sym *sym = &ret->sym; + const char *name = NULL; + Elf_Scn *sym_scn; + size_t idx; + + for (idx = iter->next_sym_idx; idx < iter->nr_syms; idx++) { + if (!gelf_getsym(iter->syms, idx, sym)) + continue; + if (GELF_ST_TYPE(sym->st_info) != iter->st_type) + continue; + name = elf_strptr(iter->elf, iter->strtabidx, sym->st_name); + if (!name) + continue; + sym_scn = elf_getscn(iter->elf, sym->st_shndx); + if (!sym_scn) + continue; + if (!gelf_getshdr(sym_scn, &ret->sh)) + continue; + + iter->next_sym_idx = idx + 1; + ret->name = name; + return ret; + } + + return NULL; +} + + +/* Transform symbol's virtual address (absolute for binaries and relative + * for shared libs) into file offset, which is what kernel is expecting + * for uprobe/uretprobe attachment. + * See Documentation/trace/uprobetracer.rst for more details. This is done + * by looking up symbol's containing section's header and using iter's virtual + * address (sh_addr) and corresponding file offset (sh_offset) to transform + * sym.st_value (virtual address) into desired final file offset. 
+ */ +static unsigned long elf_sym_offset(struct elf_sym *sym) +{ + return sym->sym.st_value - sym->sh.sh_addr + sym->sh.sh_offset; +} + /* Find offset of function name in the provided ELF object. "binary_path" is * the path to the ELF binary represented by "elf", and only used for error * reporting matters. "name" matches symbol name or name@@LIB for library @@ -91,67 +189,38 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name) * reported as a warning/error. */ for (i = 0; i < ARRAY_SIZE(sh_types); i++) { - size_t nr_syms, strtabidx, idx; - Elf_Data *symbols = NULL; - Elf_Scn *scn = NULL; + struct elf_sym_iter iter; + struct elf_sym *sym; int last_bind = -1; - const char *sname; - GElf_Shdr sh; + int cur_bind; - scn = elf_find_next_scn_by_type(elf, sh_types[i], NULL); - if (!scn) { - pr_debug("elf: failed to find symbol table ELF sections in '%s'\n", - binary_path); + ret = elf_sym_iter_new(&iter, elf, binary_path, sh_types[i], STT_FUNC); + if (ret == -ENOENT) continue; - } - if (!gelf_getshdr(scn, &sh)) - continue; - strtabidx = sh.sh_link; - symbols = elf_getdata(scn, 0); - if (!symbols) { - pr_warn("elf: failed to get symbols for symtab section in '%s': %s\n", - binary_path, elf_errmsg(-1)); - ret = -LIBBPF_ERRNO__FORMAT; + if (ret) goto out; - } - nr_syms = symbols->d_size / sh.sh_entsize; - - for (idx = 0; idx < nr_syms; idx++) { - int curr_bind; - GElf_Sym sym; - Elf_Scn *sym_scn; - GElf_Shdr sym_sh; - - if (!gelf_getsym(symbols, idx, &sym)) - continue; - - if (GELF_ST_TYPE(sym.st_info) != STT_FUNC) - continue; - - sname = elf_strptr(elf, strtabidx, sym.st_name); - if (!sname) - continue; - - curr_bind = GELF_ST_BIND(sym.st_info); + while ((sym = elf_sym_iter_next(&iter))) { /* User can specify func, func@@LIB or func@@LIB_VERSION. */ - if (strncmp(sname, name, name_len) != 0) + if (strncmp(sym->name, name, name_len) != 0) continue; /* ...but we don't want a search for "foo" to match 'foo2" also, so any * additional characters in sname should be of the form "@@LIB". */ - if (!is_name_qualified && sname[name_len] != '\0' && sname[name_len] != '@') + if (!is_name_qualified && sym->name[name_len] != '\0' && sym->name[name_len] != '@') continue; - if (ret >= 0) { + cur_bind = GELF_ST_BIND(sym->sym.st_info); + + if (ret > 0) { /* handle multiple matches */ - if (last_bind != STB_WEAK && curr_bind != STB_WEAK) { + if (last_bind != STB_WEAK && cur_bind != STB_WEAK) { /* Only accept one non-weak bind. */ pr_warn("elf: ambiguous match for '%s', '%s' in '%s'\n", - sname, name, binary_path); + sym->name, name, binary_path); ret = -LIBBPF_ERRNO__FORMAT; goto out; - } else if (curr_bind == STB_WEAK) { + } else if (cur_bind == STB_WEAK) { /* already have a non-weak bind, and * this is a weak bind, so ignore. */ @@ -159,26 +228,8 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name) } } - /* Transform symbol's virtual address (absolute for - * binaries and relative for shared libs) into file - * offset, which is what kernel is expecting for - * uprobe/uretprobe attachment. - * See Documentation/trace/uprobetracer.rst for more - * details. - * This is done by looking up symbol's containing - * section's header and using it's virtual address - * (sh_addr) and corresponding file offset (sh_offset) - * to transform sym.st_value (virtual address) into - * desired final file offset. 
- */ - sym_scn = elf_getscn(elf, sym.st_shndx); - if (!sym_scn) - continue; - if (!gelf_getshdr(sym_scn, &sym_sh)) - continue; - - ret = sym.st_value - sym_sh.sh_addr + sym_sh.sh_offset; - last_bind = curr_bind; + ret = elf_sym_offset(sym); + last_bind = cur_bind; } if (ret > 0) break; From patchwork Sun Jul 30 13:42:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333425 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0E67D363 for ; Sun, 30 Jul 2023 13:44:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DE82DC433C8; Sun, 30 Jul 2023 13:44:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724659; bh=JRRJz8/nLZw6vAx2LW1/td9rYqJfps4sUneIFNzu93M=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Jc+pZhpeuGh5Z3xZKO7SMjkAHgW7GIUDC7g8Mi0PAXiRZdok3/7vWk/eJab8QE34V by/JVv8vS+eagBL2BkpzJOoh/wGTKR546Aw6l7Kg5YPK9IK3y60Cp7a75snoVcRrNn gTeCD/CxLwYzn+0DFb/TNU+tvX1xLn5Kli3yfN3fG4YBL3OuWgrT9/zzE8keLtuZMZ aNNWPgu7y4knTTx3zNFY+Shuv4Rexuo9Zep1Ga2abWLJ0qufk5uGE9DYRgTixu1X56 UjMFle4QlRZDOYiONJcAxoVbmqllZGSBfOCEM3UMOyXyj5hSomL5hI9crJEEHAgC2/ MZWdPURTC6cgA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 11/28] libbpf: Add elf_resolve_syms_offsets function Date: Sun, 30 Jul 2023 15:42:06 +0200 Message-ID: <20230730134223.94496-12-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding elf_resolve_syms_offsets function that looks up offsets for symbols specified in syms array argument. Offsets are returned in allocated array with the 'cnt' size, that needs to be released by the caller. Signed-off-by: Jiri Olsa --- tools/lib/bpf/elf.c | 110 ++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf_internal.h | 2 + 2 files changed, 112 insertions(+) diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c index 8a3f8a725981..8c512641c1d7 100644 --- a/tools/lib/bpf/elf.c +++ b/tools/lib/bpf/elf.c @@ -267,3 +267,113 @@ long elf_find_func_offset_from_file(const char *binary_path, const char *name) elf_close(&elf_fd); return ret; } + +struct symbol { + const char *name; + int bind; + int idx; +}; + +static int symbol_cmp(const void *a, const void *b) +{ + const struct symbol *sym_a = a; + const struct symbol *sym_b = b; + + return strcmp(sym_a->name, sym_b->name); +} + +/* + * Return offsets in @poffsets for symbols specified in @syms array argument. + * On success returns 0 and offsets are returned in allocated array with @cnt + * size, that needs to be released by the caller. 
+ */ +int elf_resolve_syms_offsets(const char *binary_path, int cnt, + const char **syms, unsigned long **poffsets) +{ + int sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB }; + int err = 0, i, cnt_done = 0; + unsigned long *offsets; + struct symbol *symbols; + struct elf_fd elf_fd; + + err = elf_open(binary_path, &elf_fd); + if (err) + return err; + + offsets = calloc(cnt, sizeof(*offsets)); + symbols = calloc(cnt, sizeof(*symbols)); + + if (!offsets || !symbols) { + err = -ENOMEM; + goto out; + } + + for (i = 0; i < cnt; i++) { + symbols[i].name = syms[i]; + symbols[i].idx = i; + } + + qsort(symbols, cnt, sizeof(*symbols), symbol_cmp); + + for (i = 0; i < ARRAY_SIZE(sh_types); i++) { + struct elf_sym_iter iter; + struct elf_sym *sym; + + err = elf_sym_iter_new(&iter, elf_fd.elf, binary_path, sh_types[i], STT_FUNC); + if (err == -ENOENT) + continue; + if (err) + goto out; + + while ((sym = elf_sym_iter_next(&iter))) { + unsigned long sym_offset = elf_sym_offset(sym); + int bind = GELF_ST_BIND(sym->sym.st_info); + struct symbol *found, tmp = { + .name = sym->name, + }; + unsigned long *offset; + + found = bsearch(&tmp, symbols, cnt, sizeof(*symbols), symbol_cmp); + if (!found) + continue; + + offset = &offsets[found->idx]; + if (*offset > 0) { + /* same offset, no problem */ + if (*offset == sym_offset) + continue; + /* handle multiple matches */ + if (found->bind != STB_WEAK && bind != STB_WEAK) { + /* Only accept one non-weak bind. */ + pr_warn("elf: ambiguous match found '%s@%lu' in '%s' previous offset %lu\n", + sym->name, sym_offset, binary_path, *offset); + err = -ESRCH; + goto out; + } else if (bind == STB_WEAK) { + /* already have a non-weak bind, and + * this is a weak bind, so ignore. + */ + continue; + } + } else { + cnt_done++; + } + *offset = sym_offset; + found->bind = bind; + } + } + + if (cnt != cnt_done) { + err = -ENOENT; + goto out; + } + + *poffsets = offsets; + +out: + free(symbols); + if (err) + free(offsets); + elf_close(&elf_fd); + return err; +} diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index 0bbcd8e6fdc5..92851c5f912d 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -589,4 +589,6 @@ struct elf_fd { int elf_open(const char *binary_path, struct elf_fd *elf_fd); void elf_close(struct elf_fd *elf_fd); +int elf_resolve_syms_offsets(const char *binary_path, int cnt, + const char **syms, unsigned long **poffsets); #endif /* __LIBBPF_LIBBPF_INTERNAL_H */ From patchwork Sun Jul 30 13:42:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333426 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AF765363 for ; Sun, 30 Jul 2023 13:44:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3577C433C8; Sun, 30 Jul 2023 13:44:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724669; bh=UFAtJW/kUi20OUAFfN9cQ/cXigtSrtYou4rmtbzMGxs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=U47onJIEOQ/14XJ6vMUfjwX3gz4aWaJQLzg1cas3vNFMCFbyRNgZz3eLgp4hxcWak OCPLG0JP8jqCmrDyUXXyBAHZXzkDACnyjgj3j2oEv19IG0P+SzlYVsS3zHNHzL0zWg te8COFSovf63O/3753Vexz0Vvus+DNYG1ZNcqrab+FQq52/V7aDk2yk5GVz8BKu2cx 
rXHAH9Axw5iyRVYf0lSSUHZp4sFU1IRfoCteKUe2siXHJoDwfqIkr34Q32oMpW+VVQ 6eNHq22zYRVwfLwzDQ8czu+UdBFeL7LdR+4P0N2ZYSKkGIpijq5WMzZloOTD+RgtFS NqDOgtIDNvoWQ== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 12/28] libbpf: Add elf_resolve_pattern_offsets function Date: Sun, 30 Jul 2023 15:42:07 +0200 Message-ID: <20230730134223.94496-13-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding elf_resolve_pattern_offsets function that looks up offsets for symbols specified by pattern argument. The 'pattern' argument allows wildcards (*?' supported). Offsets are returned in allocated array together with its size and needs to be released by the caller. Signed-off-by: Jiri Olsa --- tools/lib/bpf/elf.c | 61 +++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.c | 2 +- tools/lib/bpf/libbpf_internal.h | 5 +++ 3 files changed, 67 insertions(+), 1 deletion(-) diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c index 8c512641c1d7..9d0296c1726a 100644 --- a/tools/lib/bpf/elf.c +++ b/tools/lib/bpf/elf.c @@ -377,3 +377,64 @@ int elf_resolve_syms_offsets(const char *binary_path, int cnt, elf_close(&elf_fd); return err; } + +/* + * Return offsets in @poffsets for symbols specified by @pattern argument. + * On success returns 0 and offsets are returned in allocated @poffsets + * array with the @pctn size, that needs to be released by the caller. + */ +int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern, + unsigned long **poffsets, size_t *pcnt) +{ + int sh_types[2] = { SHT_SYMTAB, SHT_DYNSYM }; + unsigned long *offsets = NULL; + size_t cap = 0, cnt = 0; + struct elf_fd elf_fd; + int err = 0, i; + + err = elf_open(binary_path, &elf_fd); + if (err) + return err; + + for (i = 0; i < ARRAY_SIZE(sh_types); i++) { + struct elf_sym_iter iter; + struct elf_sym *sym; + + err = elf_sym_iter_new(&iter, elf_fd.elf, binary_path, sh_types[i], STT_FUNC); + if (err == -ENOENT) + continue; + if (err) + goto out; + + while ((sym = elf_sym_iter_next(&iter))) { + if (!glob_match(sym->name, pattern)) + continue; + + err = libbpf_ensure_mem((void **) &offsets, &cap, sizeof(*offsets), + cnt + 1); + if (err) + goto out; + + offsets[cnt++] = elf_sym_offset(sym); + } + + /* If we found anything in the first symbol section, + * do not search others to avoid duplicates. 
+ */ + if (cnt) + break; + } + + if (cnt) { + *poffsets = offsets; + *pcnt = cnt; + } else { + err = -ENOENT; + } + +out: + if (err) + free(offsets); + elf_close(&elf_fd); + return err; +} diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 1bdda3f8c865..445953a4d8fc 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -10551,7 +10551,7 @@ struct bpf_link *bpf_program__attach_ksyscall(const struct bpf_program *prog, } /* Adapted from perf/util/string.c */ -static bool glob_match(const char *str, const char *pat) +bool glob_match(const char *str, const char *pat) { while (*str && *pat && *pat != '*') { if (*pat == '?') { /* Matches any single character */ diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index 92851c5f912d..ead551318fec 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -578,6 +578,8 @@ static inline bool is_pow_of_2(size_t x) #define PROG_LOAD_ATTEMPTS 5 int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts); +bool glob_match(const char *str, const char *pat); + long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name); long elf_find_func_offset_from_file(const char *binary_path, const char *name); @@ -591,4 +593,7 @@ void elf_close(struct elf_fd *elf_fd); int elf_resolve_syms_offsets(const char *binary_path, int cnt, const char **syms, unsigned long **poffsets); +int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern, + unsigned long **poffsets, size_t *pcnt); + #endif /* __LIBBPF_LIBBPF_INTERNAL_H */ From patchwork Sun Jul 30 13:42:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333427 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2DCF8363 for ; Sun, 30 Jul 2023 13:44:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DA121C433C8; Sun, 30 Jul 2023 13:44:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724679; bh=kS46N1o+M01ZmsvoKyuB9PlBcUzMc5VPHvHAomf7JsE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UEwl0kBfBsgkiLZ3/wQaXlIlPhTQocPUV9YGmZmvlsN0geH9euEoocQfDzp3bFBPK n2jweCccfLerTl7yk4YxSuIr/xSCPOUGJE1C9VnTrjPEnCelLXTkKtY2gZiGFnfJC4 UiMGYQYg7A0Xo5nAckfzEvKgEx+V1COhtmxltID9ZXdSSSz4AI3J4t5FH1qKKwyMcI suCf3ce6XgwQfYHEmjtK/QlS4V1muhnlNPoYiiOgJ4eyx5NVpv5ewzwE8WeV87mj9A nyKi8iQUfL3rvciksYyvYtEK+S+U3DEU1N2XfxXbE8QyGWxOmekRNWTz78c9VRTgSu M8b9STiwLqcLA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 13/28] libbpf: Add bpf_link_create support for multi uprobes Date: Sun, 30 Jul 2023 15:42:08 +0200 Message-ID: <20230730134223.94496-14-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding new uprobe_multi struct to bpf_link_create_opts object to 
pass multiple uprobe data to link_create attr uapi. Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- tools/lib/bpf/bpf.c | 11 +++++++++++ tools/lib/bpf/bpf.h | 11 ++++++++++- 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index c9b6b311a441..b0f1913763a3 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -767,6 +767,17 @@ int bpf_link_create(int prog_fd, int target_fd, if (!OPTS_ZEROED(opts, kprobe_multi)) return libbpf_err(-EINVAL); break; + case BPF_TRACE_UPROBE_MULTI: + attr.link_create.uprobe_multi.flags = OPTS_GET(opts, uprobe_multi.flags, 0); + attr.link_create.uprobe_multi.cnt = OPTS_GET(opts, uprobe_multi.cnt, 0); + attr.link_create.uprobe_multi.path = ptr_to_u64(OPTS_GET(opts, uprobe_multi.path, 0)); + attr.link_create.uprobe_multi.offsets = ptr_to_u64(OPTS_GET(opts, uprobe_multi.offsets, 0)); + attr.link_create.uprobe_multi.ref_ctr_offsets = ptr_to_u64(OPTS_GET(opts, uprobe_multi.ref_ctr_offsets, 0)); + attr.link_create.uprobe_multi.cookies = ptr_to_u64(OPTS_GET(opts, uprobe_multi.cookies, 0)); + attr.link_create.uprobe_multi.pid = OPTS_GET(opts, uprobe_multi.pid, 0); + if (!OPTS_ZEROED(opts, uprobe_multi)) + return libbpf_err(-EINVAL); + break; case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: case BPF_MODIFY_RETURN: diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index 044a74ffc38a..74c2887cfd24 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -392,6 +392,15 @@ struct bpf_link_create_opts { const unsigned long *addrs; const __u64 *cookies; } kprobe_multi; + struct { + __u32 flags; + __u32 cnt; + const char *path; + const unsigned long *offsets; + const unsigned long *ref_ctr_offsets; + const __u64 *cookies; + __u32 pid; + } uprobe_multi; struct { __u64 cookie; } tracing; @@ -409,7 +418,7 @@ struct bpf_link_create_opts { }; size_t :0; }; -#define bpf_link_create_opts__last_field kprobe_multi.cookies +#define bpf_link_create_opts__last_field uprobe_multi.pid LIBBPF_API int bpf_link_create(int prog_fd, int target_fd, enum bpf_attach_type attach_type, From patchwork Sun Jul 30 13:42:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333428 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B55C420FF for ; Sun, 30 Jul 2023 13:44:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C84ECC433C7; Sun, 30 Jul 2023 13:44:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724689; bh=9/c+0yPNR+ZkL/Wj1aseckWomQIFZdeaDh6R7S4ZRno=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aQfe5p9HWdviWB0qHtlGna90kV3juswwJnpbM/j1dxz6FygPgofJ9whghPYkLcVM6 WU+OhKY+t2l/to/FLWh9A6jRlMuJgzA7Z6AtvVPzCAgu6WQrkuaq1mq/fLZZt9Cu5k GPhx+vxU6MxPhcwmP5ghnIg3A29XXB0xTOPwTML/gMwQ/PAIg7Vbnk2jjLBQ5H6/nK 8Gaz6L2iRRnu9XuHh+JJgB+SLyv2HfLa6Um6yBWBbQFTnWGNJCd/cWUkyt2fmtqsRM SvNRF/SPd75KtirTRiPH/3rIvNz4MfwdumChDIoBvTu8aCET7bOO8E+d+CS5xV6aOX /6zwMIYCXyKng== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 14/28] libbpf: Add 
bpf_program__attach_uprobe_multi function Date: Sun, 30 Jul 2023 15:42:09 +0200 Message-ID: <20230730134223.94496-15-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding bpf_program__attach_uprobe_multi function that allows to attach multiple uprobes with uprobe_multi link. The user can specify uprobes with direct arguments: binary_path/func_pattern/pid or with struct bpf_uprobe_multi_opts opts argument fields: const char **syms; const unsigned long *offsets; const unsigned long *ref_ctr_offsets; const __u64 *cookies; User can specify 2 mutually exclusive set of inputs: 1) use only path/func_pattern/pid arguments 2) use path/pid with allowed combinations of: syms/offsets/ref_ctr_offsets/cookies/cnt - syms and offsets are mutually exclusive - ref_ctr_offsets and cookies are optional Any other usage results in error. Signed-off-by: Jiri Olsa --- tools/lib/bpf/libbpf.c | 114 +++++++++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.h | 51 ++++++++++++++++++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 166 insertions(+) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 445953a4d8fc..eb16d6f307e0 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -11128,6 +11128,120 @@ static int resolve_full_path(const char *file, char *result, size_t result_sz) return -ENOENT; } +struct bpf_link * +bpf_program__attach_uprobe_multi(const struct bpf_program *prog, + pid_t pid, + const char *path, + const char *func_pattern, + const struct bpf_uprobe_multi_opts *opts) +{ + const unsigned long *ref_ctr_offsets = NULL, *offsets = NULL; + LIBBPF_OPTS(bpf_link_create_opts, lopts); + unsigned long *resolved_offsets = NULL; + int err = 0, link_fd, prog_fd; + struct bpf_link *link = NULL; + char errmsg[STRERR_BUFSIZE]; + char full_path[PATH_MAX]; + const __u64 *cookies; + const char **syms; + size_t cnt; + + if (!OPTS_VALID(opts, bpf_uprobe_multi_opts)) + return libbpf_err_ptr(-EINVAL); + + syms = OPTS_GET(opts, syms, NULL); + offsets = OPTS_GET(opts, offsets, NULL); + ref_ctr_offsets = OPTS_GET(opts, ref_ctr_offsets, NULL); + cookies = OPTS_GET(opts, cookies, NULL); + cnt = OPTS_GET(opts, cnt, 0); + + /* + * User can specify 2 mutually exclusive set of inputs: + * + * 1) use only path/func_pattern/pid arguments + * + * 2) use path/pid with allowed combinations of: + * syms/offsets/ref_ctr_offsets/cookies/cnt + * + * - syms and offsets are mutually exclusive + * - ref_ctr_offsets and cookies are optional + * + * Any other usage results in error. 
+ */ + + if (!path) + return libbpf_err_ptr(-EINVAL); + if (!func_pattern && cnt == 0) + return libbpf_err_ptr(-EINVAL); + + if (func_pattern) { + if (syms || offsets || ref_ctr_offsets || cookies || cnt) + return libbpf_err_ptr(-EINVAL); + } else { + if (!!syms == !!offsets) + return libbpf_err_ptr(-EINVAL); + } + + if (func_pattern) { + if (!strchr(path, '/')) { + err = resolve_full_path(path, full_path, sizeof(full_path)); + if (err) { + pr_warn("prog '%s': failed to resolve full path for '%s': %d\n", + prog->name, path, err); + return libbpf_err_ptr(err); + } + path = full_path; + } + + err = elf_resolve_pattern_offsets(path, func_pattern, + &resolved_offsets, &cnt); + if (err < 0) + return libbpf_err_ptr(err); + offsets = resolved_offsets; + } else if (syms) { + err = elf_resolve_syms_offsets(path, cnt, syms, &resolved_offsets); + if (err < 0) + return libbpf_err_ptr(err); + offsets = resolved_offsets; + } + + lopts.uprobe_multi.path = path; + lopts.uprobe_multi.offsets = offsets; + lopts.uprobe_multi.ref_ctr_offsets = ref_ctr_offsets; + lopts.uprobe_multi.cookies = cookies; + lopts.uprobe_multi.cnt = cnt; + lopts.uprobe_multi.flags = OPTS_GET(opts, retprobe, false) ? BPF_F_UPROBE_MULTI_RETURN : 0; + + if (pid == 0) + pid = getpid(); + if (pid > 0) + lopts.uprobe_multi.pid = pid; + + link = calloc(1, sizeof(*link)); + if (!link) { + err = -ENOMEM; + goto error; + } + link->detach = &bpf_link__detach_fd; + + prog_fd = bpf_program__fd(prog); + link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &lopts); + if (link_fd < 0) { + err = -errno; + pr_warn("prog '%s': failed to attach multi-uprobe: %s\n", + prog->name, libbpf_strerror_r(err, errmsg, sizeof(errmsg))); + goto error; + } + link->fd = link_fd; + free(resolved_offsets); + return link; + +error: + free(resolved_offsets); + free(link); + return libbpf_err_ptr(err); +} + LIBBPF_API struct bpf_link * bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid, const char *binary_path, size_t func_offset, diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 55b97b208754..2e3eb3614c40 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -529,6 +529,57 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog, const char *pattern, const struct bpf_kprobe_multi_opts *opts); +struct bpf_uprobe_multi_opts { + /* size of this struct, for forward/backward compatibility */ + size_t sz; + /* array of function symbols to attach to */ + const char **syms; + /* array of function addresses to attach to */ + const unsigned long *offsets; + /* optional, array of associated ref counter offsets */ + const unsigned long *ref_ctr_offsets; + /* optional, array of associated BPF cookies */ + const __u64 *cookies; + /* number of elements in syms/addrs/cookies arrays */ + size_t cnt; + /* create return uprobes */ + bool retprobe; + size_t :0; +}; + +#define bpf_uprobe_multi_opts__last_field retprobe + +/** + * @brief **bpf_program__attach_uprobe_multi()** attaches a BPF program + * to multiple uprobes with uprobe_multi link. 
+ * + * User can specify 2 mutually exclusive set of inputs: + * + * 1) use only path/func_pattern/pid arguments + * + * 2) use path/pid with allowed combinations of + * syms/offsets/ref_ctr_offsets/cookies/cnt + * + * - syms and offsets are mutually exclusive + * - ref_ctr_offsets and cookies are optional + * + * + * @param prog BPF program to attach + * @param pid Process ID to attach the uprobe to, 0 for self (own process), + * -1 for all processes + * @param binary_path Path to binary + * @param func_pattern Regular expression to specify functions to attach + * BPF program to + * @param opts Additional options (see **struct bpf_uprobe_multi_opts**) + * @return 0, on success; negative error code, otherwise + */ +LIBBPF_API struct bpf_link * +bpf_program__attach_uprobe_multi(const struct bpf_program *prog, + pid_t pid, + const char *binary_path, + const char *func_pattern, + const struct bpf_uprobe_multi_opts *opts); + struct bpf_ksyscall_opts { /* size of this struct, for forward/backward compatibility */ size_t sz; diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 9c7538dd5835..841a2f9c6fef 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -398,4 +398,5 @@ LIBBPF_1.3.0 { bpf_prog_detach_opts; bpf_program__attach_netfilter; bpf_program__attach_tcx; + bpf_program__attach_uprobe_multi; } LIBBPF_1.2.0; From patchwork Sun Jul 30 13:42:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333429 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9303F363 for ; Sun, 30 Jul 2023 13:44:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B4217C433C9; Sun, 30 Jul 2023 13:44:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724699; bh=yWRz1XATG7r0/McBzvRGSUGc7uihDixAzOXcFx29c/0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jGzJArpQ3EAL86QqgSiB9TRTCtUNNz0/e+UpMdFhEk1M86ISjB7A+1diA5BMfIeaw OsIw6GYEwVbertQxLoA2tz37T+C8VfzQy4envdoWw4b1ZprxqZy0fv6akVHq6ASWPt T2m9Yp8bg+xWO014oiUGIhemN7ad9APDjQw+NxD4fJDGpHyGRrlJZXd1l+bIoaRAHS atXzNoIvvKopmX5euR2A9CpyggAmO1gejOrCMkKh+qxm9z4RvZEzOYT/XfUm9QWymm UmKWf6Dr40Vd0cEMst+NoFpkOHLdzTNRGEQMv9Y4xnhoIhaYpvxXJ4WNtV4QcJa293 k77FnKcmTE9Dw== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 15/28] libbpf: Add support for u[ret]probe.multi[.s] program sections Date: Sun, 30 Jul 2023 15:42:10 +0200 Message-ID: <20230730134223.94496-16-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding support for several uprobe_multi program sections to allow auto attach of multi_uprobe programs. 
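As an illustration only (the binary path and function pattern below are made up, not part of this patch), a program using one of the new sections looks roughly like:

    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>

    char LICENSE[] SEC("license") = "GPL";

    /* attach to all functions matching the pattern in the given binary */
    SEC("uprobe.multi//proc/self/exe:test_func_*")
    int handle_uprobe(struct pt_regs *ctx)
    {
            return 0;
    }

which the generic (skeleton) auto-attach then routes through the new attach_uprobe_multi handler below.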
Acked-by: Andrii Nakryiko Signed-off-by: Jiri Olsa --- tools/lib/bpf/libbpf.c | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index eb16d6f307e0..d7b1a159d001 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -8683,6 +8683,7 @@ static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_lin static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link); static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_link **link); static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link); +static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link); static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link); static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link); @@ -8698,6 +8699,10 @@ static const struct bpf_sec_def section_defs[] = { SEC_DEF("uretprobe.s+", KPROBE, 0, SEC_SLEEPABLE, attach_uprobe), SEC_DEF("kprobe.multi+", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi), SEC_DEF("kretprobe.multi+", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi), + SEC_DEF("uprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi), + SEC_DEF("uretprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi), + SEC_DEF("uprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi), + SEC_DEF("uretprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi), SEC_DEF("ksyscall+", KPROBE, 0, SEC_NONE, attach_ksyscall), SEC_DEF("kretsyscall+", KPROBE, 0, SEC_NONE, attach_ksyscall), SEC_DEF("usdt+", KPROBE, 0, SEC_NONE, attach_usdt), @@ -10904,6 +10909,37 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru return libbpf_get_error(*link); } +static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link) +{ + char *probe_type = NULL, *binary_path = NULL, *func_name = NULL; + LIBBPF_OPTS(bpf_uprobe_multi_opts, opts); + int n, ret = -EINVAL; + + *link = NULL; + + n = sscanf(prog->sec_name, "%m[^/]/%m[^:]:%ms", + &probe_type, &binary_path, &func_name); + switch (n) { + case 1: + /* handle SEC("u[ret]probe") - format is valid, but auto-attach is impossible. 
*/ + ret = 0; + break; + case 3: + opts.retprobe = strcmp(probe_type, "uretprobe.multi") == 0; + *link = bpf_program__attach_uprobe_multi(prog, -1, binary_path, func_name, &opts); + ret = libbpf_get_error(*link); + break; + default: + pr_warn("prog '%s': invalid format of section definition '%s'\n", prog->name, + prog->sec_name); + break; + } + free(probe_type); + free(binary_path); + free(func_name); + return ret; +} + static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz, const char *binary_path, uint64_t offset) { From patchwork Sun Jul 30 13:42:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333430 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AF7CC363 for ; Sun, 30 Jul 2023 13:45:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3236C433C9; Sun, 30 Jul 2023 13:45:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724709; bh=vmugc6jxgx13ipJJtpDTvjTS10L7/3Asxl8XSeTBeuY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X7gEejaytVig7ck6Xu8oIGZ45lfL9jmKKlnPhSspG6wwha2DSIPKPfW2VwM9f7am9 6VSaVWpH4NcQdgTTEL+HDLXxbqF9HJ6U6DqF1Ki0lONpY2bJSJLL81zT5tTytzRlq2 D/gKN1WvKY2E/27HopKH5aHc7kmN3pgvbcOvCGdx+9zVZPmNLI5tar5yOW+4H/lkYc o5+poH5P95YtQiUE1rzZ0XzRDCBfNJFiPuRD8N5RdgjiwGzy+uYhK1AQRttzvUChl+ +6ImA3yPyOD6rcV9GKLityZO/IyLePBkUuihRZfIsbc8v2w48jxJA31eDzfjUSR/QY CVZRGBh/aLC/A== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 16/28] libbpf: Add uprobe multi link detection Date: Sun, 30 Jul 2023 15:42:11 +0200 Message-ID: <20230730134223.94496-17-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding uprobe-multi link detection. It will be used later in bpf_program__attach_usdt function to check and use uprobe_multi link over standard uprobe links. Signed-off-by: Jiri Olsa --- tools/lib/bpf/libbpf.c | 36 +++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf_internal.h | 2 ++ 2 files changed, 38 insertions(+) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index d7b1a159d001..d1bb6eb7cde2 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -4819,6 +4819,39 @@ static int probe_perf_link(void) return link_fd < 0 && err == -EBADF; } +static int probe_uprobe_multi_link(void) +{ + LIBBPF_OPTS(bpf_prog_load_opts, load_opts, + .expected_attach_type = BPF_TRACE_UPROBE_MULTI, + ); + LIBBPF_OPTS(bpf_link_create_opts, link_opts); + struct bpf_insn insns[] = { + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + int prog_fd, link_fd, err; + unsigned long offset = 0; + + prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL", + insns, ARRAY_SIZE(insns), &load_opts); + if (prog_fd < 0) + return -errno; + + /* Creating uprobe in '/' binary should fail with -EBADF. 
*/ + link_opts.uprobe_multi.path = "/"; + link_opts.uprobe_multi.offsets = &offset; + link_opts.uprobe_multi.cnt = 1; + + link_fd = bpf_link_create(prog_fd, -1, BPF_TRACE_UPROBE_MULTI, &link_opts); + err = -errno; /* close() can clobber errno */ + + if (link_fd >= 0) + close(link_fd); + close(prog_fd); + + return link_fd < 0 && err == -EBADF; +} + static int probe_kern_bpf_cookie(void) { struct bpf_insn insns[] = { @@ -4915,6 +4948,9 @@ static struct kern_feature_desc { [FEAT_SYSCALL_WRAPPER] = { "Kernel using syscall wrapper", probe_kern_syscall_wrapper, }, + [FEAT_UPROBE_MULTI_LINK] = { + "BPF multi-uprobe link support", probe_uprobe_multi_link, + }, }; bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id) diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index ead551318fec..f0f08635adb0 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -355,6 +355,8 @@ enum kern_feature_id { FEAT_BTF_ENUM64, /* Kernel uses syscall wrapper (CONFIG_ARCH_HAS_SYSCALL_WRAPPER) */ FEAT_SYSCALL_WRAPPER, + /* BPF multi-uprobe link support */ + FEAT_UPROBE_MULTI_LINK, __FEAT_CNT, }; From patchwork Sun Jul 30 13:42:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333431 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AC935363 for ; Sun, 30 Jul 2023 13:45:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B732BC433C8; Sun, 30 Jul 2023 13:45:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724719; bh=98Ndh1+7acv+2M46mFmN30bp5/3mwMzmCscmtQHSPhk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Osl7AaAWzCGcOALH647d4Tm8sHVa6QZBkvOYnwfRXwnu+MVIIv2P7CfWxC25V0L/K egcb6y9PwBVT+KlrlTD7fasoOwhxXanwPxrhASr+uRtApQ9PGm+dnX6BuXpNcF4NHu olcIQnoQfn4R6EjjGi045+2yq8L6M+3h51JwEkvWnhXPpfXWMaFkPOki8Sd54I8IZb nO5l2bGJ0PilfrS+WR91pdEnT6lQhxcvsI7TCNquJMDQbUCl2u2NcULOZew8+Tpoag PL8zJtzOvM2oaVPRJmEMkKk1JVI+6BrDEv0NFNX2/6FfeqMQrYnqdXBuTNkeVFhHuR lOCIN87uwxOdA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 17/28] libbpf: Add uprobe multi link support to bpf_program__attach_usdt Date: Sun, 30 Jul 2023 15:42:12 +0200 Message-ID: <20230730134223.94496-18-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding support for usdt_manager_attach_usdt to use uprobe_multi link to attach to usdt probes. The uprobe_multi support is detected before the usdt program is loaded and its expected_attach_type is set accordingly. If uprobe_multi support is detected the usdt_manager_attach_usdt gathers uprobes info and calls bpf_program__attach_uprobe to create all needed uprobes. If uprobe_multi support is not detected the old behaviour stays. 
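From the caller's point of view the bpf_program__attach_usdt interface does not change, e.g. (illustrative binary/provider/name values only):

    link = bpf_program__attach_usdt(prog, -1 /* any pid */,
                                    "/usr/lib/libc.so.6", "libc", "setjmp", NULL);

only whether a single uprobe_multi link or per-target uprobe links get created underneath depends on the detected kernel support.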
Also adding usdt.s program section for sleepable usdt probes. Signed-off-by: Jiri Olsa --- tools/lib/bpf/libbpf.c | 13 ++++++- tools/lib/bpf/usdt.c | 86 ++++++++++++++++++++++++++++++++++-------- 2 files changed, 82 insertions(+), 17 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index d1bb6eb7cde2..e0463e050192 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -367,6 +367,8 @@ enum sec_def_flags { SEC_SLEEPABLE = 8, /* BPF program support non-linear XDP buffer */ SEC_XDP_FRAGS = 16, + /* Setup proper attach type for usdt probes. */ + SEC_USDT = 32, }; struct bpf_sec_def { @@ -6818,6 +6820,10 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog, if (prog->type == BPF_PROG_TYPE_XDP && (def & SEC_XDP_FRAGS)) opts->prog_flags |= BPF_F_XDP_HAS_FRAGS; + /* special check for usdt to use uprobe_multi link */ + if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) + prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI; + if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) { int btf_obj_fd = 0, btf_type_id = 0, err; const char *attach_name; @@ -6886,7 +6892,6 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog if (!insns || !insns_cnt) return -EINVAL; - load_attr.expected_attach_type = prog->expected_attach_type; if (kernel_supports(obj, FEAT_PROG_NAME)) prog_name = prog->name; load_attr.attach_prog_fd = prog->attach_prog_fd; @@ -6922,6 +6927,9 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog insns_cnt = prog->insns_cnt; } + /* allow prog_prepare_load_fn to change expected_attach_type */ + load_attr.expected_attach_type = prog->expected_attach_type; + if (obj->gen_loader) { bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name, license, insns, insns_cnt, &load_attr, @@ -8741,7 +8749,8 @@ static const struct bpf_sec_def section_defs[] = { SEC_DEF("uretprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi), SEC_DEF("ksyscall+", KPROBE, 0, SEC_NONE, attach_ksyscall), SEC_DEF("kretsyscall+", KPROBE, 0, SEC_NONE, attach_ksyscall), - SEC_DEF("usdt+", KPROBE, 0, SEC_NONE, attach_usdt), + SEC_DEF("usdt+", KPROBE, 0, SEC_USDT, attach_usdt), + SEC_DEF("usdt.s+", KPROBE, 0, SEC_USDT | SEC_SLEEPABLE, attach_usdt), SEC_DEF("tc/ingress", SCHED_CLS, BPF_TCX_INGRESS, SEC_NONE), /* alias for tcx */ SEC_DEF("tc/egress", SCHED_CLS, BPF_TCX_EGRESS, SEC_NONE), /* alias for tcx */ SEC_DEF("tcx/ingress", SCHED_CLS, BPF_TCX_INGRESS, SEC_NONE), diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c index 8322337ab65b..93794f01bb67 100644 --- a/tools/lib/bpf/usdt.c +++ b/tools/lib/bpf/usdt.c @@ -250,6 +250,7 @@ struct usdt_manager { bool has_bpf_cookie; bool has_sema_refcnt; + bool has_uprobe_multi; }; struct usdt_manager *usdt_manager_new(struct bpf_object *obj) @@ -284,6 +285,11 @@ struct usdt_manager *usdt_manager_new(struct bpf_object *obj) */ man->has_sema_refcnt = faccessat(AT_FDCWD, ref_ctr_sysfs_path, F_OK, AT_EACCESS) == 0; + /* + * Detect kernel support for uprobe multi link to be used for attaching + * usdt probes. 
+ */ + man->has_uprobe_multi = kernel_supports(obj, FEAT_UPROBE_MULTI_LINK); return man; } @@ -808,6 +814,8 @@ struct bpf_link_usdt { long abs_ip; struct bpf_link *link; } *uprobes; + + struct bpf_link *multi_link; }; static int bpf_link_usdt_detach(struct bpf_link *link) @@ -816,6 +824,9 @@ static int bpf_link_usdt_detach(struct bpf_link *link) struct usdt_manager *man = usdt_link->usdt_man; int i; + bpf_link__destroy(usdt_link->multi_link); + + /* When having multi_link, uprobe_cnt is 0 */ for (i = 0; i < usdt_link->uprobe_cnt; i++) { /* detach underlying uprobe link */ bpf_link__destroy(usdt_link->uprobes[i].link); @@ -946,11 +957,13 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct const char *usdt_provider, const char *usdt_name, __u64 usdt_cookie) { + unsigned long *offsets = NULL, *ref_ctr_offsets = NULL; int i, err, spec_map_fd, ip_map_fd; LIBBPF_OPTS(bpf_uprobe_opts, opts); struct hashmap *specs_hash = NULL; struct bpf_link_usdt *link = NULL; struct usdt_target *targets = NULL; + __u64 *cookies = NULL; struct elf_fd elf_fd; size_t target_cnt; @@ -997,10 +1010,21 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct link->link.detach = &bpf_link_usdt_detach; link->link.dealloc = &bpf_link_usdt_dealloc; - link->uprobes = calloc(target_cnt, sizeof(*link->uprobes)); - if (!link->uprobes) { - err = -ENOMEM; - goto err_out; + if (man->has_uprobe_multi) { + offsets = calloc(target_cnt, sizeof(*offsets)); + cookies = calloc(target_cnt, sizeof(*cookies)); + ref_ctr_offsets = calloc(target_cnt, sizeof(*ref_ctr_offsets)); + + if (!offsets || !ref_ctr_offsets || !cookies) { + err = -ENOMEM; + goto err_out; + } + } else { + link->uprobes = calloc(target_cnt, sizeof(*link->uprobes)); + if (!link->uprobes) { + err = -ENOMEM; + goto err_out; + } } for (i = 0; i < target_cnt; i++) { @@ -1041,20 +1065,48 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct goto err_out; } - opts.ref_ctr_offset = target->sema_off; - opts.bpf_cookie = man->has_bpf_cookie ? spec_id : 0; - uprobe_link = bpf_program__attach_uprobe_opts(prog, pid, path, - target->rel_ip, &opts); - err = libbpf_get_error(uprobe_link); - if (err) { - pr_warn("usdt: failed to attach uprobe #%d for '%s:%s' in '%s': %d\n", - i, usdt_provider, usdt_name, path, err); + if (man->has_uprobe_multi) { + offsets[i] = target->rel_ip; + ref_ctr_offsets[i] = target->sema_off; + cookies[i] = spec_id; + } else { + opts.ref_ctr_offset = target->sema_off; + opts.bpf_cookie = man->has_bpf_cookie ? 
spec_id : 0; + uprobe_link = bpf_program__attach_uprobe_opts(prog, pid, path, + target->rel_ip, &opts); + err = libbpf_get_error(uprobe_link); + if (err) { + pr_warn("usdt: failed to attach uprobe #%d for '%s:%s' in '%s': %d\n", + i, usdt_provider, usdt_name, path, err); + goto err_out; + } + + link->uprobes[i].link = uprobe_link; + link->uprobes[i].abs_ip = target->abs_ip; + link->uprobe_cnt++; + } + } + + if (man->has_uprobe_multi) { + LIBBPF_OPTS(bpf_uprobe_multi_opts, opts_multi, + .ref_ctr_offsets = ref_ctr_offsets, + .offsets = offsets, + .cookies = cookies, + .cnt = target_cnt, + ); + + link->multi_link = bpf_program__attach_uprobe_multi(prog, pid, path, + NULL, &opts_multi); + if (!link->multi_link) { + err = -errno; + pr_warn("usdt: failed to attach uprobe multi for '%s:%s' in '%s': %d\n", + usdt_provider, usdt_name, path, err); goto err_out; } - link->uprobes[i].link = uprobe_link; - link->uprobes[i].abs_ip = target->abs_ip; - link->uprobe_cnt++; + free(offsets); + free(ref_ctr_offsets); + free(cookies); } free(targets); @@ -1063,6 +1115,10 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct return &link->link; err_out: + free(offsets); + free(ref_ctr_offsets); + free(cookies); + if (link) bpf_link__destroy(&link->link); free(targets); From patchwork Sun Jul 30 13:42:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333432 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9D7E420FF for ; Sun, 30 Jul 2023 13:45:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B708FC433C8; Sun, 30 Jul 2023 13:45:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724729; bh=TYvbI4J+iiAairt216XCxpjfpCxYWmTZ2NCuQBTKWbk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=c7M/SEf0qnH11FII5ykOhlczlWJmwtt2fsA0t96P2tnXkOh7RZ516ZXt0Nv5Tr8fr Ah2v2Z08LqsWTsjPSIJe5B1L653cCEcet/Vlz6uoN8hlh+hNWYIBy5l7PUFp7hRKTx l/s6N0wt7y9ei7x15H82eevKBVyXmAt2ijrrQVSh/42/Mg/TMQv/EgcihTijYNXs+B kAXA/+t/u5ldr9GxI2AuQPtkAgnuxwa0lNSctJHCe5t+ByT+c+QJ9WBLHvqrwXS05j Of8lKbDTsQWAwtRpts0AETy5eH9pzc5KT3p4xHI7JIGbIx/xHUo958i1sK1HcDjxol FUnjWz64bLDGQ== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 18/28] selftests/bpf: Move get_time_ns to testing_helpers.h Date: Sun, 30 Jul 2023 15:42:13 +0200 Message-ID: <20230730134223.94496-19-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net We'd like to have single copy of get_time_ns used b bench and test_progs, but we can't just include bench.h, because of conflicting 'struct env' objects. Moving get_time_ns to testing_helpers.h which is being included by both bench and test_progs objects. 
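For reference, the shared helper is a thin CLOCK_MONOTONIC wrapper; a typical call site looks like the sketch below, where do_work() is only a stand-in for whatever is being timed (the bench tests later in this series time skeleton attach and destroy this way).

#include <stdio.h>
#include "testing_helpers.h"

static void time_it(void (*do_work)(void))
{
        __u64 start_ns = get_time_ns();

        do_work();      /* placeholder for the measured operation */

        printf("took %7.3lfs\n", (get_time_ns() - start_ns) / 1000000000.0);
}
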
Signed-off-by: Jiri Olsa --- tools/testing/selftests/bpf/bench.h | 9 --------- .../selftests/bpf/prog_tests/kprobe_multi_test.c | 8 -------- tools/testing/selftests/bpf/testing_helpers.h | 10 ++++++++++ 3 files changed, 10 insertions(+), 17 deletions(-) diff --git a/tools/testing/selftests/bpf/bench.h b/tools/testing/selftests/bpf/bench.h index 7ff32be3d730..68180d8f8558 100644 --- a/tools/testing/selftests/bpf/bench.h +++ b/tools/testing/selftests/bpf/bench.h @@ -81,15 +81,6 @@ void grace_period_latency_basic_stats(struct bench_res res[], int res_cnt, void grace_period_ticks_basic_stats(struct bench_res res[], int res_cnt, struct basic_stats *gp_stat); -static inline __u64 get_time_ns(void) -{ - struct timespec t; - - clock_gettime(CLOCK_MONOTONIC, &t); - - return (u64)t.tv_sec * 1000000000 + t.tv_nsec; -} - static inline void atomic_inc(long *value) { (void)__atomic_add_fetch(value, 1, __ATOMIC_RELAXED); diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c index 2173c4bb555e..179fe300534f 100644 --- a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c @@ -304,14 +304,6 @@ static void test_attach_api_fails(void) kprobe_multi__destroy(skel); } -static inline __u64 get_time_ns(void) -{ - struct timespec t; - - clock_gettime(CLOCK_MONOTONIC, &t); - return (__u64) t.tv_sec * 1000000000 + t.tv_nsec; -} - static size_t symbol_hash(long key, void *ctx __maybe_unused) { return str_hash((const char *) key); diff --git a/tools/testing/selftests/bpf/testing_helpers.h b/tools/testing/selftests/bpf/testing_helpers.h index 5312323881b6..5b7a55136741 100644 --- a/tools/testing/selftests/bpf/testing_helpers.h +++ b/tools/testing/selftests/bpf/testing_helpers.h @@ -7,6 +7,7 @@ #include #include #include +#include int parse_num_list(const char *s, bool **set, int *set_len); __u32 link_info_prog_id(const struct bpf_link *link, struct bpf_link_info *info); @@ -33,4 +34,13 @@ int load_bpf_testmod(bool verbose); int unload_bpf_testmod(bool verbose); int kern_sync_rcu(void); +static inline __u64 get_time_ns(void) +{ + struct timespec t; + + clock_gettime(CLOCK_MONOTONIC, &t); + + return (u64)t.tv_sec * 1000000000 + t.tv_nsec; +} + #endif /* __TESTING_HELPERS_H */ From patchwork Sun Jul 30 13:42:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333433 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DFA6C363 for ; Sun, 30 Jul 2023 13:45:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9FBB4C433C8; Sun, 30 Jul 2023 13:45:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724739; bh=y6vFacDuTSEWaNw3zurZjJfC7xwc5HJ1wjHlw/c1dtw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=i5ulhPl0GN0JyOssYETPIvLwqoM7L7wqKoghfE22VkMXl2JkoZ6pJqtMrMAH66YmR AVPmwAidYW0/pIN3BJhMbrgkUh1D8jyt/kvugiIE/GiP/NaxhZoF8dMDnvD37XOHnf jDmmTLHsUYk1T02oLKy+utVg7ILjMpxdlaZ+aOwgH8a7wlbicodUp1riyyQrL3w4ZR 662fL0/b88QqUmotlkvrkiwC250tYWvkf9AtT8EZApU0qtZeDS3Uhu350Tx0K251W9 9t2/xRc9ro+iN38pHE5KMK9E3xXNT7NmvFOfSML8FdRyQWvaYY8hrgMK6j72akDT0s nPR3GqnqO2sqA== From: Jiri Olsa To: Alexei 
Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 19/28] selftests/bpf: Add uprobe_multi skel test Date: Sun, 30 Jul 2023 15:42:14 +0200 Message-ID: <20230730134223.94496-20-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding uprobe_multi test for skeleton load/attach functions, to test skeleton auto attach for uprobe_multi link. Test that bpf_get_func_ip works properly for uprobe_multi attachment. Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 76 ++++++++++++++++ .../selftests/bpf/progs/uprobe_multi.c | 91 +++++++++++++++++++ 2 files changed, 167 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c create mode 100644 tools/testing/selftests/bpf/progs/uprobe_multi.c diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c new file mode 100644 index 000000000000..5cd1116bbb62 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include "uprobe_multi.skel.h" + +static char test_data[] = "test_data"; + +noinline void uprobe_multi_func_1(void) +{ + asm volatile (""); +} + +noinline void uprobe_multi_func_2(void) +{ + asm volatile (""); +} + +noinline void uprobe_multi_func_3(void) +{ + asm volatile (""); +} + +static void uprobe_multi_test_run(struct uprobe_multi *skel) +{ + skel->bss->uprobe_multi_func_1_addr = (__u64) uprobe_multi_func_1; + skel->bss->uprobe_multi_func_2_addr = (__u64) uprobe_multi_func_2; + skel->bss->uprobe_multi_func_3_addr = (__u64) uprobe_multi_func_3; + + skel->bss->user_ptr = test_data; + skel->bss->pid = getpid(); + + /* trigger all probes */ + uprobe_multi_func_1(); + uprobe_multi_func_2(); + uprobe_multi_func_3(); + + /* + * There are 2 entry and 2 exit probe called for each uprobe_multi_func_[123] + * function and each slepable probe (6) increments uprobe_multi_sleep_result. 
+ */ + ASSERT_EQ(skel->bss->uprobe_multi_func_1_result, 2, "uprobe_multi_func_1_result"); + ASSERT_EQ(skel->bss->uprobe_multi_func_2_result, 2, "uprobe_multi_func_2_result"); + ASSERT_EQ(skel->bss->uprobe_multi_func_3_result, 2, "uprobe_multi_func_3_result"); + + ASSERT_EQ(skel->bss->uretprobe_multi_func_1_result, 2, "uretprobe_multi_func_1_result"); + ASSERT_EQ(skel->bss->uretprobe_multi_func_2_result, 2, "uretprobe_multi_func_2_result"); + ASSERT_EQ(skel->bss->uretprobe_multi_func_3_result, 2, "uretprobe_multi_func_3_result"); + + ASSERT_EQ(skel->bss->uprobe_multi_sleep_result, 6, "uprobe_multi_sleep_result"); +} + +static void test_skel_api(void) +{ + struct uprobe_multi *skel = NULL; + int err; + + skel = uprobe_multi__open_and_load(); + if (!ASSERT_OK_PTR(skel, "uprobe_multi__open_and_load")) + goto cleanup; + + err = uprobe_multi__attach(skel); + if (!ASSERT_OK(err, "uprobe_multi__attach")) + goto cleanup; + + uprobe_multi_test_run(skel); + +cleanup: + uprobe_multi__destroy(skel); +} + +void test_uprobe_multi_test(void) +{ + if (test__start_subtest("skel_api")) + test_skel_api(); +} diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi.c b/tools/testing/selftests/bpf/progs/uprobe_multi.c new file mode 100644 index 000000000000..ab467970256a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/uprobe_multi.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include + +char _license[] SEC("license") = "GPL"; + +__u64 uprobe_multi_func_1_addr = 0; +__u64 uprobe_multi_func_2_addr = 0; +__u64 uprobe_multi_func_3_addr = 0; + +__u64 uprobe_multi_func_1_result = 0; +__u64 uprobe_multi_func_2_result = 0; +__u64 uprobe_multi_func_3_result = 0; + +__u64 uretprobe_multi_func_1_result = 0; +__u64 uretprobe_multi_func_2_result = 0; +__u64 uretprobe_multi_func_3_result = 0; + +__u64 uprobe_multi_sleep_result = 0; + +int pid = 0; +bool test_cookie = false; +void *user_ptr = 0; + +static __always_inline bool verify_sleepable_user_copy(void) +{ + char data[9]; + + bpf_copy_from_user(data, sizeof(data), user_ptr); + return bpf_strncmp(data, sizeof(data), "test_data") == 0; +} + +static void uprobe_multi_check(void *ctx, bool is_return, bool is_sleep) +{ + if (bpf_get_current_pid_tgid() >> 32 != pid) + return; + + __u64 cookie = test_cookie ? 
bpf_get_attach_cookie(ctx) : 0; + __u64 addr = bpf_get_func_ip(ctx); + +#define SET(__var, __addr, __cookie) ({ \ + if (addr == __addr && \ + (!test_cookie || (cookie == __cookie))) \ + __var += 1; \ +}) + + if (is_return) { + SET(uretprobe_multi_func_1_result, uprobe_multi_func_1_addr, 2); + SET(uretprobe_multi_func_2_result, uprobe_multi_func_2_addr, 3); + SET(uretprobe_multi_func_3_result, uprobe_multi_func_3_addr, 1); + } else { + SET(uprobe_multi_func_1_result, uprobe_multi_func_1_addr, 3); + SET(uprobe_multi_func_2_result, uprobe_multi_func_2_addr, 1); + SET(uprobe_multi_func_3_result, uprobe_multi_func_3_addr, 2); + } + +#undef SET + + if (is_sleep && verify_sleepable_user_copy()) + uprobe_multi_sleep_result += 1; +} + +SEC("uprobe.multi//proc/self/exe:uprobe_multi_func_*") +int uprobe(struct pt_regs *ctx) +{ + uprobe_multi_check(ctx, false, false); + return 0; +} + +SEC("uretprobe.multi//proc/self/exe:uprobe_multi_func_*") +int uretprobe(struct pt_regs *ctx) +{ + uprobe_multi_check(ctx, true, false); + return 0; +} + +SEC("uprobe.multi.s//proc/self/exe:uprobe_multi_func_*") +int uprobe_sleep(struct pt_regs *ctx) +{ + uprobe_multi_check(ctx, false, true); + return 0; +} + +SEC("uretprobe.multi.s//proc/self/exe:uprobe_multi_func_*") +int uretprobe_sleep(struct pt_regs *ctx) +{ + uprobe_multi_check(ctx, true, true); + return 0; +} From patchwork Sun Jul 30 13:42:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333434 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 69D70363 for ; Sun, 30 Jul 2023 13:45:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 860DBC433C7; Sun, 30 Jul 2023 13:45:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724749; bh=n1KB+WDu9D681cV0l4zEC5LDZYZ+vt8QEyxrCaRiSh4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=n9rjVCQdmguyrD/50oqDWGJ6HMPdLHURraxXdhhXvwn9/Soa10PR82/RVpI5qsl3n oAB1hmboqMCWMfXuJOZL6FpAfPjzBqaDRYTp41GNNNCl60gopQqU8qgBMVdNkV43KA wwqOZXLYIc2K40yAHOcUk0fdXw590OIOrxl+paypixITjDU7gCNZVuLBnnTymp6QLm 6slnoALDV/yE1ykfqbcNeZ8S1XEZ8X/PmTKIzSpOCIe9feKf9JBshMe0xOLB42ZRi9 JyxFoRj3ldEDPAJHKCDzgPGnw/VvcBBVUtNLUxFpmAtaiSytD/V3P+fp5M9WvTuWPL Lb0eLnMsxmqRA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 20/28] selftests/bpf: Add uprobe_multi api test Date: Sun, 30 Jul 2023 15:42:15 +0200 Message-ID: <20230730134223.94496-21-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding uprobe_multi test for bpf_program__attach_uprobe_multi attach function. Testing attachment using glob patterns and via bpf_uprobe_multi_opts paths/syms fields. 
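Both modes exercised by the test go through the same attach call; roughly, with binary path and symbol names invented for the example and error handling trimmed:

#include <bpf/libbpf.h>

/* 1) glob pattern resolved against the binary's symbol table */
static struct bpf_link *attach_by_pattern(struct bpf_program *prog)
{
        LIBBPF_OPTS(bpf_uprobe_multi_opts, opts);

        return bpf_program__attach_uprobe_multi(prog, -1 /* any pid */,
                                                "/usr/bin/myapp",
                                                "my_func_*", &opts);
}

/* 2) explicit symbol list, no pattern */
static struct bpf_link *attach_by_syms(struct bpf_program *prog)
{
        static const char *syms[] = { "my_func_1", "my_func_2" };
        LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
                .syms = syms,
                .cnt  = 2,
        );

        return bpf_program__attach_uprobe_multi(prog, -1, "/usr/bin/myapp",
                                                NULL /* no pattern */, &opts);
}
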
Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 65 +++++++++++++++++++ 1 file changed, 65 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c index 5cd1116bbb62..2ac8954123e4 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -69,8 +69,73 @@ static void test_skel_api(void) uprobe_multi__destroy(skel); } +static void +test_attach_api(const char *binary, const char *pattern, struct bpf_uprobe_multi_opts *opts) +{ + struct uprobe_multi *skel = NULL; + + skel = uprobe_multi__open_and_load(); + if (!ASSERT_OK_PTR(skel, "uprobe_multi__open_and_load")) + goto cleanup; + + opts->retprobe = false; + skel->links.uprobe = bpf_program__attach_uprobe_multi(skel->progs.uprobe, -1, + binary, pattern, opts); + if (!ASSERT_OK_PTR(skel->links.uprobe, "bpf_program__attach_uprobe_multi")) + goto cleanup; + + opts->retprobe = true; + skel->links.uretprobe = bpf_program__attach_uprobe_multi(skel->progs.uretprobe, -1, + binary, pattern, opts); + if (!ASSERT_OK_PTR(skel->links.uretprobe, "bpf_program__attach_uprobe_multi")) + goto cleanup; + + opts->retprobe = false; + skel->links.uprobe_sleep = bpf_program__attach_uprobe_multi(skel->progs.uprobe_sleep, -1, + binary, pattern, opts); + if (!ASSERT_OK_PTR(skel->links.uprobe_sleep, "bpf_program__attach_uprobe_multi")) + goto cleanup; + + opts->retprobe = true; + skel->links.uretprobe_sleep = bpf_program__attach_uprobe_multi(skel->progs.uretprobe_sleep, + -1, binary, pattern, opts); + if (!ASSERT_OK_PTR(skel->links.uretprobe_sleep, "bpf_program__attach_uprobe_multi")) + goto cleanup; + + uprobe_multi_test_run(skel); + +cleanup: + uprobe_multi__destroy(skel); +} + +static void test_attach_api_pattern(void) +{ + LIBBPF_OPTS(bpf_uprobe_multi_opts, opts); + + test_attach_api("/proc/self/exe", "uprobe_multi_func_*", &opts); + test_attach_api("/proc/self/exe", "uprobe_multi_func_?", &opts); +} + +static void test_attach_api_syms(void) +{ + LIBBPF_OPTS(bpf_uprobe_multi_opts, opts); + const char *syms[3] = { + "uprobe_multi_func_1", + "uprobe_multi_func_2", + "uprobe_multi_func_3", + }; + + opts.syms = syms; + opts.cnt = ARRAY_SIZE(syms); + test_attach_api("/proc/self/exe", NULL, &opts); +} + void test_uprobe_multi_test(void) { if (test__start_subtest("skel_api")) test_skel_api(); + if (test__start_subtest("attach_api_pattern")) + test_attach_api_pattern(); + if (test__start_subtest("attach_api_syms")) + test_attach_api_syms(); } From patchwork Sun Jul 30 13:42:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333435 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 632B4363 for ; Sun, 30 Jul 2023 13:45:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 754C4C433C7; Sun, 30 Jul 2023 13:45:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724759; bh=kaXT5SWYy3HvZaU51TWaPrB5JkawjmLDVE9u8Ixyp60=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=u5pBomrxaqq1pzOj9CxjZkTw5UpSuaTTXrAa+hB27vqLuueht0sYmP2iZPkGMCO98 
DbJyHe5mcAYwtjP2ZhBWrvHbdn8kdtXMmdKS3aWrSiQWKrngYaJfqr5OQm5IKc0SfN YJOvYD+XEM+Q0LlJmtD51prPx8HBoK+FdWcN3XkIqDweJBfSyZxhudFQ5t2xzOoa/o LlWFBUlM/UauU+1pjtmkmi399vSDeJ+JDd6LWul6MLQV3yBMZF5jKvRJmfTDvF03En lb5EbnQR+/W0vAby+4JrjrHXdWDvLHpJ/IYf3ruVVF8zpKfpnF+OB98w9L4GRdm5zg aeMezhIMiJKiQ== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 21/28] selftests/bpf: Add uprobe_multi link test Date: Sun, 30 Jul 2023 15:42:16 +0200 Message-ID: <20230730134223.94496-22-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding uprobe_multi test for bpf_link_create attach function. Testing attachment using the struct bpf_link_create_opts. Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 68 +++++++++++++++++++ 1 file changed, 68 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c index 2ac8954123e4..e460fd4d370d 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -3,6 +3,7 @@ #include #include #include "uprobe_multi.skel.h" +#include "bpf/libbpf_internal.h" static char test_data[] = "test_data"; @@ -130,6 +131,71 @@ static void test_attach_api_syms(void) test_attach_api("/proc/self/exe", NULL, &opts); } +static void test_link_api(void) +{ + int prog_fd, link1_fd = -1, link2_fd = -1, link3_fd = -1, link4_fd = -1; + LIBBPF_OPTS(bpf_link_create_opts, opts); + const char *path = "/proc/self/exe"; + struct uprobe_multi *skel = NULL; + unsigned long *offsets = NULL; + const char *syms[3] = { + "uprobe_multi_func_1", + "uprobe_multi_func_2", + "uprobe_multi_func_3", + }; + int err; + + err = elf_resolve_syms_offsets(path, 3, syms, (unsigned long **) &offsets); + if (!ASSERT_OK(err, "elf_resolve_syms_offsets")) + return; + + opts.uprobe_multi.path = path; + opts.uprobe_multi.offsets = offsets; + opts.uprobe_multi.cnt = ARRAY_SIZE(syms); + + skel = uprobe_multi__open_and_load(); + if (!ASSERT_OK_PTR(skel, "uprobe_multi__open_and_load")) + goto cleanup; + + opts.kprobe_multi.flags = 0; + prog_fd = bpf_program__fd(skel->progs.uprobe); + link1_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); + if (!ASSERT_GE(link1_fd, 0, "link1_fd")) + goto cleanup; + + opts.kprobe_multi.flags = BPF_F_UPROBE_MULTI_RETURN; + prog_fd = bpf_program__fd(skel->progs.uretprobe); + link2_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); + if (!ASSERT_GE(link2_fd, 0, "link2_fd")) + goto cleanup; + + opts.kprobe_multi.flags = 0; + prog_fd = bpf_program__fd(skel->progs.uprobe_sleep); + link3_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); + if (!ASSERT_GE(link3_fd, 0, "link3_fd")) + goto cleanup; + + opts.kprobe_multi.flags = BPF_F_UPROBE_MULTI_RETURN; + prog_fd = bpf_program__fd(skel->progs.uretprobe_sleep); + link4_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); + if (!ASSERT_GE(link4_fd, 0, "link4_fd")) + goto cleanup; + uprobe_multi_test_run(skel); + +cleanup: + if (link1_fd >= 0) + close(link1_fd); + if (link2_fd >= 0) + 
close(link2_fd); + if (link3_fd >= 0) + close(link3_fd); + if (link4_fd >= 0) + close(link4_fd); + + uprobe_multi__destroy(skel); + free(offsets); +} + void test_uprobe_multi_test(void) { if (test__start_subtest("skel_api")) @@ -138,4 +204,6 @@ void test_uprobe_multi_test(void) test_attach_api_pattern(); if (test__start_subtest("attach_api_syms")) test_attach_api_syms(); + if (test__start_subtest("link_api")) + test_link_api(); } From patchwork Sun Jul 30 13:42:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333436 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B3A08363 for ; Sun, 30 Jul 2023 13:46:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B568C433C8; Sun, 30 Jul 2023 13:46:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724769; bh=NIeWJgVrAw17naHEv0PHMRCgDQmSX5RUF9i7t4pViDI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X0imUcNXlsfb/DFRargFS350WzPZtAcV4T7Hw1iHU4fCusVbm7CQjFuHWomIAxFSD 7cFHMKq0RadVMJqBjy67wjExhslUDWhalegp2VBLt+HcvOUMjmTWqzW0C5TmpNSTTn pp2fikxb85bssUZa3IOUfw6MoOuU0fCLzQ9RLXO7/WrYKoM7c/zPjItUQ1gJi4INEU zIZ6Y2ExJDabGZVcSE77AUwoX+3U8bEacBoggQAYGyGKBIDK3HlzmkJW84k2VkVi7N KwRCubWz1fuGNmZ+U1gGku4HOrVd06NGedWXjgILDtu4w89VSqsmqY8uqX+QXayYBP +lF09av6Q5J1w== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 22/28] selftests/bpf: Add uprobe_multi test program Date: Sun, 30 Jul 2023 15:42:17 +0200 Message-ID: <20230730134223.94496-23-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding uprobe_multi test program that defines 50k uprobe_multi_func_* functions and will serve as attach point for uprobe_multi bench test in following patch. 
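The generator is a stack of token-pasting macros; writing out one level of the expansion may make the naming scheme easier to follow:

/*
 * F10(DEF, uprobe_multi_func_, 3)
 * expands to
 *   DEF(uprobe_multi_func_3, 0) ... DEF(uprobe_multi_func_3, 9)
 * which defines
 *   int uprobe_multi_func_30(void) { return 0; }
 *   ...
 *   int uprobe_multi_func_39(void) { return 0; }
 *
 * Each F* layer appends one more digit to the name, so the five
 * F10000(DEF, uprobe_multi_func_, 0..4) invocations define
 * uprobe_multi_func_00000 .. uprobe_multi_func_49999, 50,000 functions in
 * total, all matched by the "uprobe_multi_func_*" glob used by the bench
 * programs.
 */
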
Signed-off-by: Jiri Olsa --- tools/testing/selftests/bpf/Makefile | 5 ++ tools/testing/selftests/bpf/uprobe_multi.c | 67 ++++++++++++++++++++++ 2 files changed, 72 insertions(+) create mode 100644 tools/testing/selftests/bpf/uprobe_multi.c diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index e4e1e6492268..edef49fcd23e 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -585,6 +585,7 @@ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \ $(OUTPUT)/liburandom_read.so \ $(OUTPUT)/xdp_synproxy \ $(OUTPUT)/sign-file \ + $(OUTPUT)/uprobe_multi \ ima_setup.sh \ verify_sig_setup.sh \ $(wildcard progs/btf_dump_test_case_*.c) \ @@ -698,6 +699,10 @@ $(OUTPUT)/veristat: $(OUTPUT)/veristat.o $(call msg,BINARY,,$@) $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@ +$(OUTPUT)/uprobe_multi: uprobe_multi.c + $(call msg,BINARY,,$@) + $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@ + EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \ prog_tests/tests.h map_tests/tests.h verifier/tests.h \ feature bpftool \ diff --git a/tools/testing/selftests/bpf/uprobe_multi.c b/tools/testing/selftests/bpf/uprobe_multi.c new file mode 100644 index 000000000000..d19184103fa3 --- /dev/null +++ b/tools/testing/selftests/bpf/uprobe_multi.c @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include + +#define __PASTE(a, b) a##b +#define PASTE(a, b) __PASTE(a, b) + +#define NAME(name, idx) PASTE(name, idx) + +#define DEF(name, idx) int NAME(name, idx)(void) { return 0; } +#define CALL(name, idx) NAME(name, idx)(); + +#define F(body, name, idx) body(name, idx) + +#define F10(body, name, idx) \ + F(body, PASTE(name, idx), 0) F(body, PASTE(name, idx), 1) F(body, PASTE(name, idx), 2) \ + F(body, PASTE(name, idx), 3) F(body, PASTE(name, idx), 4) F(body, PASTE(name, idx), 5) \ + F(body, PASTE(name, idx), 6) F(body, PASTE(name, idx), 7) F(body, PASTE(name, idx), 8) \ + F(body, PASTE(name, idx), 9) + +#define F100(body, name, idx) \ + F10(body, PASTE(name, idx), 0) F10(body, PASTE(name, idx), 1) F10(body, PASTE(name, idx), 2) \ + F10(body, PASTE(name, idx), 3) F10(body, PASTE(name, idx), 4) F10(body, PASTE(name, idx), 5) \ + F10(body, PASTE(name, idx), 6) F10(body, PASTE(name, idx), 7) F10(body, PASTE(name, idx), 8) \ + F10(body, PASTE(name, idx), 9) + +#define F1000(body, name, idx) \ + F100(body, PASTE(name, idx), 0) F100(body, PASTE(name, idx), 1) F100(body, PASTE(name, idx), 2) \ + F100(body, PASTE(name, idx), 3) F100(body, PASTE(name, idx), 4) F100(body, PASTE(name, idx), 5) \ + F100(body, PASTE(name, idx), 6) F100(body, PASTE(name, idx), 7) F100(body, PASTE(name, idx), 8) \ + F100(body, PASTE(name, idx), 9) + +#define F10000(body, name, idx) \ + F1000(body, PASTE(name, idx), 0) F1000(body, PASTE(name, idx), 1) F1000(body, PASTE(name, idx), 2) \ + F1000(body, PASTE(name, idx), 3) F1000(body, PASTE(name, idx), 4) F1000(body, PASTE(name, idx), 5) \ + F1000(body, PASTE(name, idx), 6) F1000(body, PASTE(name, idx), 7) F1000(body, PASTE(name, idx), 8) \ + F1000(body, PASTE(name, idx), 9) + +F10000(DEF, uprobe_multi_func_, 0) +F10000(DEF, uprobe_multi_func_, 1) +F10000(DEF, uprobe_multi_func_, 2) +F10000(DEF, uprobe_multi_func_, 3) +F10000(DEF, uprobe_multi_func_, 4) + +static int bench(void) +{ + F10000(CALL, uprobe_multi_func_, 0) + F10000(CALL, uprobe_multi_func_, 1) + F10000(CALL, uprobe_multi_func_, 2) + F10000(CALL, uprobe_multi_func_, 3) + F10000(CALL, 
uprobe_multi_func_, 4) + return 0; +} + +int main(int argc, char **argv) +{ + if (argc != 2) + goto error; + + if (!strcmp("bench", argv[1])) + return bench(); + +error: + fprintf(stderr, "usage: %s \n", argv[0]); + return -1; +} From patchwork Sun Jul 30 13:42:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333437 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5C1F3363 for ; Sun, 30 Jul 2023 13:46:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 73CF2C433C8; Sun, 30 Jul 2023 13:46:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724779; bh=QZLZ1UkOLo9H98kcIawBe9i8KRU/GlG4NzN0fHBUPnE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Mxs36hCSab/YixhojbosxTR939NwPmUqS+zMIj2sCY6SWFTJ0GLEezVzEUvZCla/H kvBYJN9UWlxkT1bJKtKn5sU9WAekXpGGNoCj5GNAD7mlvz/B7scNtqGzHZ7wqFqxUg G4Y15NAekcx1iS5zxDRWWPH/wZSN4VmwLOu5URB8/ygvAJb/afaI2kw8b8dIA3pxjn 8cJxvK4K224y6C0Eun2jGLQ3WxBz42D5rXaKhWoeU+Trjx0ec/1+BeZK3a1bzgpYqv aQLPjDIfQ/dApUbmptB7s7bufbV2IKRoG+WJIzQj7TeolxOfUHXex1kkhx7qDGDN4t b2B7GFKudKABA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 23/28] selftests/bpf: Add uprobe_multi bench test Date: Sun, 30 Jul 2023 15:42:18 +0200 Message-ID: <20230730134223.94496-24-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding test that attaches 50k uprobes in uprobe_multi binary. After the attach is done we run the binary and make sure we get proper amount of hits. 
The resulting attach/detach times on my setup: test_bench_attach_uprobe:PASS:uprobe_multi__open 0 nsec test_bench_attach_uprobe:PASS:uprobe_multi__attach 0 nsec test_bench_attach_uprobe:PASS:uprobes_count 0 nsec test_bench_attach_uprobe: attached in 0.346s test_bench_attach_uprobe: detached in 0.419s #262/5 uprobe_multi_test/bench_uprobe:OK Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 40 +++++++++++++++++++ .../selftests/bpf/progs/uprobe_multi_bench.c | 15 +++++++ 2 files changed, 55 insertions(+) create mode 100644 tools/testing/selftests/bpf/progs/uprobe_multi_bench.c diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c index e460fd4d370d..56c2062af1c9 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -3,7 +3,9 @@ #include #include #include "uprobe_multi.skel.h" +#include "uprobe_multi_bench.skel.h" #include "bpf/libbpf_internal.h" +#include "testing_helpers.h" static char test_data[] = "test_data"; @@ -196,6 +198,42 @@ static void test_link_api(void) free(offsets); } +static void test_bench_attach_uprobe(void) +{ + long attach_start_ns, attach_end_ns = 0; + struct uprobe_multi_bench *skel = NULL; + long detach_start_ns, detach_end_ns; + double attach_delta, detach_delta; + int err; + + skel = uprobe_multi_bench__open_and_load(); + if (!ASSERT_OK_PTR(skel, "uprobe_multi__open")) + goto cleanup; + + attach_start_ns = get_time_ns(); + + err = uprobe_multi_bench__attach(skel); + if (!ASSERT_OK(err, "uprobe_multi__attach")) + goto cleanup; + + attach_end_ns = get_time_ns(); + + system("./uprobe_multi bench"); + + ASSERT_EQ(skel->bss->count, 50000, "uprobes_count"); + +cleanup: + detach_start_ns = get_time_ns(); + uprobe_multi_bench__destroy(skel); + detach_end_ns = get_time_ns(); + + attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0; + detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0; + + printf("%s: attached in %7.3lfs\n", __func__, attach_delta); + printf("%s: detached in %7.3lfs\n", __func__, detach_delta); +} + void test_uprobe_multi_test(void) { if (test__start_subtest("skel_api")) @@ -206,4 +244,6 @@ void test_uprobe_multi_test(void) test_attach_api_syms(); if (test__start_subtest("link_api")) test_link_api(); + if (test__start_subtest("bench_uprobe")) + test_bench_attach_uprobe(); } diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi_bench.c b/tools/testing/selftests/bpf/progs/uprobe_multi_bench.c new file mode 100644 index 000000000000..5367f6105e30 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/uprobe_multi_bench.c @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include + +char _license[] SEC("license") = "GPL"; + +int count; + +SEC("uprobe.multi/./uprobe_multi:uprobe_multi_func_*") +int uprobe_bench(struct pt_regs *ctx) +{ + count++; + return 0; +} From patchwork Sun Jul 30 13:42:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333438 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5AA3B363 for ; Sun, 30 Jul 2023 13:46:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with 
ESMTPSA id 65819C433C7; Sun, 30 Jul 2023 13:46:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724789; bh=CUEHN+QEmEay3VqfBoz4mXyiK1GqAL/N/ictXXbupi8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VvwZjI32g1B2AjLbuXTrvx6kA0inRbZzBD+kBVuKcP/QtPE58xlaoh1CSD0bYpjqD 68oLbH7syjhLo04vvzLrcoeUhBzaLnvuUGsRy9dTKlFD2U+0qgWeoVZFOSQZ6qcsh6 Z0RqmeDUI8q0cXX5lYT45iB93YwAK+dhsd2cjL/l3gcNZGQNk+y0bIuSK8JN2YPVX+ e2G5vdKNjzkuI0KMmcuFMvreskJYHjZ+iKW7phHr3UzPETzEMlr+qdzllXylRMwQYT H1btxYisZKmAzX5kNeyB3o3zMf2HQSTgjEespBMXX8B3t4mDhZbOqS4VcobP7CaqTY 2lCq9xrdACJ1w== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 24/28] selftests/bpf: Add uprobe_multi usdt test code Date: Sun, 30 Jul 2023 15:42:19 +0200 Message-ID: <20230730134223.94496-25-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding code in uprobe_multi test binary that defines 50k usdts and will serve as attach point for uprobe_multi usdt bench test in following patch. Signed-off-by: Jiri Olsa --- tools/testing/selftests/bpf/uprobe_multi.c | 24 ++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/tools/testing/selftests/bpf/uprobe_multi.c b/tools/testing/selftests/bpf/uprobe_multi.c index d19184103fa3..850bf2d5a8bc 100644 --- a/tools/testing/selftests/bpf/uprobe_multi.c +++ b/tools/testing/selftests/bpf/uprobe_multi.c @@ -2,6 +2,7 @@ #include #include +#include #define __PASTE(a, b) a##b #define PASTE(a, b) __PASTE(a, b) @@ -53,6 +54,27 @@ static int bench(void) return 0; } +#define PROBE STAP_PROBE(test, usdt); + +#define PROBE10 PROBE PROBE PROBE PROBE PROBE \ + PROBE PROBE PROBE PROBE PROBE +#define PROBE100 PROBE10 PROBE10 PROBE10 PROBE10 PROBE10 \ + PROBE10 PROBE10 PROBE10 PROBE10 PROBE10 +#define PROBE1000 PROBE100 PROBE100 PROBE100 PROBE100 PROBE100 \ + PROBE100 PROBE100 PROBE100 PROBE100 PROBE100 +#define PROBE10000 PROBE1000 PROBE1000 PROBE1000 PROBE1000 PROBE1000 \ + PROBE1000 PROBE1000 PROBE1000 PROBE1000 PROBE1000 + +static int usdt(void) +{ + PROBE10000 + PROBE10000 + PROBE10000 + PROBE10000 + PROBE10000 + return 0; +} + int main(int argc, char **argv) { if (argc != 2) @@ -60,6 +82,8 @@ int main(int argc, char **argv) if (!strcmp("bench", argv[1])) return bench(); + if (!strcmp("usdt", argv[1])) + return usdt(); error: fprintf(stderr, "usage: %s \n", argv[0]); From patchwork Sun Jul 30 13:42:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333439 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7A3BE363 for ; Sun, 30 Jul 2023 13:46:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8C39AC433C7; Sun, 30 Jul 2023 13:46:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724799; 
bh=YPrmyB5tQxEByL25F26LpxKBO3P6pu2hL7l8PcwZwJ4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UEBYuvFBtCgdf40VXXxlPktePICoF6jJoBug4+B/PechPy1OZzPrM2ZqUBfXvgIgr f/YqzliN58wnJBRNNT0NEwKIDTFtpjJxwh/iHI+pi7ak1DALSCsUDIUluXRINubWUs oaEmjBTm8NzPPEHZzIkHs04cdaI5bTYC/B2N4Q9wBroWwrUsAjoBtAaY3HOTSlx1bH /9EiBYo4G1InlaGoe90beSWpTumRqELk5LAPP9CEbMtJqXc+CxkL/fZKy2nw7f0xI0 hZkKxxfW+9972r1HypuU7i0pDqBgjZg1r27+2DxeY5b9b7vSGhn1fXH7sI/v0bu/mT T4CmXUtDNmfJg== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 25/28] selftests/bpf: Add uprobe_multi usdt bench test Date: Sun, 30 Jul 2023 15:42:20 +0200 Message-ID: <20230730134223.94496-26-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding test that attaches 50k usdt probes in usdt_multi binary. After the attach is done we run the binary and make sure we get proper amount of hits. With current uprobes: # perf stat --null ./test_progs -n 254/6 #254/6 uprobe_multi_test/bench_usdt:OK #254 uprobe_multi_test:OK Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED Performance counter stats for './test_progs -n 254/6': 1353.659680562 seconds time elapsed With uprobe_multi link: # perf stat --null ./test_progs -n 254/6 #254/6 uprobe_multi_test/bench_usdt:OK #254 uprobe_multi_test:OK Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED Performance counter stats for './test_progs -n 254/6': 0.322046364 seconds time elapsed Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 39 +++++++++++++++++++ .../selftests/bpf/progs/uprobe_multi_usdt.c | 16 ++++++++ 2 files changed, 55 insertions(+) create mode 100644 tools/testing/selftests/bpf/progs/uprobe_multi_usdt.c diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c index 56c2062af1c9..19a66431a61f 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -4,6 +4,7 @@ #include #include "uprobe_multi.skel.h" #include "uprobe_multi_bench.skel.h" +#include "uprobe_multi_usdt.skel.h" #include "bpf/libbpf_internal.h" #include "testing_helpers.h" @@ -234,6 +235,42 @@ static void test_bench_attach_uprobe(void) printf("%s: detached in %7.3lfs\n", __func__, detach_delta); } +static void test_bench_attach_usdt(void) +{ + struct uprobe_multi_usdt *skel = NULL; + long attach_start_ns, attach_end_ns; + long detach_start_ns, detach_end_ns; + double attach_delta, detach_delta; + + skel = uprobe_multi_usdt__open_and_load(); + if (!ASSERT_OK_PTR(skel, "uprobe_multi__open")) + goto cleanup; + + attach_start_ns = get_time_ns(); + + skel->links.usdt0 = bpf_program__attach_usdt(skel->progs.usdt0, -1, "./uprobe_multi", + "test", "usdt", NULL); + if (!ASSERT_OK_PTR(skel->links.usdt0, "bpf_program__attach_usdt")) + goto cleanup; + + attach_end_ns = get_time_ns(); + + system("./uprobe_multi usdt"); + + ASSERT_EQ(skel->bss->count, 50000, "usdt_count"); + +cleanup: + detach_start_ns = get_time_ns(); + uprobe_multi_usdt__destroy(skel); + detach_end_ns = get_time_ns(); + + attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0; + 
detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0; + + printf("%s: attached in %7.3lfs\n", __func__, attach_delta); + printf("%s: detached in %7.3lfs\n", __func__, detach_delta); +} + void test_uprobe_multi_test(void) { if (test__start_subtest("skel_api")) @@ -246,4 +283,6 @@ void test_uprobe_multi_test(void) test_link_api(); if (test__start_subtest("bench_uprobe")) test_bench_attach_uprobe(); + if (test__start_subtest("bench_usdt")) + test_bench_attach_usdt(); } diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi_usdt.c b/tools/testing/selftests/bpf/progs/uprobe_multi_usdt.c new file mode 100644 index 000000000000..9e1c33d0bd2f --- /dev/null +++ b/tools/testing/selftests/bpf/progs/uprobe_multi_usdt.c @@ -0,0 +1,16 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "vmlinux.h" +#include +#include + +char _license[] SEC("license") = "GPL"; + +int count; + +SEC("usdt") +int usdt0(struct pt_regs *ctx) +{ + count++; + return 0; +} From patchwork Sun Jul 30 13:42:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333440 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6B38C363 for ; Sun, 30 Jul 2023 13:46:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7F20DC433C7; Sun, 30 Jul 2023 13:46:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724809; bh=UVSEgNTzIVPEsp/ldi2kMTzfANM9le6efNH62rxLv0s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=krJ2Ep7fPrNbwMMlrKonNuANwvPMXWtgX35AUzpd2znaPdpJ9o2Gu5rrEWc0OtEE7 2k4957EpbtwL5ZdlYNZO2892nomKMltvPGdnGL90tLO9B3OZoNQaqoNIWZhWh9o45k kShuG/ZFNO8oFTtGRocDeszs9O7cJ+QxaOlfJqVBeNP4t/mcW6jpTmQyMWvd2Qx//N 1tSWDy01v3KPgu9AdcdNuhfN6/IEJmYdBR+dneHzoqzNPIzuCkB2m4GXZ6gLRB2N6J /+IHGviDq59h9OWFyermnGO5r9bu1mq596e5XVle0F9RpT0rMCE0B0LGvKkuUxa1K/ WK+MWi9oxDwXg== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 26/28] selftests/bpf: Add uprobe_multi cookie test Date: Sun, 30 Jul 2023 15:42:21 +0200 Message-ID: <20230730134223.94496-27-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Adding test for cookies setup/retrieval in uprobe_link uprobes and making sure bpf_get_attach_cookie works properly. 
Signed-off-by: Jiri Olsa --- .../selftests/bpf/prog_tests/bpf_cookie.c | 78 +++++++++++++++++++ 1 file changed, 78 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c index 26b2d1bffdfd..1454cebc262b 100644 --- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c @@ -11,6 +11,7 @@ #include #include "test_bpf_cookie.skel.h" #include "kprobe_multi.skel.h" +#include "uprobe_multi.skel.h" /* uprobe attach point */ static noinline void trigger_func(void) @@ -239,6 +240,81 @@ static void kprobe_multi_attach_api_subtest(void) bpf_link__destroy(link1); kprobe_multi__destroy(skel); } + +/* defined in prog_tests/uprobe_multi_test.c */ +void uprobe_multi_func_1(void); +void uprobe_multi_func_2(void); +void uprobe_multi_func_3(void); + +static void uprobe_multi_test_run(struct uprobe_multi *skel) +{ + skel->bss->uprobe_multi_func_1_addr = (__u64) uprobe_multi_func_1; + skel->bss->uprobe_multi_func_2_addr = (__u64) uprobe_multi_func_2; + skel->bss->uprobe_multi_func_3_addr = (__u64) uprobe_multi_func_3; + + skel->bss->pid = getpid(); + skel->bss->test_cookie = true; + + uprobe_multi_func_1(); + uprobe_multi_func_2(); + uprobe_multi_func_3(); + + ASSERT_EQ(skel->bss->uprobe_multi_func_1_result, 1, "uprobe_multi_func_1_result"); + ASSERT_EQ(skel->bss->uprobe_multi_func_2_result, 1, "uprobe_multi_func_2_result"); + ASSERT_EQ(skel->bss->uprobe_multi_func_3_result, 1, "uprobe_multi_func_3_result"); + + ASSERT_EQ(skel->bss->uretprobe_multi_func_1_result, 1, "uretprobe_multi_func_1_result"); + ASSERT_EQ(skel->bss->uretprobe_multi_func_2_result, 1, "uretprobe_multi_func_2_result"); + ASSERT_EQ(skel->bss->uretprobe_multi_func_3_result, 1, "uretprobe_multi_func_3_result"); +} + +static void uprobe_multi_attach_api_subtest(void) +{ + struct bpf_link *link1 = NULL, *link2 = NULL; + struct uprobe_multi *skel = NULL; + LIBBPF_OPTS(bpf_uprobe_multi_opts, opts); + const char *syms[3] = { + "uprobe_multi_func_1", + "uprobe_multi_func_2", + "uprobe_multi_func_3", + }; + __u64 cookies[3]; + + cookies[0] = 3; /* uprobe_multi_func_1 */ + cookies[1] = 1; /* uprobe_multi_func_2 */ + cookies[2] = 2; /* uprobe_multi_func_3 */ + + opts.syms = syms; + opts.cnt = ARRAY_SIZE(syms); + opts.cookies = &cookies[0]; + + skel = uprobe_multi__open_and_load(); + if (!ASSERT_OK_PTR(skel, "uprobe_multi")) + goto cleanup; + + link1 = bpf_program__attach_uprobe_multi(skel->progs.uprobe, -1, + "/proc/self/exe", NULL, &opts); + if (!ASSERT_OK_PTR(link1, "bpf_program__attach_uprobe_multi")) + goto cleanup; + + cookies[0] = 2; /* uprobe_multi_func_1 */ + cookies[1] = 3; /* uprobe_multi_func_2 */ + cookies[2] = 1; /* uprobe_multi_func_3 */ + + opts.retprobe = true; + link2 = bpf_program__attach_uprobe_multi(skel->progs.uretprobe, -1, + "/proc/self/exe", NULL, &opts); + if (!ASSERT_OK_PTR(link2, "bpf_program__attach_uprobe_multi_retprobe")) + goto cleanup; + + uprobe_multi_test_run(skel); + +cleanup: + bpf_link__destroy(link2); + bpf_link__destroy(link1); + uprobe_multi__destroy(skel); +} + static void uprobe_subtest(struct test_bpf_cookie *skel) { DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts); @@ -515,6 +591,8 @@ void test_bpf_cookie(void) kprobe_multi_attach_api_subtest(); if (test__start_subtest("uprobe")) uprobe_subtest(skel); + if (test__start_subtest("multi_uprobe_attach_api")) + uprobe_multi_attach_api_subtest(); if (test__start_subtest("tracepoint")) tp_subtest(skel); if (test__start_subtest("perf_event")) 
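The cookie test above relies on positional pairing: cookies[i] belongs to the probe created for syms[i] (or offsets[i]), and that exact value is what bpf_get_attach_cookie() returns when the probe at that symbol fires. A condensed sketch of the pairing, with symbol names and cookie values invented:

#include <bpf/libbpf.h>

/* bpf_get_attach_cookie() in the attached program returns 1 when
 * my_func_1 fires and 2 when my_func_2 fires. */
static struct bpf_link *attach_with_cookies(struct bpf_program *prog)
{
        static const char *syms[] = { "my_func_1", "my_func_2" };
        static __u64 cookies[]    = { 1, 2 };
        LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
                .syms    = syms,
                .cookies = cookies,
                .cnt     = 2,
        );

        return bpf_program__attach_uprobe_multi(prog, -1, "/usr/bin/myapp",
                                                NULL, &opts);
}
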
From patchwork Sun Jul 30 13:42:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333441 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5B27D363 for ; Sun, 30 Jul 2023 13:46:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7219CC433C7; Sun, 30 Jul 2023 13:46:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724819; bh=xx8kNp7cdgM3ReT5R5BPvlZEMQUibYVgPsE5tNnrxdM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OsQkVYMngHqXnS6wn1u/HKrfgSPem/fJTbkap4+c48vSvyol1L5BzFIBrbf9jGIzn CQVIxDSe/NFVpJKEHV1pyEk47/IQAX3/LCOmNOKG/58NeDaP6di30viq4rjAraUAGz aKLyqCAlijgRJ4NqTX12fN7StbZzqYgf4EN3DWui+H/ul0zDuHH4+igV/2xeRkAwmL O4pXhxZHRHLoclN4IdsPXGVXHU5HAWk21yo9uVFUKyMsMfhhRvwVgUOpZ3Z+jNx8KS IGYsuS6ZFgOo1poSZXz7kbriyDLEHbqOk1lptyfK45A58QXYgEXFgdiH+1F0kCdkCr dJ7qDS2Y4ZNwA== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 27/28] selftests/bpf: Add uprobe_multi pid filter tests Date: Sun, 30 Jul 2023 15:42:22 +0200 Message-ID: <20230730134223.94496-28-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Running api and link tests also with pid filter and checking the probe gets executed only for specific pid. Spawning extra process to trigger attached uprobes and checking we get correct counts from executed programs. 
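The pid filter itself is just the pid argument of the attach call (or the uprobe_multi.pid field of the raw link API). A reduced sketch of what the test automates is below; the selftest synchronizes parent and child with a pipe, the sleep() here is only a crude stand-in, and error handling is mostly dropped.

#include <errno.h>
#include <sys/wait.h>
#include <unistd.h>
#include <bpf/libbpf.h>

/* local attach target; any global function symbol in the binary works */
__attribute__((noinline)) void my_trigger_func(void)
{
        asm volatile ("");
}

static int pid_filter_demo(struct bpf_program *prog)
{
        LIBBPF_OPTS(bpf_uprobe_multi_opts, opts);
        struct bpf_link *link;
        pid_t child;

        child = fork();
        if (child == 0) {
                sleep(1);               /* assume the attach below finished by now */
                my_trigger_func();      /* this hit is delivered */
                _exit(0);
        }

        /* attach only for the child's pid */
        link = bpf_program__attach_uprobe_multi(prog, child, "/proc/self/exe",
                                                "my_trigger_func", &opts);
        if (!link)
                return -errno;

        my_trigger_func();              /* parent's hit is filtered out */

        waitpid(child, NULL, 0);
        bpf_link__destroy(link);
        return 0;
}
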
Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 132 ++++++++++++++++-- .../selftests/bpf/progs/uprobe_multi.c | 6 +- 2 files changed, 126 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c index 19a66431a61f..32704a2c7184 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -25,14 +25,87 @@ noinline void uprobe_multi_func_3(void) asm volatile (""); } -static void uprobe_multi_test_run(struct uprobe_multi *skel) +struct child { + int go[2]; + int pid; +}; + +static void release_child(struct child *child) +{ + int child_status; + + if (!child) + return; + close(child->go[1]); + close(child->go[0]); + if (child->pid > 0) + waitpid(child->pid, &child_status, 0); +} + +static void kick_child(struct child *child) +{ + char c = 1; + + if (child) { + write(child->go[1], &c, 1); + release_child(child); + } + fflush(NULL); +} + +static struct child *spawn_child(void) +{ + static struct child child; + int err; + int c; + + /* pipe to notify child to execute the trigger functions */ + if (pipe(child.go)) + return NULL; + + child.pid = fork(); + if (child.pid < 0) { + release_child(&child); + errno = EINVAL; + return NULL; + } + + /* child */ + if (child.pid == 0) { + close(child.go[1]); + + /* wait for parent's kick */ + err = read(child.go[0], &c, 1); + if (err != 1) + exit(err); + + uprobe_multi_func_1(); + uprobe_multi_func_2(); + uprobe_multi_func_3(); + + exit(errno); + } + + return &child; +} + +static void uprobe_multi_test_run(struct uprobe_multi *skel, struct child *child) { skel->bss->uprobe_multi_func_1_addr = (__u64) uprobe_multi_func_1; skel->bss->uprobe_multi_func_2_addr = (__u64) uprobe_multi_func_2; skel->bss->uprobe_multi_func_3_addr = (__u64) uprobe_multi_func_3; skel->bss->user_ptr = test_data; - skel->bss->pid = getpid(); + + /* + * Disable pid check in bpf program if we are pid filter test, + * because the probe should be executed only by child->pid + * passed at the probe attach. + */ + skel->bss->pid = child ? 0 : getpid(); + + if (child) + kick_child(child); /* trigger all probes */ uprobe_multi_func_1(); @@ -52,6 +125,9 @@ static void uprobe_multi_test_run(struct uprobe_multi *skel) ASSERT_EQ(skel->bss->uretprobe_multi_func_3_result, 2, "uretprobe_multi_func_3_result"); ASSERT_EQ(skel->bss->uprobe_multi_sleep_result, 6, "uprobe_multi_sleep_result"); + + if (child) + ASSERT_EQ(skel->bss->child_pid, child->pid, "uprobe_multi_child_pid"); } static void test_skel_api(void) @@ -67,15 +143,17 @@ static void test_skel_api(void) if (!ASSERT_OK(err, "uprobe_multi__attach")) goto cleanup; - uprobe_multi_test_run(skel); + uprobe_multi_test_run(skel, NULL); cleanup: uprobe_multi__destroy(skel); } static void -test_attach_api(const char *binary, const char *pattern, struct bpf_uprobe_multi_opts *opts) +__test_attach_api(const char *binary, const char *pattern, struct bpf_uprobe_multi_opts *opts, + struct child *child) { + pid_t pid = child ? 
child->pid : -1; struct uprobe_multi *skel = NULL; skel = uprobe_multi__open_and_load(); @@ -83,35 +161,51 @@ test_attach_api(const char *binary, const char *pattern, struct bpf_uprobe_multi goto cleanup; opts->retprobe = false; - skel->links.uprobe = bpf_program__attach_uprobe_multi(skel->progs.uprobe, -1, + skel->links.uprobe = bpf_program__attach_uprobe_multi(skel->progs.uprobe, pid, binary, pattern, opts); if (!ASSERT_OK_PTR(skel->links.uprobe, "bpf_program__attach_uprobe_multi")) goto cleanup; opts->retprobe = true; - skel->links.uretprobe = bpf_program__attach_uprobe_multi(skel->progs.uretprobe, -1, + skel->links.uretprobe = bpf_program__attach_uprobe_multi(skel->progs.uretprobe, pid, binary, pattern, opts); if (!ASSERT_OK_PTR(skel->links.uretprobe, "bpf_program__attach_uprobe_multi")) goto cleanup; opts->retprobe = false; - skel->links.uprobe_sleep = bpf_program__attach_uprobe_multi(skel->progs.uprobe_sleep, -1, + skel->links.uprobe_sleep = bpf_program__attach_uprobe_multi(skel->progs.uprobe_sleep, pid, binary, pattern, opts); if (!ASSERT_OK_PTR(skel->links.uprobe_sleep, "bpf_program__attach_uprobe_multi")) goto cleanup; opts->retprobe = true; skel->links.uretprobe_sleep = bpf_program__attach_uprobe_multi(skel->progs.uretprobe_sleep, - -1, binary, pattern, opts); + pid, binary, pattern, opts); if (!ASSERT_OK_PTR(skel->links.uretprobe_sleep, "bpf_program__attach_uprobe_multi")) goto cleanup; - uprobe_multi_test_run(skel); + uprobe_multi_test_run(skel, child); cleanup: uprobe_multi__destroy(skel); } +static void +test_attach_api(const char *binary, const char *pattern, struct bpf_uprobe_multi_opts *opts) +{ + struct child *child; + + /* no pid filter */ + __test_attach_api(binary, pattern, opts, NULL); + + /* pid filter */ + child = spawn_child(); + if (!ASSERT_OK_PTR(child, "spawn_child")) + return; + + __test_attach_api(binary, pattern, opts, child); +} + static void test_attach_api_pattern(void) { LIBBPF_OPTS(bpf_uprobe_multi_opts, opts); @@ -134,7 +228,7 @@ static void test_attach_api_syms(void) test_attach_api("/proc/self/exe", NULL, &opts); } -static void test_link_api(void) +static void __test_link_api(struct child *child) { int prog_fd, link1_fd = -1, link2_fd = -1, link3_fd = -1, link4_fd = -1; LIBBPF_OPTS(bpf_link_create_opts, opts); @@ -155,6 +249,7 @@ static void test_link_api(void) opts.uprobe_multi.path = path; opts.uprobe_multi.offsets = offsets; opts.uprobe_multi.cnt = ARRAY_SIZE(syms); + opts.uprobe_multi.pid = child ? 
child->pid : 0; skel = uprobe_multi__open_and_load(); if (!ASSERT_OK_PTR(skel, "uprobe_multi__open_and_load")) @@ -183,7 +278,7 @@ static void test_link_api(void) link4_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); if (!ASSERT_GE(link4_fd, 0, "link4_fd")) goto cleanup; - uprobe_multi_test_run(skel); + uprobe_multi_test_run(skel, child); cleanup: if (link1_fd >= 0) @@ -199,6 +294,21 @@ static void test_link_api(void) free(offsets); } +void test_link_api(void) +{ + struct child *child; + + /* no pid filter */ + __test_link_api(NULL); + + /* pid filter */ + child = spawn_child(); + if (!ASSERT_OK_PTR(child, "spawn_child")) + return; + + __test_link_api(child); +} + static void test_bench_attach_uprobe(void) { long attach_start_ns, attach_end_ns = 0; diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi.c b/tools/testing/selftests/bpf/progs/uprobe_multi.c index ab467970256a..ec648a6699e6 100644 --- a/tools/testing/selftests/bpf/progs/uprobe_multi.c +++ b/tools/testing/selftests/bpf/progs/uprobe_multi.c @@ -21,6 +21,8 @@ __u64 uretprobe_multi_func_3_result = 0; __u64 uprobe_multi_sleep_result = 0; int pid = 0; +int child_pid = 0; + bool test_cookie = false; void *user_ptr = 0; @@ -34,7 +36,9 @@ static __always_inline bool verify_sleepable_user_copy(void) static void uprobe_multi_check(void *ctx, bool is_return, bool is_sleep) { - if (bpf_get_current_pid_tgid() >> 32 != pid) + child_pid = bpf_get_current_pid_tgid() >> 32; + + if (pid && child_pid != pid) return; __u64 cookie = test_cookie ? bpf_get_attach_cookie(ctx) : 0; From patchwork Sun Jul 30 13:42:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13333442 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3D882363 for ; Sun, 30 Jul 2023 13:47:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5552CC433C8; Sun, 30 Jul 2023 13:47:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1690724829; bh=RY6UMDMXK1iJghRtWzlILqa6pOuuQe01XR9ZONDF4kM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TeJc0W0yv2BubUomWnaUpC6bNwcHcvoyiaxPt6/nhOY2Kf1o5QtizYd3ioa0TdoH2 bQf9XhqkdztLjumI4tKST/Sc5XO9oxl+5LjDa/gMsYzfyXYLPIzWVxxUTsDr51ozd8 S8BLxdqsxlXkxRX+MqIrp/srBgyZg44vmzrQFRTATjrcoUv3j97ymtUpHbuAPmGqHu Mnd2dp0wRC6v1jmtlbLuaZ0vteuDJJllcu5VuVudh4azVLUicWZ0gcmRketmCIt8BW MLDPGo1LxSzzdHamFPcu1lllLLOuOmmUkPDFaRl0Ne6lgBQzpU9mOH4BAp6nhgnRCu 3+CDuYmJYFuCg== From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko Cc: bpf@vger.kernel.org, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Yafang Shao Subject: [PATCHv5 bpf-next 28/28] selftests/bpf: Add extra link to uprobe_multi tests Date: Sun, 30 Jul 2023 15:42:23 +0200 Message-ID: <20230730134223.94496-29-jolsa@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230730134223.94496-1-jolsa@kernel.org> References: <20230730134223.94496-1-jolsa@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Attaching extra program to same functions system wide for api and link tests. 
This way we can test the pid filter works properly when there's extra system wide consumer on the same uprobe that will trigger the original uprobe handler. We expect to have the same counts as before. Signed-off-by: Jiri Olsa --- .../bpf/prog_tests/uprobe_multi_test.c | 17 +++++++++++++++++ .../testing/selftests/bpf/progs/uprobe_multi.c | 6 ++++++ 2 files changed, 23 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c index 32704a2c7184..55a9c1bfa098 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c @@ -184,6 +184,12 @@ __test_attach_api(const char *binary, const char *pattern, struct bpf_uprobe_mul if (!ASSERT_OK_PTR(skel->links.uretprobe_sleep, "bpf_program__attach_uprobe_multi")) goto cleanup; + opts->retprobe = false; + skel->links.uprobe_extra = bpf_program__attach_uprobe_multi(skel->progs.uprobe_extra, -1, + binary, pattern, opts); + if (!ASSERT_OK_PTR(skel->links.uprobe_extra, "bpf_program__attach_uprobe_multi")) + goto cleanup; + uprobe_multi_test_run(skel, child); cleanup: @@ -240,6 +246,7 @@ static void __test_link_api(struct child *child) "uprobe_multi_func_2", "uprobe_multi_func_3", }; + int link_extra_fd = -1; int err; err = elf_resolve_syms_offsets(path, 3, syms, (unsigned long **) &offsets); @@ -278,6 +285,14 @@ static void __test_link_api(struct child *child) link4_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); if (!ASSERT_GE(link4_fd, 0, "link4_fd")) goto cleanup; + + opts.kprobe_multi.flags = 0; + opts.uprobe_multi.pid = 0; + prog_fd = bpf_program__fd(skel->progs.uprobe_extra); + link_extra_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); + if (!ASSERT_GE(link_extra_fd, 0, "link_extra_fd")) + goto cleanup; + uprobe_multi_test_run(skel, child); cleanup: @@ -289,6 +304,8 @@ static void __test_link_api(struct child *child) close(link3_fd); if (link4_fd >= 0) close(link4_fd); + if (link_extra_fd >= 0) + close(link_extra_fd); uprobe_multi__destroy(skel); free(offsets); diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi.c b/tools/testing/selftests/bpf/progs/uprobe_multi.c index ec648a6699e6..419d9aa28fce 100644 --- a/tools/testing/selftests/bpf/progs/uprobe_multi.c +++ b/tools/testing/selftests/bpf/progs/uprobe_multi.c @@ -93,3 +93,9 @@ int uretprobe_sleep(struct pt_regs *ctx) uprobe_multi_check(ctx, true, true); return 0; } + +SEC("uprobe.multi//proc/self/exe:uprobe_multi_func_*") +int uprobe_extra(struct pt_regs *ctx) +{ + return 0; +}
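

To illustrate the pid-filtering flow that the attach-API test above exercises, the following is a condensed standalone sketch, not part of the patch: it assumes the uprobe_multi selftest skeleton (uprobe_multi.skel.h) and BPF program from this series, defines a local stand-in for the traced function, and drops the ASSERT_*() error handling the real test uses.

/* Sketch: attach a uprobe_multi link filtered to a forked child, then let
 * only that child trigger the traced function. Assumes the uprobe_multi
 * selftest skeleton from this series; error handling is trimmed.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <bpf/libbpf.h>
#include "uprobe_multi.skel.h"

/* local stand-in for the traced selftest function */
__attribute__((noinline)) void uprobe_multi_func_1(void)
{
	asm volatile ("");
}

int main(void)
{
	LIBBPF_OPTS(bpf_uprobe_multi_opts, opts);
	struct uprobe_multi *skel = NULL;
	int go[2], status;
	char c = 1;
	pid_t pid;

	if (pipe(go))
		return 1;

	pid = fork();
	if (pid < 0)
		return 1;
	if (pid == 0) {
		/* child: block until the parent has attached, then trigger the probe */
		close(go[1]);
		if (read(go[0], &c, 1) != 1)
			_exit(1);
		uprobe_multi_func_1();
		_exit(0);
	}

	skel = uprobe_multi__open_and_load();
	if (!skel)
		goto out;

	/* pid > 0 makes the kernel deliver events from the child process only */
	skel->links.uprobe = bpf_program__attach_uprobe_multi(skel->progs.uprobe, pid,
							       "/proc/self/exe",
							       "uprobe_multi_func_*", &opts);
	if (!skel->links.uprobe)
		goto out;

	/* pid == 0 disables the BPF program's own pid check (see uprobe_multi.c) */
	skel->bss->pid = 0;

	write(go[1], &c, 1);		/* kick the child */
	waitpid(pid, &status, 0);

	/* with the pid filter in place, only the child should have hit the probe */
	printf("child pid %d, pid seen by bpf %d\n", pid, skel->bss->child_pid);
out:
	close(go[0]);
	close(go[1]);
	uprobe_multi__destroy(skel);
	return 0;
}

This mirrors the test ordering above: the child is spawned first and parked on the pipe, the link is created with the child's pid, and only then is the child kicked, so every event the filtered program sees must come from that pid.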
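
The same pid filter is available through the low-level link API used by __test_link_api(), and the extra system-wide link added in this patch is just a second bpf_link_create() call with pid set to 0. A sketch of that usage is below; the helper name and its parameters are illustrative only, and it assumes the uprobe offsets were already resolved (the selftest uses elf_resolve_syms_offsets() for this).

#include <unistd.h>
#include <sys/types.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Attach prog_fd filtered to 'pid' and extra_prog_fd system wide, both on the
 * same binary path and resolved offsets. Illustrative helper, not part of
 * libbpf or the selftests. Returns 0 on success, a negative error otherwise.
 */
int attach_filtered_and_extra(int prog_fd, int extra_prog_fd,
			      const char *path, const unsigned long *offsets,
			      __u32 cnt, pid_t pid,
			      int *link_fd, int *link_extra_fd)
{
	LIBBPF_OPTS(bpf_link_create_opts, opts);

	opts.uprobe_multi.path = path;
	opts.uprobe_multi.offsets = offsets;
	opts.uprobe_multi.cnt = cnt;

	/* only events from 'pid' reach this program */
	opts.uprobe_multi.pid = pid;
	*link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts);
	if (*link_fd < 0)
		return *link_fd;

	/* pid == 0 means no filtering: any process hitting the uprobe triggers it */
	opts.uprobe_multi.pid = 0;
	*link_extra_fd = bpf_link_create(extra_prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts);
	if (*link_extra_fd < 0) {
		close(*link_fd);
		return *link_extra_fd;
	}
	return 0;
}

The point of the extra link in the test is that the underlying uprobe now fires for every process on the system, yet the pid-filtered programs should still count only the child's hits, which is why the expected counts stay the same as before.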