
[v5,bpf-next,09/11] bpf: Support ->fill_link_info for perf_event

Message ID 20230623141546.3751-10-laoar.shao@gmail.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf: Support ->fill_link_info for kprobe_multi and perf_event links

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 fail Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-16 fail Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on s390x with gcc
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1734 this patch: 1733
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 184 this patch: 182
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1733 this patch: 1732
netdev/checkpatch fail ERROR: space prohibited before that ':' (ctx:WxV)
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Yafang Shao June 23, 2023, 2:15 p.m. UTC
By adding support for ->fill_link_info to the perf_event link, users
gain the ability to inspect it with `bpftool link show`. While this
information can currently be obtained via `bpftool perf show`,
consolidating link information for all link types in one place is more
convenient. Additionally, this patch extends support to the generic
perf event, which `bpftool perf show` does not currently cover. Only
the perf type and config are exposed to userspace; other attributes,
such as sample_period and sample_freq, are ignored. Note that if
kptr_restrict does not permit it, the probed address will not be
exposed, preserving the existing security restrictions.

A new enum bpf_perf_event_type is introduced to help the user understand
which struct is relevant.
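
For example, the new information could be retrieved from userspace with
the BPF_OBJ_GET_INFO_BY_FD command, roughly as sketched below (assuming
a uapi bpf.h that already carries the fields added here; error handling
is minimal and the helper name show_perf_link() is made up):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

static int show_perf_link(int link_fd)
{
	struct bpf_link_info info;
	__u32 len = sizeof(info);
	char buf[256];
	int err;

	memset(&info, 0, sizeof(info));
	/* in/out buffer: the kernel copies the probed name here; the
	 * pointer slot is shared by uprobe/kprobe/tracepoint.
	 */
	info.perf_event.kprobe.func_name = (__u64)(unsigned long)buf;
	info.perf_event.kprobe.name_len = sizeof(buf);

	err = bpf_obj_get_info_by_fd(link_fd, &info, &len);
	if (err)
		return err;
	if (info.type != BPF_LINK_TYPE_PERF_EVENT)
		return -EINVAL;

	switch (info.perf_event.type) {
	case BPF_PERF_EVENT_KPROBE:
		printf("kprobe %s+%u addr 0x%llx\n", buf,
		       info.perf_event.kprobe.offset,
		       info.perf_event.kprobe.addr);
		break;
	case BPF_PERF_EVENT_EVENT:
		printf("event type %u config %llu\n",
		       info.perf_event.event.type,
		       info.perf_event.event.config);
		break;
	default:
		break;
	}
	return 0;
}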

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/uapi/linux/bpf.h       |  35 +++++++++++++
 kernel/bpf/syscall.c           | 115 +++++++++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  35 +++++++++++++
 3 files changed, 185 insertions(+)

Comments

Andrii Nakryiko June 23, 2023, 9:55 p.m. UTC | #1
On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> By introducing support for ->fill_link_info to the perf_event link, users
> gain the ability to inspect it using `bpftool link show`. While the current
> approach involves accessing this information via `bpftool perf show`,
> consolidating link information for all link types in one place offers
> greater convenience. Additionally, this patch extends support to the
> generic perf event, which is not currently accommodated by
> `bpftool perf show`. While only the perf type and config are exposed to
> userspace, other attributes such as sample_period and sample_freq are
> ignored. It's important to note that if kptr_restrict is not permitted, the
> probed address will not be exposed, maintaining security measures.
>
> A new enum bpf_perf_event_type is introduced to help the user understand
> which struct is relevant.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
>  include/uapi/linux/bpf.h       |  35 +++++++++++++
>  kernel/bpf/syscall.c           | 115 +++++++++++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h |  35 +++++++++++++
>  3 files changed, 185 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 23691ea..1c579d5 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1056,6 +1056,14 @@ enum bpf_link_type {
>         MAX_BPF_LINK_TYPE,
>  };
>
> +enum bpf_perf_event_type {
> +       BPF_PERF_EVENT_UNSPEC = 0,
> +       BPF_PERF_EVENT_UPROBE = 1,
> +       BPF_PERF_EVENT_KPROBE = 2,
> +       BPF_PERF_EVENT_TRACEPOINT = 3,
> +       BPF_PERF_EVENT_EVENT = 4,
> +};
> +
>  /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
>   *
>   * NONE(default): No further bpf programs allowed in the subtree.
> @@ -6443,6 +6451,33 @@ struct bpf_link_info {
>                         __u32 count;
>                         __u32 flags;
>                 } kprobe_multi;
> +               struct {
> +                       __u32 type; /* enum bpf_perf_event_type */
> +                       __u32 :32;
> +                       union {
> +                               struct {
> +                                       __aligned_u64 file_name; /* in/out */
> +                                       __u32 name_len;
> +                                       __u32 offset;/* offset from file_name */
> +                                       __u32 flags;
> +                               } uprobe; /* BPF_PERF_EVENT_UPROBE */
> +                               struct {
> +                                       __aligned_u64 func_name; /* in/out */
> +                                       __u32 name_len;
> +                                       __u32 offset;/* offset from func_name */
> +                                       __u64 addr;
> +                                       __u32 flags;
> +                               } kprobe; /* BPF_PERF_EVENT_KPROBE */
> +                               struct {
> +                                       __aligned_u64 tp_name;   /* in/out */
> +                                       __u32 name_len;
> +                               } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
> +                               struct {
> +                                       __u64 config;
> +                                       __u32 type;
> +                               } event; /* BPF_PERF_EVENT_EVENT */
> +                       };
> +               } perf_event;
>         };
>  } __attribute__((aligned(8)));
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index c863d39..02dad3c 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
>         return 0;
>  }
>
> +#ifdef CONFIG_KPROBE_EVENTS
> +static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
> +                                    struct bpf_link_info *info)
> +{
> +       char __user *uname;
> +       u64 addr, offset;
> +       u32 ulen, type;
> +       int err;
> +
> +       uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
> +       ulen = info->perf_event.kprobe.name_len;
> +       info->perf_event.type = BPF_PERF_EVENT_KPROBE;
> +       err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
> +                                       &type);
> +       if (err)
> +               return err;
> +
> +       info->perf_event.kprobe.offset = offset;
> +       if (type == BPF_FD_TYPE_KRETPROBE)
> +               info->perf_event.kprobe.flags = 1;

hm... ok, sorry, I didn't realize that these flags are not part of
UAPI. I don't think just randomly defining 1 to mean retprobe is a
good approach. Let's drop flags if there are actually no flags.

How about in addition to BPF_PERF_EVENT_UPROBE add
BPF_PERF_EVENT_URETPROBE, and for BPF_PERF_EVENT_KPROBE add also
BPF_PERF_EVENT_KRETPROBE. They will share respective perf_event.uprobe
and perf_event.kprobe sections in bpf_link_info.

It seems consistent with what we did for bpf_task_fd_type enum.
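
Something along these lines, purely as a sketch (the exact numbering
below is illustrative, not a proposal for specific values):

enum bpf_perf_event_type {
	BPF_PERF_EVENT_UNSPEC = 0,
	BPF_PERF_EVENT_UPROBE = 1,
	BPF_PERF_EVENT_URETPROBE = 2,
	BPF_PERF_EVENT_KPROBE = 3,
	BPF_PERF_EVENT_KRETPROBE = 4,
	BPF_PERF_EVENT_TRACEPOINT = 5,
	BPF_PERF_EVENT_EVENT = 6,
};

and then in bpf_perf_link_fill_kprobe(), instead of setting flags:

	/* report the retprobe flavor through the event type */
	if (type == BPF_FD_TYPE_KRETPROBE)
		info->perf_event.type = BPF_PERF_EVENT_KRETPROBE;
	else
		info->perf_event.type = BPF_PERF_EVENT_KPROBE;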

> +       if (!kallsyms_show_value(current_cred()))
> +               return 0;
> +       info->perf_event.kprobe.addr = addr;
> +       return 0;
> +}
> +#endif
> +

[...]
Yafang Shao June 25, 2023, 2:35 p.m. UTC | #2
On Sat, Jun 24, 2023 at 5:55 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > By introducing support for ->fill_link_info to the perf_event link, users
> > gain the ability to inspect it using `bpftool link show`. While the current
> > approach involves accessing this information via `bpftool perf show`,
> > consolidating link information for all link types in one place offers
> > greater convenience. Additionally, this patch extends support to the
> > generic perf event, which is not currently accommodated by
> > `bpftool perf show`. While only the perf type and config are exposed to
> > userspace, other attributes such as sample_period and sample_freq are
> > ignored. It's important to note that if kptr_restrict is not permitted, the
> > probed address will not be exposed, maintaining security measures.
> >
> > A new enum bpf_perf_event_type is introduced to help the user understand
> > which struct is relevant.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> >  include/uapi/linux/bpf.h       |  35 +++++++++++++
> >  kernel/bpf/syscall.c           | 115 +++++++++++++++++++++++++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h |  35 +++++++++++++
> >  3 files changed, 185 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 23691ea..1c579d5 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -1056,6 +1056,14 @@ enum bpf_link_type {
> >         MAX_BPF_LINK_TYPE,
> >  };
> >
> > +enum bpf_perf_event_type {
> > +       BPF_PERF_EVENT_UNSPEC = 0,
> > +       BPF_PERF_EVENT_UPROBE = 1,
> > +       BPF_PERF_EVENT_KPROBE = 2,
> > +       BPF_PERF_EVENT_TRACEPOINT = 3,
> > +       BPF_PERF_EVENT_EVENT = 4,
> > +};
> > +
> >  /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
> >   *
> >   * NONE(default): No further bpf programs allowed in the subtree.
> > @@ -6443,6 +6451,33 @@ struct bpf_link_info {
> >                         __u32 count;
> >                         __u32 flags;
> >                 } kprobe_multi;
> > +               struct {
> > +                       __u32 type; /* enum bpf_perf_event_type */
> > +                       __u32 :32;
> > +                       union {
> > +                               struct {
> > +                                       __aligned_u64 file_name; /* in/out */
> > +                                       __u32 name_len;
> > +                                       __u32 offset;/* offset from file_name */
> > +                                       __u32 flags;
> > +                               } uprobe; /* BPF_PERF_EVENT_UPROBE */
> > +                               struct {
> > +                                       __aligned_u64 func_name; /* in/out */
> > +                                       __u32 name_len;
> > +                                       __u32 offset;/* offset from func_name */
> > +                                       __u64 addr;
> > +                                       __u32 flags;
> > +                               } kprobe; /* BPF_PERF_EVENT_KPROBE */
> > +                               struct {
> > +                                       __aligned_u64 tp_name;   /* in/out */
> > +                                       __u32 name_len;
> > +                               } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
> > +                               struct {
> > +                                       __u64 config;
> > +                                       __u32 type;
> > +                               } event; /* BPF_PERF_EVENT_EVENT */
> > +                       };
> > +               } perf_event;
> >         };
> >  } __attribute__((aligned(8)));
> >
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index c863d39..02dad3c 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
> >         return 0;
> >  }
> >
> > +#ifdef CONFIG_KPROBE_EVENTS
> > +static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
> > +                                    struct bpf_link_info *info)
> > +{
> > +       char __user *uname;
> > +       u64 addr, offset;
> > +       u32 ulen, type;
> > +       int err;
> > +
> > +       uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
> > +       ulen = info->perf_event.kprobe.name_len;
> > +       info->perf_event.type = BPF_PERF_EVENT_KPROBE;
> > +       err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
> > +                                       &type);
> > +       if (err)
> > +               return err;
> > +
> > +       info->perf_event.kprobe.offset = offset;
> > +       if (type == BPF_FD_TYPE_KRETPROBE)
> > +               info->perf_event.kprobe.flags = 1;
>
> hm... ok, sorry, I didn't realize that these flags are not part of
> UAPI. I don't think just randomly defining 1 to mean retprobe is a
> good approach. Let's drop flags if there are actually no flags.
>
> How about in addition to BPF_PERF_EVENT_UPROBE add
> BPF_PERF_EVENT_URETPROBE, and for BPF_PERF_EVENT_KPROBE add also
> BPF_PERF_EVENT_KRETPROBE. They will share respective perf_event.uprobe
> and perf_event.kprobe sections in bpf_link_info.
>
> It seems consistent with what we did for bpf_task_fd_type enum.

Good idea. Will do it.

Patch

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 23691ea..1c579d5 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1056,6 +1056,14 @@ enum bpf_link_type {
 	MAX_BPF_LINK_TYPE,
 };
 
+enum bpf_perf_event_type {
+	BPF_PERF_EVENT_UNSPEC = 0,
+	BPF_PERF_EVENT_UPROBE = 1,
+	BPF_PERF_EVENT_KPROBE = 2,
+	BPF_PERF_EVENT_TRACEPOINT = 3,
+	BPF_PERF_EVENT_EVENT = 4,
+};
+
 /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
  *
  * NONE(default): No further bpf programs allowed in the subtree.
@@ -6443,6 +6451,33 @@ struct bpf_link_info {
 			__u32 count;
 			__u32 flags;
 		} kprobe_multi;
+		struct {
+			__u32 type; /* enum bpf_perf_event_type */
+			__u32 :32;
+			union {
+				struct {
+					__aligned_u64 file_name; /* in/out */
+					__u32 name_len;
+					__u32 offset;/* offset from file_name */
+					__u32 flags;
+				} uprobe; /* BPF_PERF_EVENT_UPROBE */
+				struct {
+					__aligned_u64 func_name; /* in/out */
+					__u32 name_len;
+					__u32 offset;/* offset from func_name */
+					__u64 addr;
+					__u32 flags;
+				} kprobe; /* BPF_PERF_EVENT_KPROBE */
+				struct {
+					__aligned_u64 tp_name;   /* in/out */
+					__u32 name_len;
+				} tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
+				struct {
+					__u64 config;
+					__u32 type;
+				} event; /* BPF_PERF_EVENT_EVENT */
+			};
+		} perf_event;
 	};
 } __attribute__((aligned(8)));
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c863d39..02dad3c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
 	return 0;
 }
 
+#ifdef CONFIG_KPROBE_EVENTS
+static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
+				     struct bpf_link_info *info)
+{
+	char __user *uname;
+	u64 addr, offset;
+	u32 ulen, type;
+	int err;
+
+	uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
+	ulen = info->perf_event.kprobe.name_len;
+	info->perf_event.type = BPF_PERF_EVENT_KPROBE;
+	err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+					&type);
+	if (err)
+		return err;
+
+	info->perf_event.kprobe.offset = offset;
+	if (type == BPF_FD_TYPE_KRETPROBE)
+		info->perf_event.kprobe.flags = 1;
+	if (!kallsyms_show_value(current_cred()))
+		return 0;
+	info->perf_event.kprobe.addr = addr;
+	return 0;
+}
+#endif
+
+#ifdef CONFIG_UPROBE_EVENTS
+static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
+				     struct bpf_link_info *info)
+{
+	char __user *uname;
+	u64 addr, offset;
+	u32 ulen, type;
+	int err;
+
+	uname = u64_to_user_ptr(info->perf_event.uprobe.file_name);
+	ulen = info->perf_event.uprobe.name_len;
+	info->perf_event.type = BPF_PERF_EVENT_UPROBE;
+	err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+					&type);
+	if (err)
+		return err;
+
+	info->perf_event.uprobe.offset = offset;
+	if (type == BPF_FD_TYPE_URETPROBE)
+		info->perf_event.uprobe.flags = 1;
+	return 0;
+}
+#endif
+
+static int bpf_perf_link_fill_probe(const struct perf_event *event,
+				    struct bpf_link_info *info)
+{
+#ifdef CONFIG_KPROBE_EVENTS
+	if (event->tp_event->flags & TRACE_EVENT_FL_KPROBE)
+		return bpf_perf_link_fill_kprobe(event, info);
+#endif
+#ifdef CONFIG_UPROBE_EVENTS
+	if (event->tp_event->flags & TRACE_EVENT_FL_UPROBE)
+		return bpf_perf_link_fill_uprobe(event, info);
+#endif
+	return -EOPNOTSUPP;
+}
+
+static int bpf_perf_link_fill_tracepoint(const struct perf_event *event,
+					 struct bpf_link_info *info)
+{
+	char __user *uname;
+	u64 addr, offset;
+	u32 ulen, type;
+
+	uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name);
+	ulen = info->perf_event.tracepoint.name_len;
+	info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT;
+	return bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+					 &type);
+}
+
+static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
+					 struct bpf_link_info *info)
+{
+	info->perf_event.event.type = event->attr.type;
+	info->perf_event.event.config = event->attr.config;
+	info->perf_event.type = BPF_PERF_EVENT_EVENT;
+	return 0;
+}
+
+static int bpf_perf_link_fill_link_info(const struct bpf_link *link,
+					struct bpf_link_info *info)
+{
+	struct bpf_perf_link *perf_link;
+	const struct perf_event *event;
+
+	perf_link = container_of(link, struct bpf_perf_link, link);
+	event = perf_get_event(perf_link->perf_file);
+	if (IS_ERR(event))
+		return PTR_ERR(event);
+
+	if (!event->prog)
+		return -EINVAL;
+
+	switch (event->prog->type) {
+	case BPF_PROG_TYPE_PERF_EVENT:
+		return bpf_perf_link_fill_perf_event(event, info);
+	case BPF_PROG_TYPE_TRACEPOINT:
+		return bpf_perf_link_fill_tracepoint(event, info);
+	case BPF_PROG_TYPE_KPROBE:
+		return bpf_perf_link_fill_probe(event, info);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static const struct bpf_link_ops bpf_perf_link_lops = {
 	.release = bpf_perf_link_release,
 	.dealloc = bpf_perf_link_dealloc,
+	.fill_link_info = bpf_perf_link_fill_link_info,
 };
 
 static int bpf_perf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 23691ea..1c579d5 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1056,6 +1056,14 @@ enum bpf_link_type {
 	MAX_BPF_LINK_TYPE,
 };
 
+enum bpf_perf_event_type {
+	BPF_PERF_EVENT_UNSPEC = 0,
+	BPF_PERF_EVENT_UPROBE = 1,
+	BPF_PERF_EVENT_KPROBE = 2,
+	BPF_PERF_EVENT_TRACEPOINT = 3,
+	BPF_PERF_EVENT_EVENT = 4,
+};
+
 /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
  *
  * NONE(default): No further bpf programs allowed in the subtree.
@@ -6443,6 +6451,33 @@ struct bpf_link_info {
 			__u32 count;
 			__u32 flags;
 		} kprobe_multi;
+		struct {
+			__u32 type; /* enum bpf_perf_event_type */
+			__u32 :32;
+			union {
+				struct {
+					__aligned_u64 file_name; /* in/out */
+					__u32 name_len;
+					__u32 offset;/* offset from file_name */
+					__u32 flags;
+				} uprobe; /* BPF_PERF_EVENT_UPROBE */
+				struct {
+					__aligned_u64 func_name; /* in/out */
+					__u32 name_len;
+					__u32 offset;/* offset from func_name */
+					__u64 addr;
+					__u32 flags;
+				} kprobe; /* BPF_PERF_EVENT_KPROBE */
+				struct {
+					__aligned_u64 tp_name;   /* in/out */
+					__u32 name_len;
+				} tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
+				struct {
+					__u64 config;
+					__u32 type;
+				} event; /* BPF_PERF_EVENT_EVENT */
+			};
+		} perf_event;
 	};
 } __attribute__((aligned(8)));