Message ID | 20230623141546.3751-10-laoar.shao@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | bpf: Support ->fill_link_info for kprobe_multi and perf_event links |
On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> By introducing support for ->fill_link_info to the perf_event link, users
> gain the ability to inspect it using `bpftool link show`.

[...]

> +#ifdef CONFIG_KPROBE_EVENTS
> +static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
> +                                     struct bpf_link_info *info)
> +{
> +        char __user *uname;
> +        u64 addr, offset;
> +        u32 ulen, type;
> +        int err;
> +
> +        uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
> +        ulen = info->perf_event.kprobe.name_len;
> +        info->perf_event.type = BPF_PERF_EVENT_KPROBE;
> +        err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
> +                                        &type);
> +        if (err)
> +                return err;
> +
> +        info->perf_event.kprobe.offset = offset;
> +        if (type == BPF_FD_TYPE_KRETPROBE)
> +                info->perf_event.kprobe.flags = 1;

hm... ok, sorry, I didn't realize that these flags are not part of
UAPI. I don't think just randomly defining 1 to mean retprobe is a
good approach. Let's drop flags if there are actually no flags.

How about in addition to BPF_PERF_EVENT_UPROBE add
BPF_PERF_EVENT_URETPROBE, and for BPF_PERF_EVENT_KPROBE add also
BPF_PERF_EVENT_KRETPROBE. They will share respective perf_event.uprobe
and perf_event.kprobe sections in bpf_link_info.

It seems consistent with what we did for bpf_task_fd_type enum.

> +        if (!kallsyms_show_value(current_cred()))
> +                return 0;
> +        info->perf_event.kprobe.addr = addr;
> +        return 0;
> +}
> +#endif
> +

[...]
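For reference, a minimal sketch of how the UAPI could look if retprobes get their own enumerators as suggested above; the exact names, values, and the type assignment shown here are assumptions for illustration, not something posted in this thread:

```c
/* Hypothetical follow-up, not part of this patch: report kretprobe/uretprobe
 * through bpf_perf_event_type itself instead of a private flags value.
 */
enum bpf_perf_event_type {
        BPF_PERF_EVENT_UNSPEC = 0,
        BPF_PERF_EVENT_UPROBE = 1,
        BPF_PERF_EVENT_URETPROBE = 2,   /* shares bpf_link_info.perf_event.uprobe */
        BPF_PERF_EVENT_KPROBE = 3,
        BPF_PERF_EVENT_KRETPROBE = 4,   /* shares bpf_link_info.perf_event.kprobe */
        BPF_PERF_EVENT_TRACEPOINT = 5,
        BPF_PERF_EVENT_EVENT = 6,
};

/* In bpf_perf_link_fill_kprobe(), the flags assignment would then become: */
info->perf_event.type = (type == BPF_FD_TYPE_KRETPROBE) ?
                        BPF_PERF_EVENT_KRETPROBE : BPF_PERF_EVENT_KPROBE;
```

Since the enum is still under review and not yet part of a released kernel, placing the retprobe values next to their probe counterparts rather than appending them at the end is still an option.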
On Sat, Jun 24, 2023 at 5:55 AM Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > By introducing support for ->fill_link_info to the perf_event link, users
> > gain the ability to inspect it using `bpftool link show`.

[...]

> > +        info->perf_event.kprobe.offset = offset;
> > +        if (type == BPF_FD_TYPE_KRETPROBE)
> > +                info->perf_event.kprobe.flags = 1;
>
> hm... ok, sorry, I didn't realize that these flags are not part of
> UAPI. I don't think just randomly defining 1 to mean retprobe is a
> good approach. Let's drop flags if there are actually no flags.
>
> How about in addition to BPF_PERF_EVENT_UPROBE add
> BPF_PERF_EVENT_URETPROBE, and for BPF_PERF_EVENT_KPROBE add also
> BPF_PERF_EVENT_KRETPROBE. They will share respective perf_event.uprobe
> and perf_event.kprobe sections in bpf_link_info.
>
> It seems consistent with what we did for bpf_task_fd_type enum.

Good idea. Will do it.
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 23691ea..1c579d5 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1056,6 +1056,14 @@ enum bpf_link_type {
         MAX_BPF_LINK_TYPE,
 };
 
+enum bpf_perf_event_type {
+        BPF_PERF_EVENT_UNSPEC = 0,
+        BPF_PERF_EVENT_UPROBE = 1,
+        BPF_PERF_EVENT_KPROBE = 2,
+        BPF_PERF_EVENT_TRACEPOINT = 3,
+        BPF_PERF_EVENT_EVENT = 4,
+};
+
 /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
  *
  * NONE(default): No further bpf programs allowed in the subtree.
@@ -6443,6 +6451,33 @@ struct bpf_link_info {
                         __u32 count;
                         __u32 flags;
                 } kprobe_multi;
+                struct {
+                        __u32 type; /* enum bpf_perf_event_type */
+                        __u32 :32;
+                        union {
+                                struct {
+                                        __aligned_u64 file_name; /* in/out */
+                                        __u32 name_len;
+                                        __u32 offset; /* offset from file_name */
+                                        __u32 flags;
+                                } uprobe; /* BPF_PERF_EVENT_UPROBE */
+                                struct {
+                                        __aligned_u64 func_name; /* in/out */
+                                        __u32 name_len;
+                                        __u32 offset; /* offset from func_name */
+                                        __u64 addr;
+                                        __u32 flags;
+                                } kprobe; /* BPF_PERF_EVENT_KPROBE */
+                                struct {
+                                        __aligned_u64 tp_name; /* in/out */
+                                        __u32 name_len;
+                                } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
+                                struct {
+                                        __u64 config;
+                                        __u32 type;
+                                } event; /* BPF_PERF_EVENT_EVENT */
+                        };
+                } perf_event;
         };
 } __attribute__((aligned(8)));
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c863d39..02dad3c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
         return 0;
 }
 
+#ifdef CONFIG_KPROBE_EVENTS
+static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
+                                     struct bpf_link_info *info)
+{
+        char __user *uname;
+        u64 addr, offset;
+        u32 ulen, type;
+        int err;
+
+        uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
+        ulen = info->perf_event.kprobe.name_len;
+        info->perf_event.type = BPF_PERF_EVENT_KPROBE;
+        err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+                                        &type);
+        if (err)
+                return err;
+
+        info->perf_event.kprobe.offset = offset;
+        if (type == BPF_FD_TYPE_KRETPROBE)
+                info->perf_event.kprobe.flags = 1;
+        if (!kallsyms_show_value(current_cred()))
+                return 0;
+        info->perf_event.kprobe.addr = addr;
+        return 0;
+}
+#endif
+
+#ifdef CONFIG_UPROBE_EVENTS
+static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
+                                     struct bpf_link_info *info)
+{
+        char __user *uname;
+        u64 addr, offset;
+        u32 ulen, type;
+        int err;
+
+        uname = u64_to_user_ptr(info->perf_event.uprobe.file_name);
+        ulen = info->perf_event.uprobe.name_len;
+        info->perf_event.type = BPF_PERF_EVENT_UPROBE;
+        err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+                                        &type);
+        if (err)
+                return err;
+
+        info->perf_event.uprobe.offset = offset;
+        if (type == BPF_FD_TYPE_URETPROBE)
+                info->perf_event.uprobe.flags = 1;
+        return 0;
+}
+#endif
+
+static int bpf_perf_link_fill_probe(const struct perf_event *event,
+                                    struct bpf_link_info *info)
+{
+#ifdef CONFIG_KPROBE_EVENTS
+        if (event->tp_event->flags & TRACE_EVENT_FL_KPROBE)
+                return bpf_perf_link_fill_kprobe(event, info);
+#endif
+#ifdef CONFIG_UPROBE_EVENTS
+        if (event->tp_event->flags & TRACE_EVENT_FL_UPROBE)
+                return bpf_perf_link_fill_uprobe(event, info);
+#endif
+        return -EOPNOTSUPP;
+}
+
+static int bpf_perf_link_fill_tracepoint(const struct perf_event *event,
+                                         struct bpf_link_info *info)
+{
+        char __user *uname;
+        u64 addr, offset;
+        u32 ulen, type;
+
+        uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name);
+        ulen = info->perf_event.tracepoint.name_len;
+        info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT;
+        return bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+                                         &type);
+}
+
+static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
+                                         struct bpf_link_info *info)
+{
+        info->perf_event.event.type = event->attr.type;
+        info->perf_event.event.config = event->attr.config;
+        info->perf_event.type = BPF_PERF_EVENT_EVENT;
+        return 0;
+}
+
+static int bpf_perf_link_fill_link_info(const struct bpf_link *link,
+                                        struct bpf_link_info *info)
+{
+        struct bpf_perf_link *perf_link;
+        const struct perf_event *event;
+
+        perf_link = container_of(link, struct bpf_perf_link, link);
+        event = perf_get_event(perf_link->perf_file);
+        if (IS_ERR(event))
+                return PTR_ERR(event);
+
+        if (!event->prog)
+                return -EINVAL;
+
+        switch (event->prog->type) {
+        case BPF_PROG_TYPE_PERF_EVENT:
+                return bpf_perf_link_fill_perf_event(event, info);
+        case BPF_PROG_TYPE_TRACEPOINT:
+                return bpf_perf_link_fill_tracepoint(event, info);
+        case BPF_PROG_TYPE_KPROBE:
+                return bpf_perf_link_fill_probe(event, info);
+        default:
+                return -EOPNOTSUPP;
+        }
+}
+
 static const struct bpf_link_ops bpf_perf_link_lops = {
         .release = bpf_perf_link_release,
         .dealloc = bpf_perf_link_dealloc,
+        .fill_link_info = bpf_perf_link_fill_link_info,
 };
 
 static int bpf_perf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 23691ea..1c579d5 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1056,6 +1056,14 @@ enum bpf_link_type {
         MAX_BPF_LINK_TYPE,
 };
 
+enum bpf_perf_event_type {
+        BPF_PERF_EVENT_UNSPEC = 0,
+        BPF_PERF_EVENT_UPROBE = 1,
+        BPF_PERF_EVENT_KPROBE = 2,
+        BPF_PERF_EVENT_TRACEPOINT = 3,
+        BPF_PERF_EVENT_EVENT = 4,
+};
+
 /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
  *
  * NONE(default): No further bpf programs allowed in the subtree.
@@ -6443,6 +6451,33 @@ struct bpf_link_info {
                         __u32 count;
                         __u32 flags;
                 } kprobe_multi;
+                struct {
+                        __u32 type; /* enum bpf_perf_event_type */
+                        __u32 :32;
+                        union {
+                                struct {
+                                        __aligned_u64 file_name; /* in/out */
+                                        __u32 name_len;
+                                        __u32 offset; /* offset from file_name */
+                                        __u32 flags;
+                                } uprobe; /* BPF_PERF_EVENT_UPROBE */
+                                struct {
+                                        __aligned_u64 func_name; /* in/out */
+                                        __u32 name_len;
+                                        __u32 offset; /* offset from func_name */
+                                        __u64 addr;
+                                        __u32 flags;
+                                } kprobe; /* BPF_PERF_EVENT_KPROBE */
+                                struct {
+                                        __aligned_u64 tp_name; /* in/out */
+                                        __u32 name_len;
+                                } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
+                                struct {
+                                        __u64 config;
+                                        __u32 type;
+                                } event; /* BPF_PERF_EVENT_EVENT */
+                        };
+                } perf_event;
         };
 } __attribute__((aligned(8)));
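For context on how this uapi might be consumed, here is a rough userspace sketch built on libbpf's bpf_obj_get_info_by_fd(). The caller-supplied buffer convention follows the in/out annotations above, while the helper name show_kprobe_link, the buffer size, the error handling, and the output format are assumptions of the sketch rather than anything defined by the patch:

```c
#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>            /* bpf_obj_get_info_by_fd() */
#include <linux/bpf.h>          /* struct bpf_link_info, enum bpf_perf_event_type */

/* Sketch: show what a kprobe-backed perf_event link is attached to.
 * The caller passes a buffer via func_name/name_len and the kernel fills it;
 * addr stays zero unless the caller may see kernel addresses (kptr_restrict).
 */
static int show_kprobe_link(int link_fd)
{
        struct bpf_link_info info;
        __u32 len = sizeof(info);
        char name[256] = {};
        int err;

        memset(&info, 0, sizeof(info));
        info.perf_event.kprobe.func_name = (__u64)(unsigned long)name;
        info.perf_event.kprobe.name_len = sizeof(name);

        err = bpf_obj_get_info_by_fd(link_fd, &info, &len);
        if (err)
                return err;
        if (info.perf_event.type != BPF_PERF_EVENT_KPROBE)
                return -1;      /* some other perf_event link flavor */

        printf("kprobe: %s+%u addr 0x%llx\n", name,
               info.perf_event.kprobe.offset,
               (unsigned long long)info.perf_event.kprobe.addr);
        return 0;
}
```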
By introducing support for ->fill_link_info to the perf_event link, users
gain the ability to inspect it using `bpftool link show`. While the current
approach involves accessing this information via `bpftool perf show`,
consolidating link information for all link types in one place offers
greater convenience. Additionally, this patch extends support to the
generic perf event, which is not currently accommodated by
`bpftool perf show`. Only the perf type and config are exposed to
userspace; other attributes such as sample_period and sample_freq are
ignored. Note that if the user is not permitted to see kernel addresses
(as controlled by kptr_restrict), the probed address will not be exposed,
maintaining the existing security restrictions.

A new enum bpf_perf_event_type is introduced to help the user understand
which struct is relevant.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/uapi/linux/bpf.h       |  35 +++++++++++++
 kernel/bpf/syscall.c           | 115 +++++++++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  35 +++++++++++++
 3 files changed, 185 insertions(+)
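For the generic perf event case mentioned above, only the perf type and config come back to userspace. A minimal sketch of reading those two fields, again assuming the proposed uapi layout and libbpf's bpf_obj_get_info_by_fd(); the helper name show_perf_event_link and the error handling are illustrative only:

```c
#include <stdio.h>
#include <bpf/bpf.h>            /* bpf_obj_get_info_by_fd() */
#include <linux/bpf.h>          /* struct bpf_link_info, enum bpf_perf_event_type */

/* Sketch: print type/config for a link backed by a generic perf event.
 * These mirror perf_event_attr.type and perf_event_attr.config; attributes
 * such as sample_period/sample_freq are intentionally not reported.
 */
static int show_perf_event_link(int link_fd)
{
        struct bpf_link_info info = {};
        __u32 len = sizeof(info);

        if (bpf_obj_get_info_by_fd(link_fd, &info, &len))
                return -1;
        if (info.perf_event.type != BPF_PERF_EVENT_EVENT)
                return -1;      /* not a generic perf event link */

        printf("perf event: type %u config %llu\n",
               info.perf_event.event.type,
               (unsigned long long)info.perf_event.event.config);
        return 0;
}
```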