diff mbox series

[bpf-next,2/3] bpf: Add bpf_perf_event_read_sample() helper

Message ID 20221101052340.1210239-3-namhyung@kernel.org (mailing list archive)
State Changes Requested
Delegated to: BPF
Headers show
Series bpf: Add bpf_perf_event_read_sample() helper (v1) | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success
netdev/cover_letter success Series has a cover letter
netdev/patch_count success
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1693 this patch: 1693
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on x86_64 with llvm-16
netdev/cc_maintainers warning 3 maintainers not CCed: mhiramat@kernel.org song@kernel.org martin.lau@linux.dev
netdev/build_clang success Errors and warnings before: 171 this patch: 171
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1685 this patch: 1685
netdev/checkpatch warning WARNING: ENOSYS means 'invalid syscall nr' and nothing else WARNING: line length of 106 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 pending Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-5 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_progs on s390x with gcc

Commit Message

Namhyung Kim Nov. 1, 2022, 5:23 a.m. UTC
The bpf_perf_event_read_sample() helper retrieves the specified sample
data (selected by a PERF_SAMPLE_* flag in the argument) from BPF, so a
program can make a filtering decision on samples.  Currently only the
PERF_SAMPLE_IP and PERF_SAMPLE_ADDR flags are supported.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 include/uapi/linux/bpf.h       | 23 ++++++++++++++++
 kernel/trace/bpf_trace.c       | 49 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 23 ++++++++++++++++
 3 files changed, 95 insertions(+)

Comments

Jiri Olsa Nov. 1, 2022, 10:02 a.m. UTC | #1
On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> The bpf_perf_event_read_sample() helper is to get the specified sample
> data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
> decision for filtering on samples.  Currently PERF_SAMPLE_IP and
> PERF_SAMPLE_DATA flags are supported only.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  include/uapi/linux/bpf.h       | 23 ++++++++++++++++
>  kernel/trace/bpf_trace.c       | 49 ++++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h | 23 ++++++++++++++++
>  3 files changed, 95 insertions(+)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 94659f6b3395..cba501de9373 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -5481,6 +5481,28 @@ union bpf_attr {
>   *		0 on success.
>   *
>   *		**-ENOENT** if the bpf_local_storage cannot be found.
> + *
> + * long bpf_perf_event_read_sample(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 sample_flags)
> + *	Description
> + *		For an eBPF program attached to a perf event, retrieve the
> + *		sample data associated to *ctx*	and store it in the buffer
> + *		pointed by *buf* up to size *size* bytes.
> + *
> + *		The *sample_flags* should contain a single value in the
> + *		**enum perf_event_sample_format**.
> + *	Return
> + *		On success, number of bytes written to *buf*. On error, a
> + *		negative value.
> + *
> + *		The *buf* can be set to **NULL** to return the number of bytes
> + *		required to store the requested sample data.
> + *
> + *		**-EINVAL** if *sample_flags* is not a PERF_SAMPLE_* flag.
> + *
> + *		**-ENOENT** if the associated perf event doesn't have the data.
> + *
> + *		**-ENOSYS** if system doesn't support the sample data to be
> + *		retrieved.
>   */
>  #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
>  	FN(unspec, 0, ##ctx)				\
> @@ -5695,6 +5717,7 @@ union bpf_attr {
>  	FN(user_ringbuf_drain, 209, ##ctx)		\
>  	FN(cgrp_storage_get, 210, ##ctx)		\
>  	FN(cgrp_storage_delete, 211, ##ctx)		\
> +	FN(perf_event_read_sample, 212, ##ctx)		\
>  	/* */
>  
>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index ce0228c72a93..befd937afa3c 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -28,6 +28,7 @@
>  
>  #include <uapi/linux/bpf.h>
>  #include <uapi/linux/btf.h>
> +#include <uapi/linux/perf_event.h>
>  
>  #include <asm/tlb.h>
>  
> @@ -1743,6 +1744,52 @@ static const struct bpf_func_proto bpf_read_branch_records_proto = {
>  	.arg4_type      = ARG_ANYTHING,
>  };
>  
> +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
> +	   void *, buf, u32, size, u64, flags)
> +{

I wonder we could add perf_btf (like we have tp_btf) program type that
could access ctx->data directly without helpers

> +	struct perf_sample_data *sd = ctx->data;
> +	void *data;
> +	u32 to_copy = sizeof(u64);
> +
> +	/* only allow a single sample flag */
> +	if (!is_power_of_2(flags))
> +		return -EINVAL;
> +
> +	/* support reading only already populated info */
> +	if (flags & ~sd->sample_flags)
> +		return -ENOENT;
> +
> +	switch (flags) {
> +	case PERF_SAMPLE_IP:
> +		data = &sd->ip;
> +		break;
> +	case PERF_SAMPLE_ADDR:
> +		data = &sd->addr;
> +		break;

AFAICS from pe_prog_convert_ctx_access you should be able to read addr
directly from context right? same as sample_period.. so I think if this
will be generic way to read sample data, should we add sample_period
as well?


> +	default:
> +		return -ENOSYS;
> +	}
> +
> +	if (!buf)
> +		return to_copy;
> +
> +	if (size < to_copy)
> +		to_copy = size;

should we fail in here instead? is there any point in returning
not complete data?

jirka


> +
> +	memcpy(buf, data, to_copy);
> +	return to_copy;
> +}
> +
> +static const struct bpf_func_proto bpf_perf_event_read_sample_proto = {
> +	.func           = bpf_perf_event_read_sample,
> +	.gpl_only       = true,
> +	.ret_type       = RET_INTEGER,
> +	.arg1_type      = ARG_PTR_TO_CTX,
> +	.arg2_type      = ARG_PTR_TO_MEM_OR_NULL,
> +	.arg3_type      = ARG_CONST_SIZE_OR_ZERO,
> +	.arg4_type      = ARG_ANYTHING,
> +};
> +
>  static const struct bpf_func_proto *
>  pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  {
> @@ -1759,6 +1806,8 @@ pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  		return &bpf_read_branch_records_proto;
>  	case BPF_FUNC_get_attach_cookie:
>  		return &bpf_get_attach_cookie_proto_pe;
> +	case BPF_FUNC_perf_event_read_sample:
> +		return &bpf_perf_event_read_sample_proto;
>  	default:
>  		return bpf_tracing_func_proto(func_id, prog);
>  	}
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 94659f6b3395..cba501de9373 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -5481,6 +5481,28 @@ union bpf_attr {
>   *		0 on success.
>   *
>   *		**-ENOENT** if the bpf_local_storage cannot be found.
> + *
> + * long bpf_perf_event_read_sample(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 sample_flags)
> + *	Description
> + *		For an eBPF program attached to a perf event, retrieve the
> + *		sample data associated to *ctx*	and store it in the buffer
> + *		pointed by *buf* up to size *size* bytes.
> + *
> + *		The *sample_flags* should contain a single value in the
> + *		**enum perf_event_sample_format**.
> + *	Return
> + *		On success, number of bytes written to *buf*. On error, a
> + *		negative value.
> + *
> + *		The *buf* can be set to **NULL** to return the number of bytes
> + *		required to store the requested sample data.
> + *
> + *		**-EINVAL** if *sample_flags* is not a PERF_SAMPLE_* flag.
> + *
> + *		**-ENOENT** if the associated perf event doesn't have the data.
> + *
> + *		**-ENOSYS** if system doesn't support the sample data to be
> + *		retrieved.
>   */
>  #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
>  	FN(unspec, 0, ##ctx)				\
> @@ -5695,6 +5717,7 @@ union bpf_attr {
>  	FN(user_ringbuf_drain, 209, ##ctx)		\
>  	FN(cgrp_storage_get, 210, ##ctx)		\
>  	FN(cgrp_storage_delete, 211, ##ctx)		\
> +	FN(perf_event_read_sample, 212, ##ctx)		\
>  	/* */
>  
>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> -- 
> 2.38.1.273.g43a17bfeac-goog
>
Alexei Starovoitov Nov. 1, 2022, 6:26 p.m. UTC | #2
On Tue, Nov 1, 2022 at 3:03 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> > [...]
>
> I wonder we could add perf_btf (like we have tp_btf) program type that
> could access ctx->data directly without helpers
>
> > [...]
> > +     switch (flags) {
> > +     case PERF_SAMPLE_IP:
> > +             data = &sd->ip;
> > +             break;
> > +     case PERF_SAMPLE_ADDR:
> > +             data = &sd->addr;
> > +             break;
>
> AFAICS from pe_prog_convert_ctx_access you should be able to read addr
> directly from context right? same as sample_period.. so I think if this
> will be generic way to read sample data, should we add sample_period
> as well?

+1
Let's avoid new stable helpers for this.
Pls use CORE and read perf_sample_data directly.
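
[Editor's note: the CO-RE alternative suggested here might look roughly
like the sketch below.  This is a hypothetical illustration, not code from
the series: it assumes a BTF-generated vmlinux.h, and whether the verifier
permits reinterpreting the perf_event context this way — and whether the
sample fields are populated at that point — is exactly what the rest of
this thread debates.]

```c
/* Hypothetical sketch of the CO-RE approach; not from the patch. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

SEC("perf_event")
int filter_sample(struct bpf_perf_event_data *ctx)
{
	/* reinterpret the uapi ctx as the kernel-side structure */
	struct bpf_perf_event_data_kern *kctx = (void *)ctx;
	struct perf_sample_data *sd;
	u64 ip, addr;

	sd = BPF_CORE_READ(kctx, data);
	ip = BPF_CORE_READ(sd, ip);	/* PERF_SAMPLE_IP */
	addr = BPF_CORE_READ(sd, addr);	/* PERF_SAMPLE_ADDR */

	/* illustrative filter: keep only samples in a made-up range */
	return addr >= 0x400000 && addr < 0x800000;
}

char LICENSE[] SEC("license") = "GPL";
```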
Song Liu Nov. 1, 2022, 6:46 p.m. UTC | #3
On Tue, Nov 1, 2022 at 11:26 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Nov 1, 2022 at 3:03 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> > > [...]
> >
> > I wonder we could add perf_btf (like we have tp_btf) program type that
> > could access ctx->data directly without helpers
> >
> > > [...]
> >
> > AFAICS from pe_prog_convert_ctx_access you should be able to read addr
> > directly from context right? same as sample_period.. so I think if this
> > will be generic way to read sample data, should we add sample_period
> > as well?
>
> +1
> Let's avoid new stable helpers for this.
> Pls use CORE and read perf_sample_data directly.

We have legacy ways to access sample_period and addr with
struct bpf_perf_event_data and struct bpf_perf_event_data_kern. I think
mixing that with CORE makes it confusing for the user. And a helper or a
kfunc would make it easier to follow. perf_btf might also be a good
approach for this.

Thanks,
Song
Alexei Starovoitov Nov. 1, 2022, 6:52 p.m. UTC | #4
On Tue, Nov 1, 2022 at 11:47 AM Song Liu <song@kernel.org> wrote:
>
> On Tue, Nov 1, 2022 at 11:26 AM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Tue, Nov 1, 2022 at 3:03 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> > > > [...]
> > >
> > > I wonder we could add perf_btf (like we have tp_btf) program type that
> > > could access ctx->data directly without helpers
> > >
> > > > [...]
> > >
> > > AFAICS from pe_prog_convert_ctx_access you should be able to read addr
> > > directly from context right? same as sample_period.. so I think if this
> > > will be generic way to read sample data, should we add sample_period
> > > as well?
> >
> > +1
> > Let's avoid new stable helpers for this.
> > Pls use CORE and read perf_sample_data directly.
>
> We have legacy ways to access sample_period and addr with
> struct bpf_perf_event_data and struct bpf_perf_event_data_kern. I
> think mixing that
> with CORE makes it confusing for the user. And a helper or a kfunc would make it
> easier to follow. perf_btf might also be a good approach for this.

imo that's a counter argument to non-CORE style.
struct bpf_perf_event_data has sample_period and addr,
and as soon as we pushed the boundaries it turned out it's not enough.
Now we're proposing to extend uapi a bit with sample_ip.
That will repeat the same mistake.
Just use CORE and read everything that is there today
and will be there in the future.
Song Liu Nov. 1, 2022, 8:04 p.m. UTC | #5
On Tue, Nov 1, 2022 at 11:53 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Nov 1, 2022 at 11:47 AM Song Liu <song@kernel.org> wrote:
> >
> > On Tue, Nov 1, 2022 at 11:26 AM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Tue, Nov 1, 2022 at 3:03 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > > >
> > > > On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> > > > > [...]
> > > >
> > > > I wonder we could add perf_btf (like we have tp_btf) program type that
> > > > could access ctx->data directly without helpers
> > > >
> > > > > [...]
> > > >
> > > > AFAICS from pe_prog_convert_ctx_access you should be able to read addr
> > > > directly from context right? same as sample_period.. so I think if this
> > > > will be generic way to read sample data, should we add sample_period
> > > > as well?
> > >
> > > +1
> > > Let's avoid new stable helpers for this.
> > > Pls use CORE and read perf_sample_data directly.
> >
> > We have legacy ways to access sample_period and addr with
> > struct bpf_perf_event_data and struct bpf_perf_event_data_kern. I
> > think mixing that
> > with CORE makes it confusing for the user. And a helper or a kfunc would make it
> > easier to follow. perf_btf might also be a good approach for this.
>
> imo that's a counter argument to non-CORE style.
> struct bpf_perf_event_data has sample_period and addr,
> and as soon as we pushed the boundaries it turned out it's not enough.
> Now we're proposing to extend uapi a bit with sample_ip.
> That will repeat the same mistake.
> Just use CORE and read everything that is there today
> and will be there in the future.

Another part of this effort is that we need the perf_event code to
prepare the required fields before calling the BPF program. I think we
will need some logic in addition to CORE to get that right. How about we
add perf_btf, where the perf_event code prepares all fields before
calling the BPF program? perf_btf + CORE will be able to read all fields
in the sample.

Thanks,
Song
Namhyung Kim Nov. 1, 2022, 10:16 p.m. UTC | #6
Hi,

On Tue, Nov 1, 2022 at 1:04 PM Song Liu <song@kernel.org> wrote:
>
> On Tue, Nov 1, 2022 at 11:53 AM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Tue, Nov 1, 2022 at 11:47 AM Song Liu <song@kernel.org> wrote:
> > >
> > > On Tue, Nov 1, 2022 at 11:26 AM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > On Tue, Nov 1, 2022 at 3:03 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > > > >
> > > > > On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> > > > > > The bpf_perf_event_read_sample() helper is to get the specified sample
> > > > > > data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
> > > > > > decision for filtering on samples.  Currently PERF_SAMPLE_IP and
> > > > > > PERF_SAMPLE_DATA flags are supported only.
> > > > > >
> > > > > > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > > > > > ---
> > > > > >  include/uapi/linux/bpf.h       | 23 ++++++++++++++++
> > > > > >  kernel/trace/bpf_trace.c       | 49 ++++++++++++++++++++++++++++++++++
> > > > > >  tools/include/uapi/linux/bpf.h | 23 ++++++++++++++++
> > > > > >  3 files changed, 95 insertions(+)
> > > > > >
> > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > index 94659f6b3395..cba501de9373 100644
> > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > @@ -5481,6 +5481,28 @@ union bpf_attr {
> > > > > >   *           0 on success.
> > > > > >   *
> > > > > >   *           **-ENOENT** if the bpf_local_storage cannot be found.
> > > > > > + *
> > > > > > + * long bpf_perf_event_read_sample(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 sample_flags)
> > > > > > + *   Description
> > > > > > + *           For an eBPF program attached to a perf event, retrieve the
> > > > > > + *           sample data associated to *ctx* and store it in the buffer
> > > > > > + *           pointed by *buf* up to size *size* bytes.
> > > > > > + *
> > > > > > + *           The *sample_flags* should contain a single value in the
> > > > > > + *           **enum perf_event_sample_format**.
> > > > > > + *   Return
> > > > > > + *           On success, number of bytes written to *buf*. On error, a
> > > > > > + *           negative value.
> > > > > > + *
> > > > > > + *           The *buf* can be set to **NULL** to return the number of bytes
> > > > > > + *           required to store the requested sample data.
> > > > > > + *
> > > > > > + *           **-EINVAL** if *sample_flags* is not a PERF_SAMPLE_* flag.
> > > > > > + *
> > > > > > + *           **-ENOENT** if the associated perf event doesn't have the data.
> > > > > > + *
> > > > > > + *           **-ENOSYS** if system doesn't support the sample data to be
> > > > > > + *           retrieved.
> > > > > >   */
> > > > > >  #define ___BPF_FUNC_MAPPER(FN, ctx...)                       \
> > > > > >       FN(unspec, 0, ##ctx)                            \
> > > > > > @@ -5695,6 +5717,7 @@ union bpf_attr {
> > > > > >       FN(user_ringbuf_drain, 209, ##ctx)              \
> > > > > >       FN(cgrp_storage_get, 210, ##ctx)                \
> > > > > >       FN(cgrp_storage_delete, 211, ##ctx)             \
> > > > > > +     FN(perf_event_read_sample, 212, ##ctx)          \
> > > > > >       /* */
> > > > > >
> > > > > >  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> > > > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > > > index ce0228c72a93..befd937afa3c 100644
> > > > > > --- a/kernel/trace/bpf_trace.c
> > > > > > +++ b/kernel/trace/bpf_trace.c
> > > > > > @@ -28,6 +28,7 @@
> > > > > >
> > > > > >  #include <uapi/linux/bpf.h>
> > > > > >  #include <uapi/linux/btf.h>
> > > > > > +#include <uapi/linux/perf_event.h>
> > > > > >
> > > > > >  #include <asm/tlb.h>
> > > > > >
> > > > > > @@ -1743,6 +1744,52 @@ static const struct bpf_func_proto bpf_read_branch_records_proto = {
> > > > > >       .arg4_type      = ARG_ANYTHING,
> > > > > >  };
> > > > > >
> > > > > > +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
> > > > > > +        void *, buf, u32, size, u64, flags)
> > > > > > +{
> > > > >
> > > > > I wonder we could add perf_btf (like we have tp_btf) program type that
> > > > > could access ctx->data directly without helpers
> > > > >
> > > > > > +     struct perf_sample_data *sd = ctx->data;
> > > > > > +     void *data;
> > > > > > +     u32 to_copy = sizeof(u64);
> > > > > > +
> > > > > > +     /* only allow a single sample flag */
> > > > > > +     if (!is_power_of_2(flags))
> > > > > > +             return -EINVAL;
> > > > > > +
> > > > > > +     /* support reading only already populated info */
> > > > > > +     if (flags & ~sd->sample_flags)
> > > > > > +             return -ENOENT;
> > > > > > +
> > > > > > +     switch (flags) {
> > > > > > +     case PERF_SAMPLE_IP:
> > > > > > +             data = &sd->ip;
> > > > > > +             break;
> > > > > > +     case PERF_SAMPLE_ADDR:
> > > > > > +             data = &sd->addr;
> > > > > > +             break;
> > > > >
> > > > > AFAICS from pe_prog_convert_ctx_access you should be able to read addr
> > > > > directly from context right? same as sample_period.. so I think if this
> > > > > will be generic way to read sample data, should we add sample_period
> > > > > as well?
> > > >
> > > > +1
> > > > Let's avoid new stable helpers for this.
> > > > Pls use CORE and read perf_sample_data directly.
> > >
> > > We have legacy ways to access sample_period and addr with
> > > struct bpf_perf_event_data and struct bpf_perf_event_data_kern. I
> > > think mixing that
> > > with CORE makes it confusing for the user. And a helper or a kfunc would make it
> > > easier to follow. perf_btf might also be a good approach for this.
> >
> > imo that's a counter argument to non-CORE style.
> > struct bpf_perf_event_data has sample_period and addr,
> > and as soon as we pushed the boundaries it turned out it's not enough.
> > Now we're proposing to extend uapi a bit with sample_ip.
> > That will repeat the same mistake.
> > Just use CORE and read everything that is there today
> > and will be there in the future.
>
> Another work of this effort is that we need the perf_event to prepare
> required fields before calling the BPF program. I think we will need
> some logic in addition to CORE to get that right. How about we add
> perf_btf where the perf_event prepare all fields before calling the
> BPF program? perf_btf + CORE will be able to read all fields in the
> sample.

IIUC we want something like below to access sample data directly,
right?

  BPF_CORE_READ(ctx, data, ip);

Some fields like raw and callchains will have variable length data
so it'd be hard to check the boundary at load time.  Also it's possible
that some fields are not set (according to sample type), and it'd be
the user's (or programmer's) responsibility to check if the data is
valid.  If these are not the concerns, I think I'm good.

Thanks,
Namhyung
Song Liu Nov. 2, 2022, 12:13 a.m. UTC | #7
On Tue, Nov 1, 2022 at 3:17 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > > > >
> > > > > +1
> > > > > Let's avoid new stable helpers for this.
> > > > > Pls use CORE and read perf_sample_data directly.
> > > >
> > > > We have legacy ways to access sample_period and addr with
> > > > struct bpf_perf_event_data and struct bpf_perf_event_data_kern. I
> > > > think mixing that
> > > > with CORE makes it confusing for the user. And a helper or a kfunc would make it
> > > > easier to follow. perf_btf might also be a good approach for this.
> > >
> > > imo that's a counter argument to non-CORE style.
> > > struct bpf_perf_event_data has sample_period and addr,
> > > and as soon as we pushed the boundaries it turned out it's not enough.
> > > Now we're proposing to extend uapi a bit with sample_ip.
> > > That will repeat the same mistake.
> > > Just use CORE and read everything that is there today
> > > and will be there in the future.
> >
> > Another work of this effort is that we need the perf_event to prepare
> > required fields before calling the BPF program. I think we will need
> > some logic in addition to CORE to get that right. How about we add
> > perf_btf where the perf_event prepare all fields before calling the
> > BPF program? perf_btf + CORE will be able to read all fields in the
> > sample.
>
> IIUC we want something like below to access sample data directly,
> right?
>
>   BPF_CORE_READ(ctx, data, ip);
>

I haven't tried this, but I guess we may need something like

data = ctx->data;
BPF_CORE_READ(data, ip);

> Some fields like raw and callchains will have variable length data
> so it'd be hard to check the boundary at load time.

I think we are fine as long as we can check boundaries at run time.

> Also it's possible
> that some fields are not set (according to sample type), and it'd be
> the user's (or programmer's) responsibility to check if the data is
> valid.  If these are not the concerns, I think I'm good.

So we still need 1/3 of the set to make sure the data is valid?

Thanks,
Song
Namhyung Kim Nov. 2, 2022, 10:18 p.m. UTC | #8
On Tue, Nov 1, 2022 at 5:13 PM Song Liu <song@kernel.org> wrote:
>
> On Tue, Nov 1, 2022 at 3:17 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > IIUC we want something like below to access sample data directly,
> > right?
> >
> >   BPF_CORE_READ(ctx, data, ip);
> >
>
> I haven't tried this, but I guess we may need something like
>
> data = ctx->data;
> BPF_CORE_READ(data, ip);

Ok, will try.

>
> > Some fields like raw and callchains will have variable length data
> > so it'd be hard to check the boundary at load time.
>
> I think we are fine as long as we can check boundaries at run time.

Sure, that means it's the responsibility of BPF writers, right?

>
> > Also it's possible
> > that some fields are not set (according to sample type), and it'd be
> > the user's (or programmer's) responsibility to check if the data is
> > valid.  If these are not the concerns, I think I'm good.
>
> So we still need 1/3 of the set to make sure the data is valid?

Of course, I'll keep it in the v2.

Thanks,
Namhyung
Song Liu Nov. 3, 2022, 6:41 p.m. UTC | #9
On Wed, Nov 2, 2022 at 3:18 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Tue, Nov 1, 2022 at 5:13 PM Song Liu <song@kernel.org> wrote:
> >
> > On Tue, Nov 1, 2022 at 3:17 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > > IIUC we want something like below to access sample data directly,
> > > right?
> > >
> > >   BPF_CORE_READ(ctx, data, ip);
> > >
> >
> > I haven't tried this, but I guess we may need something like
> >
> > data = ctx->data;
> > BPF_CORE_READ(data, ip);
>
> Ok, will try.
>
> >
> > > Some fields like raw and callchains will have variable length data
> > > so it'd be hard to check the boundary at load time.
> >
> > I think we are fine as long as we can check boundaries at run time.
>
> Sure, that means it's the responsibility of BPF writers, right?

Right, the author of the BPF program could check whether the data
is valid.

Song


Yonghong Song Nov. 3, 2022, 7:45 p.m. UTC | #10
On 11/1/22 3:02 AM, Jiri Olsa wrote:
> On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
>> The bpf_perf_event_read_sample() helper is to get the specified sample
>> data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
>> decision for filtering on samples.  Currently PERF_SAMPLE_IP and
>> PERF_SAMPLE_DATA flags are supported only.
>>
>> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
>> [...]
>> +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
>> +	   void *, buf, u32, size, u64, flags)
>> +{
> 
> I wonder we could add perf_btf (like we have tp_btf) program type that
> could access ctx->data directly without helpers

Martin and I have discussed an idea to introduce a generic helper like
     bpf_get_kern_ctx(void *ctx)
Given a context, the helper will return a PTR_TO_BTF_ID representing the
corresponding kernel ctx. So in the above example, user could call

     struct bpf_perf_event_data_kern *kctx = bpf_get_kern_ctx(ctx);
     ...

To implement the bpf_get_kern_ctx helper, the verifier can find the type
of the context and provide a hidden btf_id as the second parameter of
the actual kernel helper function like
     bpf_get_kern_ctx(ctx) {
        return ctx;
     }
     /* based on ctx_btf_id, find kctx_btf_id and return it to verifier */

     The bpf_get_kern_ctx helper can be inlined as well.

> 
>> +	struct perf_sample_data *sd = ctx->data;
>> +	void *data;
>> +	u32 to_copy = sizeof(u64);
>> +
>> +	/* only allow a single sample flag */
>> +	if (!is_power_of_2(flags))
>> +		return -EINVAL;
>> +
>> +	/* support reading only already populated info */
>> +	if (flags & ~sd->sample_flags)
>> +		return -ENOENT;
>> +
>> +	switch (flags) {
>> +	case PERF_SAMPLE_IP:
>> +		data = &sd->ip;
>> +		break;
>> +	case PERF_SAMPLE_ADDR:
>> +		data = &sd->addr;
>> +		break;
> 
> AFAICS from pe_prog_convert_ctx_access you should be able to read addr
> directly from context right? same as sample_period.. so I think if this
> will be generic way to read sample data, should we add sample_period
> as well?
> 
> 
>> +	default:
>> +		return -ENOSYS;
>> +	}
>> +
>> +	if (!buf)
>> +		return to_copy;
>> +
>> +	if (size < to_copy)
>> +		to_copy = size;
> 
> should we fail in here instead? is there any point in returning
> not complete data?
> 
> jirka
> 
> 
>> +
>> +	memcpy(buf, data, to_copy);
>> +	return to_copy;
>> +}
>> +
>> +static const struct bpf_func_proto bpf_perf_event_read_sample_proto = {
>> +	.func           = bpf_perf_event_read_sample,
>> +	.gpl_only       = true,
>> +	.ret_type       = RET_INTEGER,
>> +	.arg1_type      = ARG_PTR_TO_CTX,
>> +	.arg2_type      = ARG_PTR_TO_MEM_OR_NULL,
>> +	.arg3_type      = ARG_CONST_SIZE_OR_ZERO,
>> +	.arg4_type      = ARG_ANYTHING,
>> +};
>> +
>[...]
Song Liu Nov. 3, 2022, 8:55 p.m. UTC | #11
> On Nov 3, 2022, at 12:45 PM, Yonghong Song <yhs@meta.com> wrote:
> 
> 
> 
> On 11/1/22 3:02 AM, Jiri Olsa wrote:
>> On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
>>> The bpf_perf_event_read_sample() helper is to get the specified sample
>>> data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
>>> decision for filtering on samples.  Currently PERF_SAMPLE_IP and
>>> PERF_SAMPLE_DATA flags are supported only.
>>> 
>>> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
>>> [...]
>>>  +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
>>> +	   void *, buf, u32, size, u64, flags)
>>> +{
>> I wonder we could add perf_btf (like we have tp_btf) program type that
>> could access ctx->data directly without helpers
> 
> Martin and I have discussed an idea to introduce a generic helper like
>    bpf_get_kern_ctx(void *ctx)
> Given a context, the helper will return a PTR_TO_BTF_ID representing the
> corresponding kernel ctx. So in the above example, user could call
> 
>    struct bpf_perf_event_data_kern *kctx = bpf_get_kern_ctx(ctx);
>    ...

This is an interesting idea! 

> To implement bpf_get_kern_ctx helper, the verifier can find the type
> of the context and provide a hidden btf_id as the second parameter of
> the actual kernel helper function like
>    bpf_get_kern_ctx(ctx) {
>       return ctx;
>    }
>    /* based on ctx_btf_id, find kctx_btf_id and return it to verifier */

I think we will need a map of ctx_btf_id => kctx_btf_id. Shall we somehow
expose this to the user? 

Thanks,
Song


Yonghong Song Nov. 3, 2022, 9:21 p.m. UTC | #12
On 11/3/22 1:55 PM, Song Liu wrote:
> 
> 
>> On Nov 3, 2022, at 12:45 PM, Yonghong Song <yhs@meta.com> wrote:
>>
>>
>>
>> On 11/1/22 3:02 AM, Jiri Olsa wrote:
>>> On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
>>>> The bpf_perf_event_read_sample() helper is to get the specified sample
>>>> data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
>>>> decision for filtering on samples.  Currently PERF_SAMPLE_IP and
>>>> PERF_SAMPLE_DATA flags are supported only.
>>>>
>>>> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
>>>> [...]
>>>>   +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
>>>> +	   void *, buf, u32, size, u64, flags)
>>>> +{
>>> I wonder we could add perf_btf (like we have tp_btf) program type that
>>> could access ctx->data directly without helpers
>>
>> Martin and I have discussed an idea to introduce a generic helper like
>>     bpf_get_kern_ctx(void *ctx)
>> Given a context, the helper will return a PTR_TO_BTF_ID representing the
>> corresponding kernel ctx. So in the above example, user could call
>>
>>     struct bpf_perf_event_data_kern *kctx = bpf_get_kern_ctx(ctx);
>>     ...
> 
> This is an interesting idea!
> 
>> To implement bpf_get_kern_ctx helper, the verifier can find the type
>> of the context and provide a hidden btf_id as the second parameter of
>> the actual kernel helper function like
>>     bpf_get_kern_ctx(ctx) {
>>        return ctx;
>>     }
>>     /* based on ctx_btf_id, find kctx_btf_id and return it to verifier */
> 
> I think we will need a map of ctx_btf_id => kctx_btf_id. Shall we somehow
> expose this to the user?

Yes, inside the kernel we need a ctx_btf_id -> kctx_btf_id mapping.
Good question. We might not want to expose this mapping as a stable API.
So using a kfunc might be more appropriate.

Namhyung Kim Nov. 4, 2022, 6:18 a.m. UTC | #13
On Thu, Nov 3, 2022 at 2:21 PM Yonghong Song <yhs@meta.com> wrote:
>
>
>
> On 11/3/22 1:55 PM, Song Liu wrote:
> >
> >
> >> On Nov 3, 2022, at 12:45 PM, Yonghong Song <yhs@meta.com> wrote:
> >>
> >>
> >>
> >> On 11/1/22 3:02 AM, Jiri Olsa wrote:
> >>> On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
> >>>> The bpf_perf_event_read_sample() helper is to get the specified sample
> >>>> data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
> >>>> decision for filtering on samples.  Currently PERF_SAMPLE_IP and
> >>>> PERF_SAMPLE_DATA flags are supported only.
> >>>>
> >>>> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> >>>> [...]
> >>>>   +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
> >>>> +     void *, buf, u32, size, u64, flags)
> >>>> +{
> >>> I wonder we could add perf_btf (like we have tp_btf) program type that
> >>> could access ctx->data directly without helpers
> >>
> >> Martin and I have discussed an idea to introduce a generic helper like
> >>     bpf_get_kern_ctx(void *ctx)
> >> Given a context, the helper will return a PTR_TO_BTF_ID representing the
> >> corresponding kernel ctx. So in the above example, user could call
> >>
> >>     struct bpf_perf_event_data_kern *kctx = bpf_get_kern_ctx(ctx);
> >>     ...
> >
> > This is an interesting idea!
> >
> >> To implement bpf_get_kern_ctx helper, the verifier can find the type
> >> of the context and provide a hidden btf_id as the second parameter of
> >> the actual kernel helper function like
> >>     bpf_get_kern_ctx(ctx) {
> >>        return ctx;
> >>     }
> >>     /* based on ctx_btf_id, find kctx_btf_id and return it to verifier */
> >
> > I think we will need a map of ctx_btf_id => kctx_btf_id. Shall we somehow
> > expose this to the user?
>
> Yes, inside the kernel we need ctx_btf_id -> kctx_btf_id mapping.
> Good question. We might not want to this mapping as a stable API.
> So using kfunc might be more appropriate.

Ok, now I don't think I'm following well.. ;-)

So currently perf event type BPF programs can have perf_event
data context directly as an argument, but we want to disallow it?
I guess the context id mapping can be done implicitly based on
the prog type and/or attach type, but probably I'm missing
something here. :)

Thanks,
Namhyung
diff mbox series

Patch

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 94659f6b3395..cba501de9373 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5481,6 +5481,28 @@  union bpf_attr {
  *		0 on success.
  *
  *		**-ENOENT** if the bpf_local_storage cannot be found.
+ *
+ * long bpf_perf_event_read_sample(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 sample_flags)
+ *	Description
+ *		For an eBPF program attached to a perf event, retrieve the
+ *		sample data associated to *ctx*	and store it in the buffer
+ *		pointed by *buf* up to size *size* bytes.
+ *
+ *		The *sample_flags* should contain a single value in the
+ *		**enum perf_event_sample_format**.
+ *	Return
+ *		On success, number of bytes written to *buf*. On error, a
+ *		negative value.
+ *
+ *		The *buf* can be set to **NULL** to return the number of bytes
+ *		required to store the requested sample data.
+ *
+ *		**-EINVAL** if *sample_flags* is not a PERF_SAMPLE_* flag.
+ *
+ *		**-ENOENT** if the associated perf event doesn't have the data.
+ *
+ *		**-ENOSYS** if system doesn't support the sample data to be
+ *		retrieved.
  */
 #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
 	FN(unspec, 0, ##ctx)				\
@@ -5695,6 +5717,7 @@  union bpf_attr {
 	FN(user_ringbuf_drain, 209, ##ctx)		\
 	FN(cgrp_storage_get, 210, ##ctx)		\
 	FN(cgrp_storage_delete, 211, ##ctx)		\
+	FN(perf_event_read_sample, 212, ##ctx)		\
 	/* */
 
 /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index ce0228c72a93..befd937afa3c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -28,6 +28,7 @@ 
 
 #include <uapi/linux/bpf.h>
 #include <uapi/linux/btf.h>
+#include <uapi/linux/perf_event.h>
 
 #include <asm/tlb.h>
 
@@ -1743,6 +1744,52 @@  static const struct bpf_func_proto bpf_read_branch_records_proto = {
 	.arg4_type      = ARG_ANYTHING,
 };
 
+BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
+	   void *, buf, u32, size, u64, flags)
+{
+	struct perf_sample_data *sd = ctx->data;
+	void *data;
+	u32 to_copy = sizeof(u64);
+
+	/* only allow a single sample flag */
+	if (!is_power_of_2(flags))
+		return -EINVAL;
+
+	/* support reading only already populated info */
+	if (flags & ~sd->sample_flags)
+		return -ENOENT;
+
+	switch (flags) {
+	case PERF_SAMPLE_IP:
+		data = &sd->ip;
+		break;
+	case PERF_SAMPLE_ADDR:
+		data = &sd->addr;
+		break;
+	default:
+		return -ENOSYS;
+	}
+
+	if (!buf)
+		return to_copy;
+
+	if (size < to_copy)
+		to_copy = size;
+
+	memcpy(buf, data, to_copy);
+	return to_copy;
+}
+
+static const struct bpf_func_proto bpf_perf_event_read_sample_proto = {
+	.func           = bpf_perf_event_read_sample,
+	.gpl_only       = true,
+	.ret_type       = RET_INTEGER,
+	.arg1_type      = ARG_PTR_TO_CTX,
+	.arg2_type      = ARG_PTR_TO_MEM_OR_NULL,
+	.arg3_type      = ARG_CONST_SIZE_OR_ZERO,
+	.arg4_type      = ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1759,6 +1806,8 @@  pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_read_branch_records_proto;
 	case BPF_FUNC_get_attach_cookie:
 		return &bpf_get_attach_cookie_proto_pe;
+	case BPF_FUNC_perf_event_read_sample:
+		return &bpf_perf_event_read_sample_proto;
 	default:
 		return bpf_tracing_func_proto(func_id, prog);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 94659f6b3395..cba501de9373 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5481,6 +5481,28 @@  union bpf_attr {
  *		0 on success.
  *
  *		**-ENOENT** if the bpf_local_storage cannot be found.
+ *
+ * long bpf_perf_event_read_sample(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 sample_flags)
+ *	Description
+ *		For an eBPF program attached to a perf event, retrieve the
+ *		sample data associated with *ctx* and store it in the buffer
+ *		pointed to by *buf*, up to *size* bytes.
+ *
+ *		The *sample_flags* must contain exactly one value from
+ *		**enum perf_event_sample_format**.
+ *	Return
+ *		On success, the number of bytes written to *buf*. On error,
+ *		a negative value.
+ *
+ *		*buf* can be set to **NULL** to return the number of bytes
+ *		required to store the requested sample data.
+ *
+ *		**-EINVAL** if *sample_flags* is not a single PERF_SAMPLE_* flag.
+ *
+ *		**-ENOENT** if the associated perf event doesn't have the data.
+ *
+ *		**-ENOSYS** if the kernel doesn't support retrieving the
+ *		requested sample data.
  */
 #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
 	FN(unspec, 0, ##ctx)				\
@@ -5695,6 +5717,7 @@  union bpf_attr {
 	FN(user_ringbuf_drain, 209, ##ctx)		\
 	FN(cgrp_storage_get, 210, ##ctx)		\
 	FN(cgrp_storage_delete, 211, ##ctx)		\
+	FN(perf_event_read_sample, 212, ##ctx)		\
 	/* */
 
 /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't