
[bpf-next,1/2] bpf, test_run: Add PROG_TEST_RUN support to kprobe

Message ID b544771c7bce102f2a97a34e2f5e7ebbb9ea0a24.1653861287.git.dxu@dxuuu.xyz (mailing list archive)
State Changes Requested
Delegated to: BPF
Series Add PROG_TEST_RUN support to BPF_PROG_TYPE_KPROBE

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1444 this patch: 1444
netdev/cc_maintainers warning 12 maintainers not CCed: kafai@fb.com netdev@vger.kernel.org rostedt@goodmis.org songliubraving@fb.com mingo@redhat.com pabeni@redhat.com yhs@fb.com edumazet@google.com davem@davemloft.net john.fastabend@gmail.com kuba@kernel.org kpsingh@kernel.org
netdev/build_clang success Errors and warnings before: 177 this patch: 177
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1451 this patch: 1451
netdev/checkpatch warning WARNING: ENOTSUPP is not a SUSV4 error code, prefer EOPNOTSUPP
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Kernel LATEST on ubuntu-latest with llvm-15
bpf/vmtest-bpf-next-VM_Test-1 success Logs for Kernel LATEST on ubuntu-latest with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Kernel LATEST on z15 with gcc

Commit Message

Daniel Xu May 29, 2022, 10:06 p.m. UTC
This commit adds PROG_TEST_RUN support to BPF_PROG_TYPE_KPROBE progs. On
top of being generally useful for unit testing kprobe progs, this commit
more specifically helps solve a reliability problem with bpftrace BEGIN
and END probes.

BEGIN and END probes are run exactly once at the beginning and end of a
bpftrace tracing session, respectively. bpftrace currently implements
the probes by creating two dummy functions and attaching the BEGIN and
END progs, if defined, to those functions and calling the dummy
functions as appropriate. This works pretty well most of the time, except
when distros strip symbols from the bpftrace binary. Every now and then this
happens and users get confused. Having PROG_TEST_RUN support will help
solve this issue by allowing us to directly trigger uprobes from
userspace.

Admittedly, this is a pretty specific problem and could probably be
solved other ways. However, PROG_TEST_RUN also makes unit testing more
convenient, especially as users start building more complex tracing
applications. So I see this as killing two birds with one stone.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
---
 include/linux/bpf.h      | 10 ++++++++++
 kernel/trace/bpf_trace.c |  1 +
 net/bpf/test_run.c       | 36 ++++++++++++++++++++++++++++++++++++
 3 files changed, 47 insertions(+)
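
For illustration, a minimal userspace sketch of how a tracer could drive the
new test_run hook through libbpf's bpf_prog_test_run_opts(). It assumes
x86-64, where the uapi struct pt_regs from <asm/ptrace.h> matches the layout
the kernel hook expects, and the RDI=1/RSI=2 values are only stand-ins for
whatever a test wants the prog to observe:

#include <string.h>
#include <asm/ptrace.h>		/* x86-64 uapi struct pt_regs */
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Run an already-loaded BPF_PROG_TYPE_KPROBE prog once with a fabricated
 * register context. */
static int run_kprobe_prog_once(int prog_fd)
{
	struct pt_regs regs;
	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.ctx_in = &regs,
		.ctx_size_in = sizeof(regs),	/* must equal sizeof(struct pt_regs) */
	);

	memset(&regs, 0, sizeof(regs));
	regs.rdi = 1;	/* fake first "argument" the prog will see */
	regs.rsi = 2;	/* fake second "argument" */

	return bpf_prog_test_run_opts(prog_fd, &opts);
	/* on success, opts.retval and opts.duration are filled in */
}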

Comments

kernel test robot May 31, 2022, 3:12 p.m. UTC | #1
Hi Daniel,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Daniel-Xu/Add-PROG_TEST_RUN-support-to-BPF_PROG_TYPE_KPROBE/20220530-060742
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: s390-randconfig-r026-20220531 (https://download.01.org/0day-ci/archive/20220531/202205312315.3VC5jz4T-lkp@intel.com/config)
compiler: s390-linux-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/a547a4c795103fd002d3bbb5ee4d7141113716c0
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Daniel-Xu/Add-PROG_TEST_RUN-support-to-BPF_PROG_TYPE_KPROBE/20220530-060742
        git checkout a547a4c795103fd002d3bbb5ee4d7141113716c0
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> s390-linux-ld: kernel/trace/bpf_trace.o:(.data.rel.ro+0x100): undefined reference to `bpf_prog_test_run_kprobe'
   s390-linux-ld: drivers/dma/qcom/hidma.o: in function `hidma_probe':
   hidma.c:(.text+0x313e): undefined reference to `devm_ioremap_resource'
   s390-linux-ld: hidma.c:(.text+0x3192): undefined reference to `devm_ioremap_resource'
Song Liu May 31, 2022, 5:04 p.m. UTC | #2
On Sun, May 29, 2022 at 3:06 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
>
> This commit adds PROG_TEST_RUN support to BPF_PROG_TYPE_KPROBE progs. On
> top of being generally useful for unit testing kprobe progs, this commit
> more specifically helps solve a reliability problem with bpftrace BEGIN
> and END probes.
>
> BEGIN and END probes are run exactly once at the beginning and end of a
> bpftrace tracing session, respectively. bpftrace currently implements
> the probes by creating two dummy functions and attaching the BEGIN and
> END progs, if defined, to those functions and calling the dummy
> functions as appropriate. This works pretty well most of the time except
> for when distros strip symbols from bpftrace. Every now and then this
> happens and users get confused. Having PROG_TEST_RUN support will help
> solve this issue by allowing us to directly trigger uprobes from
> userspace.
>
> Admittedly, this is a pretty specific problem and could probably be
> solved other ways. However, PROG_TEST_RUN also makes unit testing more
> convenient, especially as users start building more complex tracing
> applications. So I see this as killing two birds with one stone.
>
> Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
> ---
>  include/linux/bpf.h      | 10 ++++++++++
>  kernel/trace/bpf_trace.c |  1 +
>  net/bpf/test_run.c       | 36 ++++++++++++++++++++++++++++++++++++
>  3 files changed, 47 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 2b914a56a2c5..dec3082ee158 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1751,6 +1751,9 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
>  int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
>                                 const union bpf_attr *kattr,
>                                 union bpf_attr __user *uattr);
> +int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
> +                            const union bpf_attr *kattr,
> +                            union bpf_attr __user *uattr);
>  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>                     const struct bpf_prog *prog,
>                     struct bpf_insn_access_aux *info);
> @@ -1998,6 +2001,13 @@ static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
>         return -ENOTSUPP;
>  }
>
> +static inline int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
> +                                          const union bpf_attr *kattr,
> +                                          union bpf_attr __user *uattr)
> +{
> +       return -ENOTSUPP;
> +}

As the kernel test bot reported, this is not enough to cover all
different configs. We can
follow the pattern with bpf_prog_test_run_tracing().

Otherwise, this looks good to me.

Song
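
As a concrete (hedged) sketch of that kind of fix: the undefined reference
shows up when net/bpf/test_run.c is not built (e.g. CONFIG_NET=n), so the
reference to the symbol in kernel/trace/bpf_trace.c could itself be guarded,
similar to how raw_tracepoint_prog_ops guards its .test_run callback.
Whether that is exactly the bpf_prog_test_run_tracing() arrangement meant
here is an assumption:

const struct bpf_prog_ops kprobe_prog_ops = {
#ifdef CONFIG_NET
	/* only reference the helper when net/bpf/test_run.c is compiled in;
	 * the prog_test_run path treats a NULL .test_run as unsupported */
	.test_run = bpf_prog_test_run_kprobe,
#endif
};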

> +
>  static inline void bpf_map_put(struct bpf_map *map)
>  {
>  }
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 10b157a6d73e..b452e84b9ba4 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1363,6 +1363,7 @@ const struct bpf_verifier_ops kprobe_verifier_ops = {
>  };
>
>  const struct bpf_prog_ops kprobe_prog_ops = {
> +       .test_run = bpf_prog_test_run_kprobe,
>  };
>
>  BPF_CALL_5(bpf_perf_event_output_tp, void *, tp_buff, struct bpf_map *, map,
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 56f059b3c242..0b6fc17ce901 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -1622,6 +1622,42 @@ int bpf_prog_test_run_syscall(struct bpf_prog *prog,
>         return err;
>  }
>
> +int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
> +                            const union bpf_attr *kattr,
> +                            union bpf_attr __user *uattr)
> +{
> +       void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> +       __u32 ctx_size_in = kattr->test.ctx_size_in;
> +       u32 repeat = kattr->test.repeat;
> +       struct pt_regs *ctx = NULL;
> +       u32 retval, duration;
> +       int err = 0;
> +
> +       if (kattr->test.data_in || kattr->test.data_out ||
> +           kattr->test.ctx_out || kattr->test.flags ||
> +           kattr->test.cpu || kattr->test.batch_size)
> +               return -EINVAL;
> +
> +       if (ctx_size_in != sizeof(struct pt_regs))
> +               return -EINVAL;
> +
> +       ctx = memdup_user(ctx_in, ctx_size_in);
> +       if (IS_ERR(ctx))
> +               return PTR_ERR(ctx);
> +
> +       err = bpf_test_run(prog, ctx, repeat, &retval, &duration, false);
> +       if (err)
> +               goto out;
> +
> +       if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval)) ||
> +           copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
> +               err = -EFAULT;
> +       }
> +out:
> +       kfree(ctx);
> +       return err;
> +}
> +
>  static const struct btf_kfunc_id_set bpf_prog_test_kfunc_set = {
>         .owner        = THIS_MODULE,
>         .check_set        = &test_sk_check_kfunc_ids,
> --
> 2.36.1
>
Alexei Starovoitov May 31, 2022, 6:07 p.m. UTC | #3
On Sun, May 29, 2022 at 3:06 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
>
> This commit adds PROG_TEST_RUN support to BPF_PROG_TYPE_KPROBE progs. On
> top of being generally useful for unit testing kprobe progs, this commit
> more specifically helps solve a reliability problem with bpftrace BEGIN
> and END probes.
>
> BEGIN and END probes are run exactly once at the beginning and end of a
> bpftrace tracing session, respectively. bpftrace currently implements
> the probes by creating two dummy functions and attaching the BEGIN and
> END progs, if defined, to those functions and calling the dummy
> functions as appropriate. This works pretty well most of the time except
> for when distros strip symbols from bpftrace. Every now and then this
> happens and users get confused. Having PROG_TEST_RUN support will help
> solve this issue by allowing us to directly trigger uprobes from
> userspace.
>
> Admittedly, this is a pretty specific problem and could probably be
> solved other ways. However, PROG_TEST_RUN also makes unit testing more
> convenient, especially as users start building more complex tracing
> applications. So I see this as killing two birds with one stone.

The bpftrace approach of uprobe-ing into BEGIN_trigger can
be solved with a raw_tp prog.
bpftrace's "BEGIN" program doesn't have to be a uprobe.
It can be a raw_tp, and the prog_test_run command is
already implemented for raw_tp.

kprobe prog has pt_regs as arguments,
raw_tp has tracepoint args.
Both progs expect kernel addresses in args.
Passing 'struct pt_regs' filled with integers and non-kernel addresses
won't help to unit test bpf-kprobe programs.
There is little use in creating a dummy kprobe-bpf prog
that expects RDI to be 1 and RSI to be 2
(like selftest from patch 2 does) and running it.
We already have raw_tp with similar args and such
progs can be executed already.
Whether SEC() part says kprobe/ or raw_tp/ doesn't change
much in the prog itself.
More so raw_tp prog will work on all architectures,
whereas kprobe's pt_regs are arch specific.
So even if kprobe progs were runnable, bpftrace
should probably be using raw_tp to be arch independent.
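
For reference, a hedged sketch of the raw_tp alternative: an unattached
BPF_PROG_TYPE_RAW_TRACEPOINT "BEGIN" program that reads no ctx arguments can
already be triggered with the existing prog_test_run support (program and
function names here are made up for illustration):

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* begin_prog: a loaded raw_tp program that is never attached anywhere;
 * it only ever runs when userspace asks for it via BPF_PROG_TEST_RUN. */
static int trigger_begin(struct bpf_program *begin_prog)
{
	LIBBPF_OPTS(bpf_test_run_opts, opts);	/* no data_in, no ctx args */

	return bpf_prog_test_run_opts(bpf_program__fd(begin_prog), &opts);
}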
kernel test robot June 1, 2022, 10:36 a.m. UTC | #4
Hi Daniel,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Daniel-Xu/Add-PROG_TEST_RUN-support-to-BPF_PROG_TYPE_KPROBE/20220530-060742
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: s390-randconfig-r002-20220531 (https://download.01.org/0day-ci/archive/20220601/202206011814.9lsKlbB0-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project c825abd6b0198fb088d9752f556a70705bc99dfd)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install s390 cross compiling tool for clang build
        # apt-get install binutils-s390x-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/a547a4c795103fd002d3bbb5ee4d7141113716c0
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Daniel-Xu/Add-PROG_TEST_RUN-support-to-BPF_PROG_TYPE_KPROBE/20220530-060742
        git checkout a547a4c795103fd002d3bbb5ee4d7141113716c0
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> /opt/cross/gcc-11.3.0-nolibc/s390x-linux/bin/s390x-linux-ld: kernel/trace/bpf_trace.o:kernel/trace/bpf_trace.c:1365: undefined reference to `bpf_prog_test_run_kprobe'
Daniel Xu June 2, 2022, 2:37 p.m. UTC | #5
On Tue, May 31, 2022 at 11:07:31AM -0700, Alexei Starovoitov wrote:
> On Sun, May 29, 2022 at 3:06 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
> >
> > This commit adds PROG_TEST_RUN support to BPF_PROG_TYPE_KPROBE progs. On
> > top of being generally useful for unit testing kprobe progs, this commit
> > more specifically helps solve a reliability problem with bpftrace BEGIN
> > and END probes.
> >
> > BEGIN and END probes are run exactly once at the beginning and end of a
> > bpftrace tracing session, respectively. bpftrace currently implements
> > the probes by creating two dummy functions and attaching the BEGIN and
> > END progs, if defined, to those functions and calling the dummy
> > functions as appropriate. This works pretty well most of the time except
> > for when distros strip symbols from bpftrace. Every now and then this
> > happens and users get confused. Having PROG_TEST_RUN support will help
> > solve this issue by allowing us to directly trigger uprobes from
> > userspace.
> >
> > Admittedly, this is a pretty specific problem and could probably be
> > solved other ways. However, PROG_TEST_RUN also makes unit testing more
> > convenient, especially as users start building more complex tracing
> > applications. So I see this as killing two birds with one stone.
> 
> bpftrace approach of uprobe-ing into BEGIN_trigger can
> be solved with raw_tp prog.
> "BEGIN" bpftrace's program doesn't have to be uprobe.
> It can be raw_tp and prog_test_run command is
> already implemented for raw_tp.
> 
> kprobe prog has pt_regs as arguments,
> raw_tp has tracepoint args.
> Both progs expect kernel addresses in args.
> Passing 'struct pt_regs' filled with integers and non-kernel addresses
> won't help to unit test bpf-kprobe programs.
> There is little use in creating a dummy kprobe-bpf prog
> that expects RDI to be 1 and RSI to be 2
> (like selftest from patch 2 does) and running it.

Yeah that's a good point about the kprobe case. But AFAICT uprobes are
implemented using a kprobe prog in which case it would be both possible
and useful to insert userspace args and userspace addrspace addrs.

> We already have raw_tp with similar args and such
> progs can be executed already.
> Whether SEC() part says kprobe/ or raw_tp/ doesn't change
> much in the prog itself.

I suppose so, and I guess you could always bpf_program__set_type(..).

> More so raw_tp prog will work on all architectures,
> whereas kprobe's pt_regs are arch specific.
> So even if kprobe progs were runnable, bpftrace
> should probably be using raw_tp to be arch independent.

bpftrace has all the infra to pull arbitrary structs out of vmlinux BTF
already. It should be fairly simple to get the arch-specific struct
pt_regs size and construct a buffer of all 0s. And fall back to old
logic (that'll be necessary for a while) if kprobe BPF_PROG_RUN or
vmlinux BTF is missing.
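
A hedged sketch of that approach, using libbpf's BTF API to size the context
and handing back a zeroed buffer (error handling is trimmed, and treating a
NULL return from btf__load_vmlinux_btf() as failure assumes a libbpf
1.0-style API):

#include <stdlib.h>
#include <bpf/btf.h>

/* Build a zeroed, arch-correct pt_regs-sized ctx buffer from vmlinux BTF. */
static void *alloc_zeroed_pt_regs(size_t *size_out)
{
	struct btf *vmlinux_btf = btf__load_vmlinux_btf();
	long long sz = -1;
	int id;

	if (!vmlinux_btf)
		return NULL;

	id = btf__find_by_name_kind(vmlinux_btf, "pt_regs", BTF_KIND_STRUCT);
	if (id >= 0)
		sz = btf__resolve_size(vmlinux_btf, id);
	btf__free(vmlinux_btf);

	if (sz <= 0)
		return NULL;

	*size_out = sz;
	return calloc(1, sz);	/* all-zero pt_regs to pass as ctx_in */
}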

That being said, I didn't realize that when I put up the patch, so
thanks for the hint. It sounds like it's probably simpler to just use
raw_tp then.

FWIW I still think this feature is useful, but since I probably won't
use it in bpftrace I'm fine with dropping the patchset. If anyone still
wants it in I'm also fine with continuing on it.

Thanks,
Daniel

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2b914a56a2c5..dec3082ee158 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1751,6 +1751,9 @@  int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
 int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
 				const union bpf_attr *kattr,
 				union bpf_attr __user *uattr);
+int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
+			     const union bpf_attr *kattr,
+			     union bpf_attr __user *uattr);
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info);
@@ -1998,6 +2001,13 @@  static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
 	return -ENOTSUPP;
 }
 
+static inline int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
+					   const union bpf_attr *kattr,
+					   union bpf_attr __user *uattr)
+{
+	return -ENOTSUPP;
+}
+
 static inline void bpf_map_put(struct bpf_map *map)
 {
 }
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 10b157a6d73e..b452e84b9ba4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1363,6 +1363,7 @@  const struct bpf_verifier_ops kprobe_verifier_ops = {
 };
 
 const struct bpf_prog_ops kprobe_prog_ops = {
+	.test_run = bpf_prog_test_run_kprobe,
 };
 
 BPF_CALL_5(bpf_perf_event_output_tp, void *, tp_buff, struct bpf_map *, map,
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 56f059b3c242..0b6fc17ce901 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -1622,6 +1622,42 @@  int bpf_prog_test_run_syscall(struct bpf_prog *prog,
 	return err;
 }
 
+int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
+			     const union bpf_attr *kattr,
+			     union bpf_attr __user *uattr)
+{
+	void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
+	__u32 ctx_size_in = kattr->test.ctx_size_in;
+	u32 repeat = kattr->test.repeat;
+	struct pt_regs *ctx = NULL;
+	u32 retval, duration;
+	int err = 0;
+
+	if (kattr->test.data_in || kattr->test.data_out ||
+	    kattr->test.ctx_out || kattr->test.flags ||
+	    kattr->test.cpu || kattr->test.batch_size)
+		return -EINVAL;
+
+	if (ctx_size_in != sizeof(struct pt_regs))
+		return -EINVAL;
+
+	ctx = memdup_user(ctx_in, ctx_size_in);
+	if (IS_ERR(ctx))
+		return PTR_ERR(ctx);
+
+	err = bpf_test_run(prog, ctx, repeat, &retval, &duration, false);
+	if (err)
+		goto out;
+
+	if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval)) ||
+	    copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
+		err = -EFAULT;
+	}
+out:
+	kfree(ctx);
+	return err;
+}
+
 static const struct btf_kfunc_id_set bpf_prog_test_kfunc_set = {
 	.owner        = THIS_MODULE,
 	.check_set        = &test_sk_check_kfunc_ids,