| Message ID | 20230828075537.194192-4-jolsa@kernel.org (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | BPF |
| Series | bpf: Add missed stats for kprobes |
On 8/28/2023 3:55 PM, Jiri Olsa wrote:
> Adding support to gather stats for kprobe_multi programs.
>
> We now count:
>   - missed stats due to bpf_prog_active protection (always)
>   - cnt/nsec of the bpf program execution (if kernel.bpf_stats_enabled=1)
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>

Acked-by: Hou Tao <houtao1@huawei.com>

With one nit below.

> ---
>  kernel/trace/bpf_trace.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index a7264b2c17ad..0a8685fc1eee 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2706,18 +2706,24 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>  		.link = link,
>  		.entry_ip = entry_ip,
>  	};
> +	struct bpf_prog *prog = link->link.prog;
>  	struct bpf_run_ctx *old_run_ctx;
> +	u64 start;
>  	int err;
>
>  	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
> +		bpf_prog_inc_misses_counter(prog);
>  		err = 0;
>  		goto out;
>  	}
>
> +

The extra empty line is not needed here.
Hi,

On 8/28/2023 3:55 PM, Jiri Olsa wrote:
> Adding support to gather stats for kprobe_multi programs.
>
> We now count:
>   - missed stats due to bpf_prog_active protection (always)
>   - cnt/nsec of the bpf program execution (if kernel.bpf_stats_enabled=1)
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/trace/bpf_trace.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index a7264b2c17ad..0a8685fc1eee 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2706,18 +2706,24 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>  		.link = link,
>  		.entry_ip = entry_ip,
>  	};
> +	struct bpf_prog *prog = link->link.prog;
>  	struct bpf_run_ctx *old_run_ctx;
> +	u64 start;
>  	int err;
>
>  	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
> +		bpf_prog_inc_misses_counter(prog);
>  		err = 0;
>  		goto out;
>  	}
>
> +
>  	migrate_disable();
>  	rcu_read_lock();
>  	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
> -	err = bpf_prog_run(link->link.prog, regs);
> +	start = bpf_prog_start_time();
> +	err = bpf_prog_run(prog, regs);
> +	bpf_prog_update_prog_stats(prog, start);

Oops, I missed the bpf_prog_run() here. It seems that bpf_prog_run() has
already done the accounting thing, so there is no need for double
accounting.

>  	bpf_reset_run_ctx(old_run_ctx);
>  	rcu_read_unlock();
>  	migrate_enable();
On Tue, Sep 05, 2023 at 02:15:49PM +0800, Hou Tao wrote:
> Hi,
>
> On 8/28/2023 3:55 PM, Jiri Olsa wrote:
> > Adding support to gather stats for kprobe_multi programs.
> >
> > We now count:
> >   - missed stats due to bpf_prog_active protection (always)
> >   - cnt/nsec of the bpf program execution (if kernel.bpf_stats_enabled=1)
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  kernel/trace/bpf_trace.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index a7264b2c17ad..0a8685fc1eee 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -2706,18 +2706,24 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> >  		.link = link,
> >  		.entry_ip = entry_ip,
> >  	};
> > +	struct bpf_prog *prog = link->link.prog;
> >  	struct bpf_run_ctx *old_run_ctx;
> > +	u64 start;
> >  	int err;
> >
> >  	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
> > +		bpf_prog_inc_misses_counter(prog);
> >  		err = 0;
> >  		goto out;
> >  	}
> >
> > +
> >  	migrate_disable();
> >  	rcu_read_lock();
> >  	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
> > -	err = bpf_prog_run(link->link.prog, regs);
> > +	start = bpf_prog_start_time();
> > +	err = bpf_prog_run(prog, regs);
> > +	bpf_prog_update_prog_stats(prog, start);
>
> Oops, I missed the bpf_prog_run() here. It seems that bpf_prog_run() has
> already done the accounting thing, so there is no need for double
> accounting.

right, same as the other change, thanks

jirka

> >  	bpf_reset_run_ctx(old_run_ctx);
> >  	rcu_read_unlock();
> >  	migrate_enable();
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index a7264b2c17ad..0a8685fc1eee 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2706,18 +2706,24 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
 		.link = link,
 		.entry_ip = entry_ip,
 	};
+	struct bpf_prog *prog = link->link.prog;
 	struct bpf_run_ctx *old_run_ctx;
+	u64 start;
 	int err;

 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
+		bpf_prog_inc_misses_counter(prog);
 		err = 0;
 		goto out;
 	}

+
 	migrate_disable();
 	rcu_read_lock();
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
-	err = bpf_prog_run(link->link.prog, regs);
+	start = bpf_prog_start_time();
+	err = bpf_prog_run(prog, regs);
+	bpf_prog_update_prog_stats(prog, start);
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
 	migrate_enable();
Adding support to gather stats for kprobe_multi programs.

We now count:
  - missed stats due to bpf_prog_active protection (always)
  - cnt/nsec of the bpf program execution (if kernel.bpf_stats_enabled=1)

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)