[v15,00/19] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph

Message ID 172639136989.366111.11359590127009702129.stgit@devnote2

Message

Masami Hiramatsu (Google) Sept. 15, 2024, 9:09 a.m. UTC
Hi,

Here is the 15th version of the series to re-implement the fprobe on
function-graph tracer. The previous version is;

https://lore.kernel.org/all/172615368656.133222.2336770908714920670.stgit@devnote2/

This version is rebased on Steve's calltime change[1] instead of the
last patch in the previous series, and adds a bpf patch to add
get_entry_ip() for arm64. Note that [1] is not included in this
series, so please use the git branch[2].

[1] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph

Overview
--------
This series rewrites fprobe on top of the function-graph tracer.
The purposes of this change are:

 1) Remove fprobe's dependency on rethook so that we can reduce the
   return hook code and the shadow stacks.

 2) Make 'ftrace_regs' the common trace interface for the function
   boundary.

1) Currently we have 2 (or 3) different function return hook
 implementations: the function-graph tracer and rethook (and the
 legacy kretprobe). Since this is redundant and doubles the
 maintenance cost, I would like to unify them. From the user's
 viewpoint, the function-graph tracer is very useful for grasping the
 execution path. For this purpose, it is hard for the function-graph
 tracer to use rethook, but the opposite is possible. (Strictly
 speaking, kretprobe can not use it because it requires 'pt_regs' for
 historical reasons.)

2) Currently fprobe provides 'pt_regs' to its handlers, but that is
 wrong for function entry and exit. Moreover, depending on the
 architecture, there is no way to accurately reproduce 'pt_regs'
 outside of interrupt or exception handlers. This means fprobe should
 not use 'pt_regs', because it does not use such exceptions.
 (Conversely, kprobe should use 'pt_regs' because it is an abstract
  interface for the software breakpoint exception.)

This series changes fprobe to use the function-graph tracer for
tracing function entry and exit, instead of a mixture of ftrace and
rethook. Unlike rethook, which is a per-task list of system-wide
allocated nodes, the function graph's ret_stack is a per-task shadow
stack, so there is no need to set 'nr_maxactive' (the number of
pre-allocated nodes).
Also, the handlers will get 'ftrace_regs' instead of 'pt_regs'.
Since eBPF multi_kprobe/multi_kretprobe events still use 'pt_regs' as
their register interface, this series converts 'ftrace_regs' to
'pt_regs' for them. Of course this conversion produces an incomplete
'pt_regs', so users must access only the registers used for function
parameters or the return value.
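
As a rough illustration only (not part of this series): with the new
interface, an fprobe user could look like the sketch below. The
handler signatures and 'ftrace_regs' accessors follow the updated
fprobe documentation in this series; the handler names and the probed
symbol are made up.

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

/* Entry handler: only argument registers are reliable in 'fregs'. */
static int my_entry(struct fprobe *fp, unsigned long ip,
		    unsigned long ret_ip, struct ftrace_regs *fregs,
		    void *data)
{
	unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);

	pr_info("entry: %pS arg0=%lx\n", (void *)ip, arg0);
	return 0;	/* 0: also run the exit handler for this call */
}

/* Exit handler: the return value register is reliable in 'fregs'. */
static void my_exit(struct fprobe *fp, unsigned long ip,
		    unsigned long ret_ip, struct ftrace_regs *fregs,
		    void *data)
{
	pr_info("exit: %pS ret=%lx\n", (void *)ip,
		ftrace_regs_get_return_value(fregs));
}

static struct fprobe my_fprobe = {
	.entry_handler	= my_entry,
	.exit_handler	= my_exit,
};

/* e.g. register_fprobe(&my_fprobe, "kernel_clone", NULL); */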

Design
------
Instead of using ftrace's function entry hook directly, the new fprobe
is built on top of the function-graph's entry and return callbacks
with 'ftrace_regs'.

Since fprobe requires access to 'ftrace_regs', the architecture must
support CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS and
CONFIG_HAVE_FTRACE_GRAPH_FUNC, which enable calling the function-graph
entry callback with 'ftrace_regs', and also
CONFIG_HAVE_FUNCTION_GRAPH_FREGS, which passes 'ftrace_regs' to
return_to_handler.

All fprobes share a single function-graph ops (meaning they share a
common ftrace filter), similar to kprobe-on-ftrace. This needs another
layer to find the corresponding fprobe in the common function-graph
callbacks, but it scales much better, since the number of registered
function-graph ops is limited.

In the entry callback, the fprobe runs its entry_handler and saves the
address of 'fprobe' on the function-graph's shadow stack as data. The
return callback decodes the data to get the 'fprobe' address, and runs
the exit_handler.
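
Roughly, the shared callbacks behave like the following simplified
sketch. It assumes the fgraph_reserve_data()/fgraph_retrieve_data()
shadow-stack helpers from the function-graph rework;
find_fprobe_by_addr() and fprobe_still_valid() are hypothetical
stand-ins for the hash table lookups described in the next paragraph,
and ret_ip handling is omitted.

/* Heavily simplified; the real code in kernel/trace/fprobe.c also
 * manages per-probe entry_data on the shadow stack.
 */
static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace,
			       struct fgraph_ops *gops,
			       struct ftrace_regs *fregs)
{
	struct fprobe *fp = find_fprobe_by_addr(trace->func); /* hypothetical */
	unsigned long *data;

	if (!fp)
		return 0;		/* not ours: do not hook the return */

	if (fp->entry_handler)
		fp->entry_handler(fp, trace->func, 0 /* ret_ip elided */,
				  fregs, NULL);

	/* Reserve per-call storage on the shadow stack and stash 'fp'. */
	data = fgraph_reserve_data(gops->idx, sizeof(*data));
	if (data)
		*data = (unsigned long)fp;
	return 1;			/* hook the function return */
}

static void fprobe_fgraph_return(struct ftrace_graph_ret *trace,
				 struct fgraph_ops *gops,
				 struct ftrace_regs *fregs)
{
	unsigned long *data;
	struct fprobe *fp;
	int size;

	data = fgraph_retrieve_data(gops->idx, &size);
	if (!data)
		return;
	fp = (struct fprobe *)*data;

	/* The fprobe may have been unregistered meanwhile: validate first. */
	if (fprobe_still_valid(fp) && fp->exit_handler)	/* hypothetical */
		fp->exit_handler(fp, trace->func, 0 /* ret_ip elided */,
				 fregs, NULL);
}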

The fprobe layer introduces two hash tables: one for the entry
callback, which looks up the fprobes registered on the function
address it is passed, and the other for the return callback, which
checks whether the given 'fprobe' data structure pointer is still
valid. Note that an fprobe can be unregistered before its return
callback runs, so the pointer must be validated before it is used in
the return callback.
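
For illustration, the two lookups used in the sketch above could be
imagined as follows. The node layouts, table sizes, and names are
hypothetical; the real tables and their RCU rules live in
kernel/trace/fprobe.c.

#include <linux/fprobe.h>
#include <linux/hashtable.h>

struct fprobe_addr_node {	/* entry table: function addr -> fprobe */
	struct hlist_node hlist;
	unsigned long addr;
	struct fprobe *fp;
};

struct fprobe_valid_node {	/* return table: currently registered fprobes */
	struct hlist_node hlist;
	struct fprobe *fp;
};

static DEFINE_READ_MOSTLY_HASHTABLE(fprobe_ip_table, 8);
static DEFINE_READ_MOSTLY_HASHTABLE(fprobe_table, 8);

/* Entry side: which fprobe (if any) is registered on this address? */
static struct fprobe *find_fprobe_by_addr(unsigned long addr)
{
	struct fprobe_addr_node *n;

	hash_for_each_possible_rcu(fprobe_ip_table, n, hlist, addr)
		if (n->addr == addr)
			return n->fp;
	return NULL;
}

/* Return side: is the fprobe saved at entry time still registered? */
static bool fprobe_still_valid(struct fprobe *fp)
{
	struct fprobe_valid_node *n;

	hash_for_each_possible_rcu(fprobe_table, n, hlist, (unsigned long)fp)
		if (n->fp == fp)
			return true;
	return false;
}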

Download
--------
This series can be applied against the ftrace/for-next branch in the
linux-trace tree.

This series can also be found in the branch below.

https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph

Thank you,

---

Masami Hiramatsu (Google) (19):
      tracing: Add a comment about ftrace_regs definition
      tracing: Rename ftrace_regs_return_value to ftrace_regs_get_return_value
      function_graph: Pass ftrace_regs to entryfunc
      function_graph: Replace fgraph_ret_regs with ftrace_regs
      function_graph: Pass ftrace_regs to retfunc
      fprobe: Use ftrace_regs in fprobe entry handler
      fprobe: Use ftrace_regs in fprobe exit handler
      tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
      tracing: Add ftrace_fill_perf_regs() for perf event
      tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
      bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
      ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
      fprobe: Rewrite fprobe on function-graph tracer
      tracing: Fix function timing profiler to initialize hashtable
      tracing/fprobe: Remove nr_maxactive from fprobe
      selftests: ftrace: Remove obsolete maxactive syntax check
      selftests/ftrace: Add a test case for repeating register/unregister fprobe
      Documentation: probes: Update fprobe on function-graph tracer
      bpf: Add get_entry_ip() for arm64


 Documentation/trace/fprobe.rst                     |   42 +
 arch/arm64/Kconfig                                 |    2 
 arch/arm64/include/asm/ftrace.h                    |   47 +
 arch/arm64/kernel/asm-offsets.c                    |   12 
 arch/arm64/kernel/entry-ftrace.S                   |   32 +
 arch/arm64/kernel/ftrace.c                         |   20 +
 arch/loongarch/Kconfig                             |    4 
 arch/loongarch/include/asm/ftrace.h                |   32 -
 arch/loongarch/kernel/asm-offsets.c                |   12 
 arch/loongarch/kernel/ftrace_dyn.c                 |   10 
 arch/loongarch/kernel/mcount.S                     |   17 -
 arch/loongarch/kernel/mcount_dyn.S                 |   14 
 arch/powerpc/Kconfig                               |    1 
 arch/powerpc/include/asm/ftrace.h                  |   15 
 arch/powerpc/kernel/trace/ftrace.c                 |    2 
 arch/powerpc/kernel/trace/ftrace_64_pg.c           |   10 
 arch/riscv/Kconfig                                 |    3 
 arch/riscv/include/asm/ftrace.h                    |   43 +
 arch/riscv/kernel/ftrace.c                         |   17 +
 arch/riscv/kernel/mcount.S                         |   24 -
 arch/s390/Kconfig                                  |    3 
 arch/s390/include/asm/ftrace.h                     |   39 +
 arch/s390/kernel/asm-offsets.c                     |    6 
 arch/s390/kernel/mcount.S                          |    9 
 arch/x86/Kconfig                                   |    4 
 arch/x86/include/asm/ftrace.h                      |   37 +
 arch/x86/kernel/ftrace.c                           |   50 +-
 arch/x86/kernel/ftrace_32.S                        |   15 
 arch/x86/kernel/ftrace_64.S                        |   17 -
 include/linux/fprobe.h                             |   57 +-
 include/linux/ftrace.h                             |  134 ++++
 kernel/trace/Kconfig                               |   23 +
 kernel/trace/bpf_trace.c                           |   83 ++-
 kernel/trace/fgraph.c                              |   60 +-
 kernel/trace/fprobe.c                              |  637 ++++++++++++++------
 kernel/trace/ftrace.c                              |   10 
 kernel/trace/trace.h                               |    6 
 kernel/trace/trace_fprobe.c                        |  147 ++---
 kernel/trace/trace_functions_graph.c               |   10 
 kernel/trace/trace_irqsoff.c                       |    6 
 kernel/trace/trace_probe_tmpl.h                    |    2 
 kernel/trace/trace_sched_wakeup.c                  |    6 
 kernel/trace/trace_selftest.c                      |   11 
 lib/test_fprobe.c                                  |   51 --
 samples/fprobe/fprobe_example.c                    |    4 
 .../test.d/dynevent/add_remove_fprobe_repeat.tc    |   19 +
 .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc |    4 
 47 files changed, 1195 insertions(+), 614 deletions(-)
 create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>

Comments

Andrii Nakryiko Sept. 18, 2024, 9:22 p.m. UTC | #1
On Sun, Sep 15, 2024 at 11:09 AM Masami Hiramatsu (Google)
<mhiramat@kernel.org> wrote:
>
> Hi,
>
> Here is the 15th version of the series to re-implement the fprobe on
> function-graph tracer. The previous version is;
>
> https://lore.kernel.org/all/172615368656.133222.2336770908714920670.stgit@devnote2/
>
> This version rebased on Steve's calltime change[1] instead of the last
> patch in the previous series, and adds a bpf patch to add get_entry_ip()
> for arm64. Note that [1] is not included in this series, so please use
> the git branch[2].
>

With LPC and Kernel Recipes back-to-back I won't have time to look
through the code, but I did manage to run some benchmarks tonight, and
they look pretty good now, thanks! Seems like the kprobe regression is
gone, and kretprobes are a bit faster. So, nice work, thanks!

BEFORE
======
kprobe         :   25.052 ± 0.032M/s
kprobe-multi   :   28.102 ± 0.167M/s
kretprobe      :   10.724 ± 0.008M/s
kretprobe-multi:   11.337 ± 0.054M/s

AFTER
=====
kprobe         :   25.206 ± 0.026M/s
kprobe-multi   :   30.167 ± 0.148M/s
kretprobe      :   10.714 ± 0.016M/s
kretprobe-multi:   13.436 ± 0.328M/s

> [1] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
> [2] https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph
>

[...]
Masami Hiramatsu (Google) Sept. 20, 2024, 11:26 a.m. UTC | #2
On Wed, 18 Sep 2024 23:22:41 +0200
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> On Sun, Sep 15, 2024 at 11:09 AM Masami Hiramatsu (Google)
> <mhiramat@kernel.org> wrote:
> >
> > Hi,
> >
> > Here is the 15th version of the series to re-implement the fprobe on
> > function-graph tracer. The previous version is;
> >
> > https://lore.kernel.org/all/172615368656.133222.2336770908714920670.stgit@devnote2/
> >
> > This version rebased on Steve's calltime change[1] instead of the last
> > patch in the previous series, and adds a bpf patch to add get_entry_ip()
> > for arm64. Note that [1] is not included in this series, so please use
> > the git branch[2].
> >
> 
> With LPC and Kernel Recipes back-to-back I won't have time to look
> through the code, but I did manage to run some benchmarks tonight, and
> they look pretty good now, thanks! Seems like the kprobe regression is
> gone, and kretprobes are a bit faster. So, nice work, thanks!
> 
> BEFORE
> ======
> kprobe         :   25.052 ± 0.032M/s
> kprobe-multi   :   28.102 ± 0.167M/s
> kretprobe      :   10.724 ± 0.008M/s
> kretprobe-multi:   11.337 ± 0.054M/s
> 
> AFTER
> =====
> kprobe         :   25.206 ± 0.026M/s
> kprobe-multi   :   30.167 ± 0.148M/s
> kretprobe      :   10.714 ± 0.016M/s
> kretprobe-multi:   13.436 ± 0.328M/s

Thanks for re-evaluating the series!
These results look nice.

Thank you,

> 
> > [1] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
> > [2] https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph
> >
> 
> [...]