[v9,00/36] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph

Message ID: 171318533841.254850.15841395205784342850.stgit@devnote2

Message

Masami Hiramatsu (Google) April 15, 2024, 12:48 p.m. UTC
Hi,

Here is the 9th version of the series to re-implement fprobe on the
function-graph tracer. The previous version is:

https://lore.kernel.org/all/170887410337.564249.6360118840946697039.stgit@devnote2/

This version is rebased onto the latest kernel (v6.9-rc3 + probes/for-next),
fixes some bugs, and adds a performance optimization patch [36/36]:
 - [12/36] Clear the fgraph_array entry on registration failure, and
           return -ENOSPC when fgraph_array is full.
 - [28/36] Add a new store_fprobe_entry_data() helper for fprobe.
 - [31/36] Remove DIV_ROUND_UP() and fix the entry data address calculation.
 - [36/36] Add a new flag to skip timestamp recording.

Overview
--------
This series makes 2 major changes: it enables multiple function-graph
users on ftrace (e.g. allows the function-graph tracer in sub-instances)
and rewrites fprobe on top of this function-graph infrastructure.

The former change was originally sent by Steven Rostedt 4 years ago (*).
It allows users to apply different function-graph tracer settings (and
other tracers based on function-graph) in each trace instance at the
same time.

(*) https://lore.kernel.org/all/20190525031633.811342628@goodmis.org/

The purposes of the latter change are:

 1) Remove fprobe's dependency on rethook so that we can reduce the
    return hook code and shadow stack usage.

 2) Make 'ftrace_regs' the common trace interface for the function
   boundary.

1) Currently we have 2 (or 3) different function return hook
 implementations: the function-graph tracer and rethook (and the
 legacy kretprobe). Since this is redundant and doubles the
 maintenance cost, I would like to unify them. From the user's
 viewpoint, the function-graph tracer is very useful for grasping the
 execution path. For this purpose, it is hard to build the
 function-graph tracer on top of rethook, but the opposite is
 possible. (Strictly speaking, kretprobe cannot use it because it
 requires 'pt_regs' for historical reasons.)

2) Currently fprobe provides 'pt_regs' to its handlers, but that is
 wrong for function entry and exit. Moreover, depending on the
 architecture, there is no way to accurately reproduce 'pt_regs'
 outside of interrupt or exception handlers. This means fprobe should
 not use 'pt_regs', because it does not use such exceptions.
 (Conversely, kprobe should use 'pt_regs', because it is an abstract
  interface to the software breakpoint exception.)

This series changes fprobe to use the function-graph tracer for tracing
function entry and exit, instead of a mixture of ftrace and rethook.
Unlike rethook, which is a per-task list of system-wide allocated
nodes, the function graph's ret_stack is a per-task shadow stack.
Thus it does not need 'nr_maxactive' (the number of pre-allocated
nodes) to be set.
Also, the handlers will get 'ftrace_regs' instead of 'pt_regs'.
Since eBPF multi_kprobe/multi_kretprobe events still use 'pt_regs' as
their register interface, this series converts 'ftrace_regs' to
'pt_regs' for them. Of course this conversion makes an incomplete
'pt_regs', so users must only access the registers used for function
parameters or the return value.
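
As an illustration, here is a minimal sketch of an fprobe exit handler
after this change. This is not code from the series; it assumes the
post-series handler signature and the ftrace_regs_get_return_value()
and ftrace_partial_regs() helpers that the series introduces/renames.

  /* Sketch only; assumes <linux/fprobe.h> after this series. */
  static void sample_exit_handler(struct fprobe *fp, unsigned long entry_ip,
                                  unsigned long ret_ip,
                                  struct ftrace_regs *fregs, void *entry_data)
  {
          struct pt_regs buf, *regs;

          /* Always safe: the return value register is saved on exit. */
          pr_info("return value: %lx\n", ftrace_regs_get_return_value(fregs));

          /*
           * Only for pt_regs-based consumers (e.g. eBPF multi_kretprobe):
           * this fills just the registers ftrace saved, so only argument
           * and return-value registers may be accessed through 'regs'.
           */
          regs = ftrace_partial_regs(fregs, &buf);
          pr_info("ip: %lx\n", instruction_pointer(regs));
  }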

Design
------
Instead of using ftrace's function entry hook directly, the new fprobe
is built on top of the function-graph's entry and return callbacks
with 'ftrace_regs'.

Since fprobe requires access to 'ftrace_regs', the architecture must
support CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS and
CONFIG_HAVE_FTRACE_GRAPH_FUNC, which enable calling the function-graph
entry callback with 'ftrace_regs', and also
CONFIG_HAVE_FUNCTION_GRAPH_FREGS, which passes 'ftrace_regs' to
return_to_handler.

All fprobes share a single function-graph ops (meaning they share a
common ftrace filter), similar to kprobe-on-ftrace. This requires
another layer to find the corresponding fprobe in the common
function-graph callbacks, but it scales much better, since the number
of registered function-graph ops is limited.

In the entry callback, the fprobe runs its entry_handler and saves the
address of 'fprobe' on the function-graph's shadow stack as data. The
return callback decodes the data to get the 'fprobe' address, and runs
the exit_handler.
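
Conceptually, the entry/return pair looks like the sketch below. This
is a hypothetical fgraph_ops user, not code from the series; it
assumes the fgraph_reserve_data()/fgraph_retrieve_data() API and the
ftrace_regs-passing callback signatures introduced here, and
fprobe_lookup()/run_fprobe_exit() are made-up helpers.

  static int sketch_entry(struct ftrace_graph_ent *trace,
                          struct fgraph_ops *gops, struct ftrace_regs *fregs)
  {
          struct fprobe **slot;

          /* Reserve per-call data on this task's shadow ret_stack. */
          slot = fgraph_reserve_data(gops->idx, sizeof(*slot));
          if (!slot)
                  return 0;       /* shadow stack exhausted: skip this call */

          *slot = fprobe_lookup(trace->func);     /* hypothetical lookup */
          return *slot ? 1 : 0;   /* only hook the return when matched */
  }

  static void sketch_return(struct ftrace_graph_ret *trace,
                            struct fgraph_ops *gops, struct ftrace_regs *fregs)
  {
          struct fprobe **slot;
          int size;

          /* Retrieve the data saved by the matching entry callback. */
          slot = fgraph_retrieve_data(gops->idx, &size);
          if (slot && *slot)
                  run_fprobe_exit(*slot, trace, fregs);   /* hypothetical */
  }

  /*
   * All fprobes would then share one such fgraph_ops, e.g.:
   *   ftrace_set_filter_ip(&shared_gops.ops, ip, 0, 0);
   *   register_ftrace_graph(&shared_gops);
   */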

The fprobe layer introduces two hash tables: one for the entry
callback, which looks up the fprobes related to the function address
passed to it, and one for the return callback, which checks whether
the given 'fprobe' data structure pointer is still valid. Note that it
is possible to unregister an fprobe before the return callback runs,
so the address must be validated before the return callback uses it.
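
The return-side validation is conceptually just a membership test
against the second table, roughly as below (a sketch only; the table
and member names are hypothetical, not the series' actual symbols).

  #include <linux/hashtable.h>

  static DEFINE_HASHTABLE(fprobe_obj_table, 8);   /* registered fprobes */

  /* Assumes a hypothetical 'hlist' node member in struct fprobe. */
  static bool fprobe_still_registered(struct fprobe *fp)
  {
          struct fprobe *found;

          hash_for_each_possible(fprobe_obj_table, found, hlist,
                                 (unsigned long)fp)
                  if (found == fp)
                          return true;    /* safe to run exit_handler */

          return false;   /* unregistered while the call was in flight */
  }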

This series can be applied against the probes/for-next branch, which
is based on v6.9-rc3.

This series can also be found in the branch below.

https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph

Thank you,

---

Masami Hiramatsu (Google) (21):
      tracing: Add a comment about ftrace_regs definition
      tracing: Rename ftrace_regs_return_value to ftrace_regs_get_return_value
      x86: tracing: Add ftrace_regs definition in the header
      function_graph: Use a simple LRU for fgraph_array index number
      ftrace: Add multiple fgraph storage selftest
      function_graph: Pass ftrace_regs to entryfunc
      function_graph: Replace fgraph_ret_regs with ftrace_regs
      function_graph: Pass ftrace_regs to retfunc
      fprobe: Use ftrace_regs in fprobe entry handler
      fprobe: Use ftrace_regs in fprobe exit handler
      tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
      tracing: Add ftrace_fill_perf_regs() for perf event
      tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
      bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
      ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
      fprobe: Rewrite fprobe on function-graph tracer
      tracing/fprobe: Remove nr_maxactive from fprobe
      selftests: ftrace: Remove obsolete maxactive syntax check
      selftests/ftrace: Add a test case for repeating register/unregister fprobe
      Documentation: probes: Update fprobe on function-graph tracer
      fgraph: Skip recording calltime/rettime if it is not needed

Steven Rostedt (VMware) (15):
      function_graph: Convert ret_stack to a series of longs
      fgraph: Use BUILD_BUG_ON() to make sure we have structures divisible by long
      function_graph: Add an array structure that will allow multiple callbacks
      function_graph: Allow multiple users to attach to function graph
      function_graph: Remove logic around ftrace_graph_entry and return
      ftrace/function_graph: Pass fgraph_ops to function graph callbacks
      ftrace: Allow function_graph tracer to be enabled in instances
      ftrace: Allow ftrace startup flags exist without dynamic ftrace
      function_graph: Have the instances use their own ftrace_ops for filtering
      function_graph: Add "task variables" per task for fgraph_ops
      function_graph: Move set_graph_function tests to shadow stack global var
      function_graph: Move graph depth stored data to shadow stack global var
      function_graph: Move graph notrace bit to shadow stack global var
      function_graph: Implement fgraph_reserve_data() and fgraph_retrieve_data()
      function_graph: Add selftest for passing local variables


 Documentation/trace/fprobe.rst                     |   42 +
 arch/arm64/Kconfig                                 |    3 
 arch/arm64/include/asm/ftrace.h                    |   47 +
 arch/arm64/kernel/asm-offsets.c                    |   12 
 arch/arm64/kernel/entry-ftrace.S                   |   32 -
 arch/arm64/kernel/ftrace.c                         |   21 
 arch/loongarch/Kconfig                             |    4 
 arch/loongarch/include/asm/ftrace.h                |   32 -
 arch/loongarch/kernel/asm-offsets.c                |   12 
 arch/loongarch/kernel/ftrace_dyn.c                 |   15 
 arch/loongarch/kernel/mcount.S                     |   17 
 arch/loongarch/kernel/mcount_dyn.S                 |   14 
 arch/powerpc/Kconfig                               |    1 
 arch/powerpc/include/asm/ftrace.h                  |   15 
 arch/powerpc/kernel/trace/ftrace.c                 |    3 
 arch/powerpc/kernel/trace/ftrace_64_pg.c           |   10 
 arch/riscv/Kconfig                                 |    3 
 arch/riscv/include/asm/ftrace.h                    |   21 
 arch/riscv/kernel/ftrace.c                         |   15 
 arch/riscv/kernel/mcount.S                         |   24 
 arch/s390/Kconfig                                  |    3 
 arch/s390/include/asm/ftrace.h                     |   39 -
 arch/s390/kernel/asm-offsets.c                     |    6 
 arch/s390/kernel/mcount.S                          |    9 
 arch/x86/Kconfig                                   |    4 
 arch/x86/include/asm/ftrace.h                      |   43 -
 arch/x86/kernel/ftrace.c                           |   51 +
 arch/x86/kernel/ftrace_32.S                        |   15 
 arch/x86/kernel/ftrace_64.S                        |   17 
 include/linux/fprobe.h                             |   57 +
 include/linux/ftrace.h                             |  170 +++
 include/linux/sched.h                              |    2 
 include/linux/trace_recursion.h                    |   39 -
 kernel/trace/Kconfig                               |   23 
 kernel/trace/bpf_trace.c                           |   14 
 kernel/trace/fgraph.c                              | 1005 ++++++++++++++++----
 kernel/trace/fprobe.c                              |  637 +++++++++----
 kernel/trace/ftrace.c                              |   13 
 kernel/trace/ftrace_internal.h                     |    2 
 kernel/trace/trace.h                               |   96 ++
 kernel/trace/trace_fprobe.c                        |  147 ++-
 kernel/trace/trace_functions.c                     |    8 
 kernel/trace/trace_functions_graph.c               |   98 +-
 kernel/trace/trace_irqsoff.c                       |   12 
 kernel/trace/trace_probe_tmpl.h                    |    2 
 kernel/trace/trace_sched_wakeup.c                  |   12 
 kernel/trace/trace_selftest.c                      |  262 +++++
 lib/test_fprobe.c                                  |   51 -
 samples/fprobe/fprobe_example.c                    |    4 
 .../test.d/dynevent/add_remove_fprobe_repeat.tc    |   19 
 .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc |    4 
 51 files changed, 2325 insertions(+), 882 deletions(-)
 create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>

Comments

Masami Hiramatsu (Google) April 19, 2024, 5:36 a.m. UTC | #1
Hi Steve,

Can you review this series? In particular, [07/36] and [12/36] have been
changed a lot from your original patches.

Thank you,

On Mon, 15 Apr 2024 21:48:59 +0900
"Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:

> [ cover letter quoted in full; trimmed ]
Steven Rostedt April 19, 2024, 8:01 a.m. UTC | #2
On Fri, 19 Apr 2024 14:36:18 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> Hi Steve,
> 
> Can you review this series? Especially, [07/36] and [12/36] has been changed
> a lot from your original patch.

I haven't forgotten (just been a bit hectic).

Worse comes to worse, I'll review it tomorrow.

-- Steve

Florent Revest April 24, 2024, 1:35 p.m. UTC | #3
Neat! :) I had a look at mostly the "high level" part (fprobe and
arm64 specific bits) and this seems to be in a good state to me.

Thanks for all that work, that is quite a refactoring :)

On Mon, Apr 15, 2024 at 2:49 PM Masami Hiramatsu (Google)
<mhiramat@kernel.org> wrote:
> [ cover letter quoted in full; trimmed ]
Masami Hiramatsu (Google) April 25, 2024, 3:10 p.m. UTC | #4
On Wed, 24 Apr 2024 15:35:15 +0200
Florent Revest <revest@chromium.org> wrote:

> Neat! :) I had a look at mostly the "high level" part (fprobe and
> arm64 specific bits) and this seems to be in a good state to me.
> 

Thanks for reviewing this long series!

> Thanks for all that work, that is quite a refactoring :)
> 
> On Mon, Apr 15, 2024 at 2:49 PM Masami Hiramatsu (Google)
> <mhiramat@kernel.org> wrote:
> > [ cover letter quoted in full; trimmed ]
Andrii Nakryiko April 25, 2024, 8:31 p.m. UTC | #5
On Mon, Apr 15, 2024 at 5:49 AM Masami Hiramatsu (Google)
<mhiramat@kernel.org> wrote:
>
> Hi,
>
> Here is the 9th version of the series to re-implement the fprobe on
> function-graph tracer. The previous version is;
>
> https://lore.kernel.org/all/170887410337.564249.6360118840946697039.stgit@devnote2/
>
> This version is ported on the latest kernel (v6.9-rc3 + probes/for-next)
> and fixed some bugs + performance optimization patch[36/36].
>  - [12/36] Fix to clear fgraph_array entry in registration failure, also
>            return -ENOSPC when fgraph_array is full.
>  - [28/36] Add new store_fprobe_entry_data() for fprobe.
>  - [31/36] Remove DIV_ROUND_UP() and fix entry data address calculation.
>  - [36/36] Add new flag to skip timestamp recording.
>
> Overview
> --------
> This series does major 2 changes, enable multiple function-graphs on
> the ftrace (e.g. allow function-graph on sub instances) and rewrite the
> fprobe on this function-graph.
>
> The former changes had been sent from Steven Rostedt 4 years ago (*),
> which allows users to set different setting function-graph tracer (and
> other tracers based on function-graph) in each trace-instances at the
> same time.
>
> (*) https://lore.kernel.org/all/20190525031633.811342628@goodmis.org/
>
> The purpose of latter change are;
>
>  1) Remove dependency of the rethook from fprobe so that we can reduce
>    the return hook code and shadow stack.
>
>  2) Make 'ftrace_regs' the common trace interface for the function
>    boundary.
>
> 1) Currently we have 2(or 3) different function return hook codes,
>  the function-graph tracer and rethook (and legacy kretprobe).
>  But since this  is redundant and needs double maintenance cost,
>  I would like to unify those. From the user's viewpoint, function-
>  graph tracer is very useful to grasp the execution path. For this
>  purpose, it is hard to use the rethook in the function-graph
>  tracer, but the opposite is possible. (Strictly speaking, kretprobe
>  can not use it because it requires 'pt_regs' for historical reasons.)
>
> 2) Now the fprobe provides the 'pt_regs' for its handler, but that is
>  wrong for the function entry and exit. Moreover, depending on the
>  architecture, there is no way to accurately reproduce 'pt_regs'
>  outside of interrupt or exception handlers. This means fprobe should
>  not use 'pt_regs' because it does not use such exceptions.
>  (Conversely, kprobe should use 'pt_regs' because it is an abstract
>   interface of the software breakpoint exception.)
>
> This series changes fprobe to use function-graph tracer for tracing
> function entry and exit, instead of mixture of ftrace and rethook.
> Unlike the rethook which is a per-task list of system-wide allocated
> nodes, the function graph's ret_stack is a per-task shadow stack.
> Thus it does not need to set 'nr_maxactive' (which is the number of
> pre-allocated nodes).
> Also the handlers will get the 'ftrace_regs' instead of 'pt_regs'.
> Since eBPF mulit_kprobe/multi_kretprobe events still use 'pt_regs' as
> their register interface, this changes it to convert 'ftrace_regs' to
> 'pt_regs'. Of course this conversion makes an incomplete 'pt_regs',
> so users must access only registers for function parameters or
> return value.
>
> Design
> ------
> Instead of using ftrace's function entry hook directly, the new fprobe
> is built on top of the function-graph's entry and return callbacks
> with 'ftrace_regs'.
>
> Since the fprobe requires access to 'ftrace_regs', the architecture
> must support CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS and
> CONFIG_HAVE_FTRACE_GRAPH_FUNC, which enable calling the function-graph
> entry callback with 'ftrace_regs', and also
> CONFIG_HAVE_FUNCTION_GRAPH_FREGS, which passes 'ftrace_regs' to
> return_to_handler.
>
> All fprobes share a single function-graph ops (which means they share
> a common ftrace filter), similar to kprobe-on-ftrace. This needs
> another layer to find the corresponding fprobe in the common
> function-graph callbacks, but it scales much better, since the number
> of function-graph ops that can be registered is limited.
>
> In the entry callback, the fprobe runs its entry_handler and saves the
> address of 'fprobe' on the function-graph's shadow stack as data. The
> return callback decodes the data to get the 'fprobe' address, and runs
> the exit_handler.
>
> The fprobe introduces two hash tables: one for the entry callback,
> which looks up the fprobes related to the function address passed to
> that callback, and the other for the return callback, which checks
> whether a given 'fprobe' data structure pointer is still valid. Note
> that an fprobe can be unregistered before its return callback runs, so
> the pointer must be validated before the return callback uses it.
>
> This series can be applied against the probes/for-next branch, which
> is based on v6.9-rc3.
>
> This series can also be found in the branch below.
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph
>
> Thank you,
>
> ---
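
To make the design described above concrete, here is a minimal sketch of
the two callbacks (a sketch only: the helper names lookup_fprobe_by_addr()
and fprobe_is_valid(), and the handler signatures, are illustrative
stand-ins, not the series' actual API):

#include <linux/types.h>

struct ftrace_regs;

struct fprobe {
	int  (*entry_handler)(struct fprobe *fp, unsigned long ip,
			      struct ftrace_regs *fregs);
	void (*exit_handler)(struct fprobe *fp, unsigned long ip,
			     struct ftrace_regs *fregs);
};

/* Hypothetical helpers backed by the two hash tables described above. */
struct fprobe *lookup_fprobe_by_addr(unsigned long ip);
bool fprobe_is_valid(struct fprobe *fp);

/* Entry callback, invoked through the single shared function-graph ops. */
static int fprobe_fgraph_entry(unsigned long ip, struct ftrace_regs *fregs,
			       void *reserved_data)
{
	struct fprobe *fp = lookup_fprobe_by_addr(ip);	/* hash table #1 */

	if (!fp)
		return 0;	/* not an fprobe'd function, no return hook */

	if (fp->entry_handler)
		fp->entry_handler(fp, ip, fregs);

	/* Park the fprobe pointer on the per-task shadow stack. */
	*(struct fprobe **)reserved_data = fp;
	return 1;		/* ask fgraph to invoke the return callback */
}

/* Return callback: decode the stored pointer and re-validate it. */
static void fprobe_fgraph_return(unsigned long ip, struct ftrace_regs *fregs,
				 void *reserved_data)
{
	struct fprobe *fp = *(struct fprobe **)reserved_data;

	if (!fprobe_is_valid(fp))	/* hash table #2: still registered? */
		return;

	if (fp->exit_handler)
		fp->exit_handler(fp, ip, fregs);
}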

Hey Masami,

I can't really review most of that code as I'm completely unfamiliar
with all those inner workings of fprobe/ftrace/function_graph. I left
a few comments where there were somewhat more obvious BPF-related
pieces.

But I also did run our BPF benchmarks on probes/for-next as a baseline
and then with your series applied on top. Just to see if there are any
regressions. I think it will be a useful data point for you.

You should be already familiar with the bench tool we have in BPF
selftests (I used it on some other patches for your tree).

BASELINE
========
kprobe         :   24.634 ± 0.205M/s
kprobe-multi   :   28.898 ± 0.531M/s
kretprobe      :   10.478 ± 0.015M/s
kretprobe-multi:   11.012 ± 0.063M/s

THIS PATCH SET ON TOP
=====================
kprobe         :   25.144 ± 0.027M/s (+2%)
kprobe-multi   :   28.909 ± 0.074M/s
kretprobe      :    9.482 ± 0.008M/s (-9.5%)
kretprobe-multi:   13.688 ± 0.027M/s (+24%)

These numbers are pretty stable and look to be more or less representative.

As you can see, kprobes got a bit faster, kprobe-multi seems to be
about the same, though.

Then (I suppose they are "legacy") kretprobes got quite noticeably
slower, almost by 10%. Not sure why, but looks real after re-running
benchmarks a bunch of times and getting stable results.

On the other hand, multi-kretprobes got significantly faster (+24%!).
Again, I don't know if it is expected or not, but it's a nice
improvement.

If you have any idea why kretprobes would get so much slower, it would
be nice to look into that and see if you can mitigate the regression
somehow. Thanks!


Steven Rostedt April 28, 2024, 11:25 p.m. UTC | #6
On Thu, 25 Apr 2024 13:31:53 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

I'm just coming back from Japan (work and then a vacation), and
catching up on my email during the 6 hour layover in Detroit.

> Hey Masami,
> 
> I can't really review most of that code as I'm completely unfamiliar
> with all those inner workings of fprobe/ftrace/function_graph. I left
> a few comments where there were somewhat more obvious BPF-related
> pieces.
> 
> But I also did run our BPF benchmarks on probes/for-next as a baseline
> and then with your series applied on top. Just to see if there are any
> regressions. I think it will be a useful data point for you.
> 
> You should be already familiar with the bench tool we have in BPF
> selftests (I used it on some other patches for your tree).

I should get familiar with your tools too.

> 
> BASELINE
> ========
> kprobe         :   24.634 ± 0.205M/s
> kprobe-multi   :   28.898 ± 0.531M/s
> kretprobe      :   10.478 ± 0.015M/s
> kretprobe-multi:   11.012 ± 0.063M/s
> 
> THIS PATCH SET ON TOP
> =====================
> kprobe         :   25.144 ± 0.027M/s (+2%)
> kprobe-multi   :   28.909 ± 0.074M/s
> kretprobe      :    9.482 ± 0.008M/s (-9.5%)
> kretprobe-multi:   13.688 ± 0.027M/s (+24%)
> 
> These numbers are pretty stable and look to be more or less representative.

Thanks for running this.

> 
> As you can see, kprobes got a bit faster, kprobe-multi seems to be
> about the same, though.
> 
> Then (I suppose they are "legacy") kretprobes got quite noticeably
> slower, almost by 10%. Not sure why, but looks real after re-running
> benchmarks a bunch of times and getting stable results.
> 
> On the other hand, multi-kretprobes got significantly faster (+24%!).
> Again, I don't know if it is expected or not, but it's a nice
> improvement.
> 
> If you have any idea why kretprobes would get so much slower, it would
> be nice to look into that and see if you can mitigate the regression
> somehow. Thanks!

My guess is that this patch set helps generic use cases for tracing the
return of functions, but will likely add more overhead for single use
cases. That is, kretprobe is made to be specific for a single function,
but kretprobe-multi is more generic. Hence the generic version will
improve at the sacrifice of the specific function. I did expect as much.

That said, I think there's probably a lot of low-hanging fruit in this
series that could help improve the kretprobe performance. I'm
not sure we can get back to the baseline, but I'm hoping we can at
least make it much better than that 10% slowdown.

I'll be reviewing this patch set this week as I recover from jetlag.

-- Steve
Masami Hiramatsu (Google) April 29, 2024, 1:51 p.m. UTC | #7
Hi Andrii,

On Thu, 25 Apr 2024 13:31:53 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> Hey Masami,
> 
> I can't really review most of that code as I'm completely unfamiliar
> with all those inner workings of fprobe/ftrace/function_graph. I left
> a few comments where there were somewhat more obvious BPF-related
> pieces.
> 
> But I also did run our BPF benchmarks on probes/for-next as a baseline
> and then with your series applied on top. Just to see if there are any
> regressions. I think it will be a useful data point for you.

Thanks for testing!

> 
> You should be already familiar with the bench tool we have in BPF
> selftests (I used it on some other patches for your tree).

Which patches do we need?

> 
> BASELINE
> ========
> kprobe         :   24.634 ± 0.205M/s
> kprobe-multi   :   28.898 ± 0.531M/s
> kretprobe      :   10.478 ± 0.015M/s
> kretprobe-multi:   11.012 ± 0.063M/s
> 
> THIS PATCH SET ON TOP
> =====================
> kprobe         :   25.144 ± 0.027M/s (+2%)
> kprobe-multi   :   28.909 ± 0.074M/s
> kretprobe      :    9.482 ± 0.008M/s (-9.5%)
> kretprobe-multi:   13.688 ± 0.027M/s (+24%)

This looks good. The legacy kretprobe should also eventually use the
fprobe backend, since it is effectively a single-callback version of
kretprobe-multi.

> 
> These numbers are pretty stable and look to be more or less representative.
> 
> As you can see, kprobes got a bit faster, kprobe-multi seems to be
> about the same, though.
> 
> Then (I suppose they are "legacy") kretprobes got quite noticeably
> slower, almost by 10%. Not sure why, but looks real after re-running
> benchmarks a bunch of times and getting stable results.

Hmm, kretprobe on x86 should use ftrace + rethook even with my series,
so nothing should have changed. Maybe the cache access pattern has
changed?
I'll check it with tracefs (to rule out effects from the BPF-related changes).

> 
> On the other hand, multi-kretprobes got significantly faster (+24%!).
> Again, I don't know if it is expected or not, but it's a nice
> improvement.

Thanks!

> 
> If you have any idea why kretprobes would get so much slower, it would
> be nice to look into that and see if you can mitigate the regression
> somehow. Thanks!

OK, let me check it.

Thank you!

Andrii Nakryiko April 29, 2024, 8:25 p.m. UTC | #8
On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Hi Andrii,
>
> >
> > You should be already familiar with the bench tool we have in BPF
> > selftests (I used it on some other patches for your tree).
>
> Which patches do we need?
>

You mean for this `bench` tool? They are part of BPF selftests (under
tools/testing/selftests/bpf), you can build them by running:

$ make RELEASE=1 -j$(nproc) bench

After that you'll get a self-contained `bench` binary, which has all
the benchmarks built in.

You might also find a small script (benchs/run_bench_trigger.sh inside
the BPF selftests directory) helpful; it collects the final summary of the
benchmark run and optionally accepts a specific set of benchmarks. So
you can use it like this:

$ benchs/run_bench_trigger.sh kprobe kprobe-multi
kprobe         :   18.731 ± 0.639M/s
kprobe-multi   :   23.938 ± 0.612M/s

By default it will run a wider set of benchmarks (no uprobes, but a
bunch of extra fentry/fexit tests and stuff like this).

Andrii Nakryiko April 29, 2024, 8:28 p.m. UTC | #9
On Sun, Apr 28, 2024 at 4:25 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Thu, 25 Apr 2024 13:31:53 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> I'm just coming back from Japan (work and then a vacation), and
> catching up on my email during the 6 hour layover in Detroit.
>
> > You should be already familiar with the bench tool we have in BPF
> > selftests (I used it on some other patches for your tree).
>
> I should get familiar with your tools too.
>

It's a nifty and self-contained tool for doing some micro-benchmarking;
I replied to Masami with a few details on how to build and use it.

>
> My guess is that this patch set helps generic use cases for tracing the
> return of functions, but will likely add more overhead for single use
> cases. That is, kretprobe is made to be specific for a single function,
> but kretprobe-multi is more generic. Hence the generic version will
> improve at the sacrifice of the specific function. I did expect as much.
>
> That said, I think there's probably a lot of low-hanging fruit in this
> series that could help improve the kretprobe performance. I'm
> not sure we can get back to the baseline, but I'm hoping we can at
> least make it much better than that 10% slowdown.

That would certainly be appreciated, thanks!

But I'm also considering trying to switch to multi-kprobe/kretprobe
automatically on libbpf side, whenever possible, so that users can get
the best performance. There might still be situations where this can't
be done, so singular kprobe/kretprobe can't be completely deprecated,
but the multi variants seem to be universally faster, so I'm going to
make them the default (I need to handle some backwards compat aspects,
but that's libbpf-specific stuff you shouldn't be concerned with).
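
For context, from the BPF program side the two flavors differ only in the
SEC() name (a minimal sketch; the glob pattern and the functions attached
to are illustrative, not what the benchmark actually uses):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

/* Singular kretprobe: one function, rethook-based return trampoline. */
SEC("kretprobe/bpf_fentry_test1")
int single_ret(struct pt_regs *ctx)
{
	return 0;
}

/* Multi kretprobe: glob pattern, fprobe-based (fgraph with this series). */
SEC("kretprobe.multi/bpf_fentry_test*")
int multi_ret(struct pt_regs *ctx)
{
	return 0;
}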

Masami Hiramatsu (Google) April 30, 2024, 1:32 p.m. UTC | #10
On Mon, 29 Apr 2024 13:25:04 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > Which patches do we need?
> >
> 
> You mean for this `bench` tool? They are part of BPF selftests (under
> tools/testing/selftests/bpf), you can build them by running:
> 
> $ make RELEASE=1 -j$(nproc) bench
> 
> After that you'll get a self-contained `bench` binary, which has all
> the benchmarks built in.
> 
> You might also find a small script (benchs/run_bench_trigger.sh inside
> the BPF selftests directory) helpful; it collects the final summary of the
> benchmark run and optionally accepts a specific set of benchmarks. So
> you can use it like this:
> 
> $ benchs/run_bench_trigger.sh kprobe kprobe-multi
> kprobe         :   18.731 ± 0.639M/s
> kprobe-multi   :   23.938 ± 0.612M/s
> 
> By default it will run a wider set of benchmarks (no uprobes, but a
> bunch of extra fentry/fexit tests and stuff like this).

origin:
# benchs/run_bench_trigger.sh 
kretprobe :    1.329 ± 0.007M/s 
kretprobe-multi:    1.341 ± 0.004M/s 
# benchs/run_bench_trigger.sh 
kretprobe :    1.288 ± 0.014M/s 
kretprobe-multi:    1.365 ± 0.002M/s 
# benchs/run_bench_trigger.sh 
kretprobe :    1.329 ± 0.002M/s 
kretprobe-multi:    1.331 ± 0.011M/s 
# benchs/run_bench_trigger.sh 
kretprobe :    1.311 ± 0.003M/s 
kretprobe-multi:    1.318 ± 0.002M/s 

patched: 

# benchs/run_bench_trigger.sh
kretprobe :    1.274 ± 0.003M/s 
kretprobe-multi:    1.397 ± 0.002M/s 
# benchs/run_bench_trigger.sh
kretprobe :    1.307 ± 0.002M/s 
kretprobe-multi:    1.406 ± 0.004M/s 
# benchs/run_bench_trigger.sh
kretprobe :    1.279 ± 0.004M/s 
kretprobe-multi:    1.330 ± 0.014M/s 
# benchs/run_bench_trigger.sh
kretprobe :    1.256 ± 0.010M/s 
kretprobe-multi:    1.412 ± 0.003M/s 

Hmm, in my case, the differences seem smaller (~3%?).
I attached perf report results for both, but I don't see a large difference.

> > This looks good. The legacy kretprobe should also eventually use the
> > fprobe backend, since it is effectively a single-callback version of
> > kretprobe-multi.

I ran another benchmark (prctl loop, attached); the original kernel result is:

# sh ./benchmark.sh 
count = 10000000, took 6.748133 sec

And the patched kernel result:

# sh ./benchmark.sh 
count = 10000000, took 6.644095 sec
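
(The attached script itself is not preserved in this archive. For
reference, a hypothetical reconstruction of a prctl-loop benchmark
matching the output format above; PR_GET_DUMPABLE stands in for
whichever probed prctl the script actually used:)

#include <stdio.h>
#include <time.h>
#include <sys/prctl.h>

int main(void)
{
	const long count = 10000000;
	struct timespec start, end;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < count; i++)
		prctl(PR_GET_DUMPABLE, 0, 0, 0, 0);	/* probed syscall */
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("count = %ld, took %f sec\n", count,
	       (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9);
	return 0;
}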

I confirmed that the perf result shows no big difference.

Thank you,


Andrii Nakryiko April 30, 2024, 4:29 p.m. UTC | #11
On Tue, Apr 30, 2024 at 6:32 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
>
> origin:
> # benchs/run_bench_trigger.sh
> kretprobe :    1.329 ± 0.007M/s
> kretprobe-multi:    1.341 ± 0.004M/s
> # benchs/run_bench_trigger.sh
> kretprobe :    1.288 ± 0.014M/s
> kretprobe-multi:    1.365 ± 0.002M/s
> # benchs/run_bench_trigger.sh
> kretprobe :    1.329 ± 0.002M/s
> kretprobe-multi:    1.331 ± 0.011M/s
> # benchs/run_bench_trigger.sh
> kretprobe :    1.311 ± 0.003M/s
> kretprobe-multi:    1.318 ± 0.002M/s s
>
> patched:
>
> # benchs/run_bench_trigger.sh
> kretprobe :    1.274 ± 0.003M/s
> kretprobe-multi:    1.397 ± 0.002M/s
> # benchs/run_bench_trigger.sh
> kretprobe :    1.307 ± 0.002M/s
> kretprobe-multi:    1.406 ± 0.004M/s
> # benchs/run_bench_trigger.sh
> kretprobe :    1.279 ± 0.004M/s
> kretprobe-multi:    1.330 ± 0.014M/s
> # benchs/run_bench_trigger.sh
> kretprobe :    1.256 ± 0.010M/s
> kretprobe-multi:    1.412 ± 0.003M/s
>
> Hmm, in my case, the differences seem smaller (~3%?).
> I attached perf report results for both, but I don't see a large difference.

I ran my benchmarks on a bare-metal machine (quite a powerful one at
that; you can see my numbers are almost 10x yours), with mitigations
disabled, no retpolines, etc. If you have any of those mitigations
enabled, that might result in smaller differences. If you are running
inside QEMU or a VM, the results might differ significantly as well.

Masami Hiramatsu (Google) May 2, 2024, 2:06 a.m. UTC | #12
On Tue, 30 Apr 2024 09:29:40 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> On Tue, Apr 30, 2024 at 6:32 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > Hmm, in my case, the differences seem smaller (~3%?).
> > I attached perf report results for both, but I don't see a large difference.
> 
> I ran my benchmarks on a bare-metal machine (quite a powerful one at
> that; you can see my numbers are almost 10x yours), with mitigations
> disabled, no retpolines, etc. If you have any of those mitigations
> enabled, that might result in smaller differences. If you are running
> inside QEMU or a VM, the results might differ significantly as well.

I ran it on my bare-metal machines again, but could not find any difference
between them. However, I think I have the Intel mitigations enabled, which
might explain the difference from your result.

Can you run the benchmark with perf record? If there are such differences,
they should show up in the recording, e.g.:

# perf record -g -o perf.data-kretprobe-nopatch-raw-bpf -- bench -w2 -d5 -a trig-kretprobe 
# perf report -G -i perf.data-kretprobe-nopatch-raw-bpf -k $VMLINUX --stdio > perf-out-kretprobe-nopatch-raw-bpf

I attached the results from my side.
The interesting point is that the functions in the result are not touched by
this series. Thus there may be another reason if you still see the kretprobe
regression.

Thank you,
Andrii Nakryiko May 7, 2024, 9:04 p.m. UTC | #13
On Wed, May 1, 2024 at 7:06 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> I ran it on my bare-metal machines again, but could not find any difference
> between them. However, I think I have the Intel mitigations enabled, which
> might explain the difference from your result.
>
> Can you run the benchmark with perf record? If there are such differences,
> they should show up in the recording.

I can, yes; I will try to do it this week. I'm just trying to keep up
with the rest of the stuff on my plate and haven't yet found time for
this. I'll get back to you (and I'll use the latest version of your
patch set, of course).
