[0/8] arm64/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS

Message ID 20230109135828.879136-1-mark.rutland@arm.com
Series arm64/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS

Message

Mark Rutland Jan. 9, 2023, 1:58 p.m. UTC
This series adds a new DYNAMIC_FTRACE_WITH_CALL_OPS mechanism, and
enables support for this on arm64. This significantly reduces the
overhead of tracing when a callsite/tracee has a single associated
tracer, avoids a number of issues that make it undesirable and
infeasible to use dynamically-allocated trampolines (e.g. branch range
limitations), and makes it possible to implement support for
DYNAMIC_FTRACE_WITH_DIRECT_CALLS in future.

The main idea is to give each ftrace callsite an associated pointer to
an ftrace_ops. The architecture's ftrace_caller trampoline can recover
the ops pointer and invoke ops->func from this without needing to use
ftrace_ops_list_func, which has to iterate through all registered ops.
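
As a rough sketch (simplified C, not the kernel's actual code; callsite_get_ops()
is a hypothetical helper standing in for the trampoline's load of the
per-callsite literal), the difference between the two dispatch schemes is:

	/* Sketch only: list-based dispatch has to visit every registered ops. */
	static void list_dispatch(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_regs *fregs)
	{
		struct ftrace_ops *ops;

		for (ops = ftrace_ops_list; ops; ops = ops->next) {
			if (ftrace_ops_test(ops, ip, NULL))
				ops->func(ip, parent_ip, ops, fregs);
		}
	}

	/* Sketch only: per-callsite dispatch invokes the single relevant ops. */
	static void call_ops_dispatch(unsigned long ip, unsigned long parent_ip,
				      struct ftrace_regs *fregs)
	{
		/*
		 * Hypothetical helper: load the 64-bit ftrace_ops pointer
		 * stored in the patched NOPs ahead of the instrumented
		 * function.
		 */
		struct ftrace_ops *ops = callsite_get_ops(ip);

		ops->func(ip, parent_ip, ops, fregs);
	}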

To do this, we use -fpatchable-function-entry=M,N, where N NOPs are
placed before the function entry point. On arm64 NOPs are always 4
bytes, so by allocating 2 per-function NOPs, we have enough space to
place a 64-bit value. So that we can manipulate the pointer atomically,
we need to align instrumented functions to at least 8 bytes.
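
For illustration only (a minimal sketch under assumed offsets, not the code
from this series), an aligned 64-bit literal can then be updated with a single
store; the series itself adds aarch64_insn_write_literal_u64() (patch 6) for
the real update:

	/*
	 * Illustrative sketch: assumes the 64-bit literal occupies the 8
	 * bytes immediately before the function entry point, and that the
	 * entry point (and hence the literal) is aligned to at least 8 bytes.
	 */
	static void set_callsite_ops(void *func_entry, struct ftrace_ops *ops)
	{
		u64 *literal = func_entry - 8;

		/* A single aligned 64-bit store is atomic w.r.t. concurrent readers. */
		WRITE_ONCE(*literal, (u64)ops);
	}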

The first three patches enable this function alignment, requiring changes
to the ACPICA Makefile and working around cases where GCC drops alignment.

The fourth patch adds the core DYNAMIC_FTRACE_WITH_CALL_OPS mechanism to the
generic ftrace code, and the final four patches implement support for arm64.
As noted in the final patch, this results in a significant reduction in
overhead:

  Before this patch:

  Number of tracers     || Total time  | Per-call average time (ns)
  Relevant | Irrelevant || (ns)        | Total        | Overhead
  =========+============++=============+==============+============
         0 |          0 ||      94,583 |         0.95 |           -
         0 |          1 ||      93,709 |         0.94 |           -
         0 |          2 ||      93,666 |         0.94 |           -
         0 |         10 ||      93,709 |         0.94 |           -
         0 |        100 ||      93,792 |         0.94 |           -
  ---------+------------++-------------+--------------+------------
         1 |          1 ||   6,467,833 |        64.68 |       63.73
         1 |          2 ||   7,509,708 |        75.10 |       74.15
         1 |         10 ||  23,786,792 |       237.87 |      236.92
         1 |        100 || 106,432,500 |     1,064.43 |    1,063.38
  ---------+------------++-------------+--------------+------------
         1 |          0 ||   1,431,875 |        14.32 |       13.37
         2 |          0 ||   6,456,334 |        64.56 |       63.62
        10 |          0 ||  22,717,000 |       227.17 |      226.22
       100 |          0 || 103,293,667 |     1,032.94 |    1,031.99
  ---------+------------++-------------+--------------+------------

  Note: per-call overhead is estimated relative to the baseline case
  with 0 relevant tracers and 0 irrelevant tracers.
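  For example, with 1 relevant and 1 irrelevant tracer, the per-call
  overhead is 64.68ns - 0.95ns = 63.73ns.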

  After this patch:

  Number of tracers     || Total time  | Per-call average time (ns)
  Relevant | Irrelevant || (ns)        | Total        | Overhead
  =========+============++=============+==============+============
         0 |          0 ||      94,541 |         0.95 |           -
         0 |          1 ||      93,666 |         0.94 |           -
         0 |          2 ||      93,709 |         0.94 |           -
         0 |         10 ||      93,667 |         0.94 |           -
         0 |        100 ||      93,792 |         0.94 |           -
  ---------+------------++-------------+--------------+------------
         1 |          1 ||     281,000 |         2.81 |        1.86
         1 |          2 ||     281,042 |         2.81 |        1.87
         1 |         10 ||     280,958 |         2.81 |        1.86
         1 |        100 ||     281,250 |         2.81 |        1.87
  ---------+------------++-------------+--------------+------------
         1 |          0 ||     280,959 |         2.81 |        1.86
         2 |          0 ||   6,502,708 |        65.03 |       64.08
        10 |          0 ||  18,681,209 |       186.81 |      185.87
       100 |          0 || 103,550,458 |     1,035.50 |    1,034.56
  ---------+------------++-------------+--------------+------------

  Note: per-call overhead is estimated relative to the baseline case
  with 0 relevant tracers and 0 irrelevant tracers.

Thanks,
Mark.

Mark Rutland (8):
  Compiler attributes: GCC function alignment workarounds
  ACPI: Don't build ACPICA with '-Os'
  arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
  ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS
  arm64: insn: Add helpers for BTI
  arm64: patching: Add aarch64_insn_write_literal_u64()
  arm64: ftrace: Update stale comment
  arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS

 arch/arm64/Kconfig                  |   3 +
 arch/arm64/Makefile                 |   5 +-
 arch/arm64/include/asm/ftrace.h     |  15 +--
 arch/arm64/include/asm/insn.h       |   1 +
 arch/arm64/include/asm/linkage.h    |  10 +-
 arch/arm64/include/asm/patching.h   |   2 +
 arch/arm64/kernel/asm-offsets.c     |   4 +
 arch/arm64/kernel/entry-ftrace.S    |  32 +++++-
 arch/arm64/kernel/ftrace.c          | 158 +++++++++++++++++++++++++++-
 arch/arm64/kernel/patching.c        |  17 +++
 drivers/acpi/acpica/Makefile        |   2 +-
 include/linux/compiler_attributes.h |  23 +++-
 include/linux/ftrace.h              |  15 ++-
 kernel/trace/Kconfig                |   7 ++
 kernel/trace/ftrace.c               | 109 ++++++++++++++++++-
 15 files changed, 371 insertions(+), 32 deletions(-)

Comments

David Laight Jan. 10, 2023, 8:55 a.m. UTC | #1
From: Mark Rutland
> Sent: 09 January 2023 13:58
> 
> This series adds a new DYNAMIC_FTRACE_WITH_CALL_OPS mechanism, and
> enables support for this on arm64. This significantly reduces the
> overhead of tracing when a callsite/tracee has a single associated
> tracer, avoids a number of issues that make it undesirable and
> infeasible to use dynamically-allocated trampolines (e.g. branch range
> limitations), and makes it possible to implement support for
> DYNAMIC_FTRACE_WITH_DIRECT_CALLS in future.
> 
> The main idea is to give each ftrace callsite an associated pointer to
> an ftrace_ops. The architecture's ftrace_caller trampoline can recover
> the ops pointer and invoke ops->func from this without needing to use
> ftrace_ops_list_func, which has to iterate through all registered ops.
> 
> To do this, we use -fpatchable-function-entry=M,N, where N NOPs are
> placed before the function entry point...

Doesn't this bump the minimum gcc version up to something like 9.0 ?

How does it interact with the 'CFI stuff' that also uses the same area?

	David

Mark Rutland Jan. 10, 2023, 10:31 a.m. UTC | #2
On Tue, Jan 10, 2023 at 08:55:58AM +0000, David Laight wrote:
> From: Mark Rutland
> > Sent: 09 January 2023 13:58
> > 
> > This series adds a new DYNAMIC_FTRACE_WITH_CALL_OPS mechanism, and
> > enables support for this on arm64. This significantly reduces the
> > overhead of tracing when a callsite/tracee has a single associated
> > tracer, avoids a number of issues that make it undesirable and
> > infeasible to use dynamically-allocated trampolines (e.g. branch range
> > limitations), and makes it possible to implement support for
> > DYNAMIC_FTRACE_WITH_DIRECT_CALLS in future.
> > 
> > The main idea is to give each ftrace callsite an associated pointer to
> > an ftrace_ops. The architecture's ftrace_caller trampoline can recover
> > the ops pointer and invoke ops->func from this without needing to use
> > ftrace_ops_list_func, which has to iterate through all registered ops.
> > 
> > To do this, we use -fpatchable-function-entry=M,N, where N NOPs are
> > placed before the function entry point...
> 
> Doesn't this bump the minimum gcc version up to something like 9.0 ?

This doesn't bump the minimum GCC version, but users of older toolchains
won't get the speedup.

We already support -fpatchable-function-entry based ftrace with GCC 8+ (and
this is necessary to play nicely with pointer authentication); for older GCC
versions we still support using -pg / mcount.

> How does it interact with the 'CFI stuff' that also uses the same area?

There's some more detail in patch 8, but the summary is that they're mutually
exclusive for now (enforced by Kconfig), and I'm working with others to get the
improved compiler support necessary for them to play nicely together.

Currently LLVM will place the type hash before the pre-function NOPs, which
works if everything has pre-function NOPs, but doesn't work for calls between
instrumented and non-instrumented functions, since the latter don't have
pre-function NOPs and so the type hash ends up at a different offset. Making
the two work well together will need improved compiler support, which I'm
pursuing with others at the moment.
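
To illustrate the problem (a layout sketch only; the exact placement is up to
the compiler):

	/*
	 * instrumented callee:      [type hash][pre-function NOPs][entry point]
	 * non-instrumented callee:  [type hash][entry point]
	 *
	 * An indirect caller loads the callee's type hash from a fixed offset
	 * before the entry point, so the two layouts above need different load
	 * offsets and the caller can't know which one applies.
	 */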

GCC doesn't currently have KCFI support, but the plan is to match whatever LLVM
does.

Atop that we'll need some trivial changes to the asm function macros, but
without the underlying compiler support there's not much point.

Thanks,
Mark.