diff mbox series

[v6,3/5] tracing/bpf-trace: Add support for faultable tracepoints

Message ID 20240828144153.829582-4-mathieu.desnoyers@efficios.com (mailing list archive)
State Handled Elsewhere
Delegated to: BPF
Headers show
Series Faultable Tracepoints | expand

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Guessed tree name to be net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 3593 this patch: 8646
netdev/build_tools success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 12 maintainers not CCed: andrii@kernel.org sdf@fomichev.me eddyz87@gmail.com mattbobrowski@google.com haoluo@google.com jolsa@kernel.org daniel@iogearbox.net song@kernel.org yonghong.song@linux.dev kpsingh@kernel.org martin.lau@linux.dev john.fastabend@gmail.com
netdev/build_clang fail Errors and warnings before: 2771 this patch: 2971
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 2991 this patch: 8422
netdev/checkpatch warning WARNING: Argument 'assign' is not used in function-like macro WARNING: Argument 'print' is not used in function-like macro WARNING: Argument 'tstruct' is not used in function-like macro WARNING: Co-developed-by and Signed-off-by: name/email do not match WARNING: line length of 111 exceeds 80 columns WARNING: line length of 82 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-4 fail Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-llvm-17 / test
bpf/vmtest-bpf-next-VM_Test-21 fail Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-22 fail Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-llvm-18 / test
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-14 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-10 success Logs for s390x-gcc / test
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-8 fail Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-13 fail Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-17 fail Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-18 fail Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / test

Commit Message

Mathieu Desnoyers Aug. 28, 2024, 2:41 p.m. UTC
In preparation for converting system call enter/exit instrumentation
into faultable tracepoints, make sure that bpf can handle registering to
such tracepoints by explicitly disabling preemption within the bpf
tracepoint probes to respect the current expectations within bpf tracing
code.

This change does not yet allow bpf to take page faults per se within its
probe, but allows its existing probes to connect to faultable
tracepoints.

Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/
Co-developed-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: bpf@vger.kernel.org
Cc: Joel Fernandes <joel@joelfernandes.org>
---
Changes since v4:
- Use DEFINE_INACTIVE_GUARD.
- Add brackets to multiline 'if' statements.
Changes since v5:
- Rebased on v6.11-rc5.
- Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags.
---
 include/trace/bpf_probe.h | 21 ++++++++++++++++-----
 kernel/trace/bpf_trace.c  |  2 +-
 2 files changed, 17 insertions(+), 6 deletions(-)
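
For reference, this is roughly what the generated BPF probe stub looks like once the modified __BPF_DECLARE_TRACE macro is expanded for a faultable tracepoint class. The tracepoint name and arguments below (sys_enter_example, regs, id) are made up for illustration, the CAST_TO_U64() casts are simplified, and the DEFINE_INACTIVE_GUARD()/activate_guard() helpers come from an earlier patch in this series:

static notrace void
__bpf_trace_sys_enter_example(void *__data, struct pt_regs *regs, long id)
{
	DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard);

	if (TRACEPOINT_MAY_FAULT & TRACEPOINT_MAY_FAULT) {	/* constant true, folded by the compiler */
		might_fault();					/* assert we are in a sleepable context */
		activate_guard(preempt_notrace, bpf_trace_guard)();	/* preemption stays off until return */
	}

	bpf_trace_run2(__data, (u64)(unsigned long)regs, (u64)id);	/* simplified CAST_TO_U64(regs, id) */
}

For a non-faultable class tp_flags is 0, the 'if' is compiled out entirely, and the probe body is identical to what it was before this patch.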

Comments

Andrii Nakryiko Sept. 5, 2024, 1:21 a.m. UTC | #1
On Wed, Aug 28, 2024 at 7:42 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> In preparation for converting system call enter/exit instrumentation
> into faultable tracepoints, make sure that bpf can handle registering to
> such tracepoints by explicitly disabling preemption within the bpf
> tracepoint probes to respect the current expectations within bpf tracing
> code.
>
> This change does not yet allow bpf to take page faults per se within its
> probe, but allows its existing probes to connect to faultable
> tracepoints.
>
> Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/
> Co-developed-by: Michael Jeanson <mjeanson@efficios.com>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Yonghong Song <yhs@fb.com>
> Cc: Paul E. McKenney <paulmck@kernel.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: bpf@vger.kernel.org
> Cc: Joel Fernandes <joel@joelfernandes.org>
> ---
> Changes since v4:
> - Use DEFINE_INACTIVE_GUARD.
> - Add brackets to multiline 'if' statements.
> Changes since v5:
> - Rebased on v6.11-rc5.
> - Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags.
> ---
>  include/trace/bpf_probe.h | 21 ++++++++++++++++-----
>  kernel/trace/bpf_trace.c  |  2 +-
>  2 files changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> index a2ea11cc912e..cc96dd1e7c3d 100644
> --- a/include/trace/bpf_probe.h
> +++ b/include/trace/bpf_probe.h
> @@ -42,16 +42,27 @@
>  /* tracepoints with more than 12 arguments will hit build error */
>  #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
>
> -#define __BPF_DECLARE_TRACE(call, proto, args)                         \
> +#define __BPF_DECLARE_TRACE(call, proto, args, tp_flags)               \
>  static notrace void                                                    \
>  __bpf_trace_##call(void *__data, proto)                                        \
>  {                                                                      \
> -       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));        \
> +       DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard);        \
> +                                                                       \
> +       if ((tp_flags) & TRACEPOINT_MAY_FAULT) {                        \
> +               might_fault();                                          \
> +               activate_guard(preempt_notrace, bpf_trace_guard)();     \
> +       }                                                               \
> +                                                                       \
> +       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
>  }
>
>  #undef DECLARE_EVENT_CLASS
>  #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)
> +
> +#undef DECLARE_EVENT_CLASS_MAY_FAULT
> +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \
> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), TRACEPOINT_MAY_FAULT)
>
>  /*
>   * This part is compiled out, it is only here as a build time check
> @@ -105,13 +116,13 @@ static inline void bpf_test_buffer_##call(void)                           \
>
>  #undef DECLARE_TRACE
>  #define DECLARE_TRACE(call, proto, args)                               \
> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))          \
> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)       \
>         __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0)
>
>  #undef DECLARE_TRACE_WRITABLE
>  #define DECLARE_TRACE_WRITABLE(call, proto, args, size) \
>         __CHECK_WRITABLE_BUF_SIZE(call, PARAMS(proto), PARAMS(args), size) \
> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \
> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \
>         __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), size)
>
>  #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index c77eb80cbd7f..ed07283d505b 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2473,7 +2473,7 @@ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li
>
>         return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func,
>                                                     link, TRACEPOINT_DEFAULT_PRIO,
> -                                                   TRACEPOINT_MAY_EXIST);
> +                                                   TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT));
>  }
>
>  int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link)
> --
> 2.39.2
>
>

I wonder if it would be better to just do this, instead of that
preempt guard. I think we don't strictly need preemption to be
disabled, we just need to stay on the same CPU, just like we do that
for many other program types.

We'll need some more BPF-specific plumbing to fully support faultable
(sleepable) tracepoints, but this should unblock your work, unless I'm
missing something. And we can take it from there, once your patches
land, to take advantage of faultable tracepoints in the BPF ecosystem.

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index b69a39316c0c..415639b7c7a4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2302,7 +2302,8 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
        struct bpf_run_ctx *old_run_ctx;
        struct bpf_trace_run_ctx run_ctx;

-       cant_sleep();
+       migrate_disable();
+
        if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
                bpf_prog_inc_misses_counter(prog);
                goto out;
@@ -2318,6 +2319,8 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
        bpf_reset_run_ctx(old_run_ctx);
 out:
        this_cpu_dec(*(prog->active));
+
+       migrate_enable();
 }
Mathieu Desnoyers Sept. 9, 2024, 3:11 p.m. UTC | #2
On 2024-09-04 21:21, Andrii Nakryiko wrote:
> On Wed, Aug 28, 2024 at 7:42 AM Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com> wrote:
>>
>> In preparation for converting system call enter/exit instrumentation
>> into faultable tracepoints, make sure that bpf can handle registering to
>> such tracepoints by explicitly disabling preemption within the bpf
>> tracepoint probes to respect the current expectations within bpf tracing
>> code.
>>
>> This change does not yet allow bpf to take page faults per se within its
>> probe, but allows its existing probes to connect to faultable
>> tracepoints.
>>
>> Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/
>> Co-developed-by: Michael Jeanson <mjeanson@efficios.com>
>> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
>> Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
>> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>> Cc: Steven Rostedt <rostedt@goodmis.org>
>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Alexei Starovoitov <ast@kernel.org>
>> Cc: Yonghong Song <yhs@fb.com>
>> Cc: Paul E. McKenney <paulmck@kernel.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: bpf@vger.kernel.org
>> Cc: Joel Fernandes <joel@joelfernandes.org>
>> ---
>> Changes since v4:
>> - Use DEFINE_INACTIVE_GUARD.
>> - Add brackets to multiline 'if' statements.
>> Changes since v5:
>> - Rebased on v6.11-rc5.
>> - Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags.
>> ---
>>   include/trace/bpf_probe.h | 21 ++++++++++++++++-----
>>   kernel/trace/bpf_trace.c  |  2 +-
>>   2 files changed, 17 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
>> index a2ea11cc912e..cc96dd1e7c3d 100644
>> --- a/include/trace/bpf_probe.h
>> +++ b/include/trace/bpf_probe.h
>> @@ -42,16 +42,27 @@
>>   /* tracepoints with more than 12 arguments will hit build error */
>>   #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
>>
>> -#define __BPF_DECLARE_TRACE(call, proto, args)                         \
>> +#define __BPF_DECLARE_TRACE(call, proto, args, tp_flags)               \
>>   static notrace void                                                    \
>>   __bpf_trace_##call(void *__data, proto)                                        \
>>   {                                                                      \
>> -       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));        \
>> +       DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard);        \
>> +                                                                       \
>> +       if ((tp_flags) & TRACEPOINT_MAY_FAULT) {                        \
>> +               might_fault();                                          \
>> +               activate_guard(preempt_notrace, bpf_trace_guard)();     \
>> +       }                                                               \
>> +                                                                       \
>> +       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
>>   }
>>
>>   #undef DECLARE_EVENT_CLASS
>>   #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
>> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
>> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)
>> +
>> +#undef DECLARE_EVENT_CLASS_MAY_FAULT
>> +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \
>> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), TRACEPOINT_MAY_FAULT)
>>
>>   /*
>>    * This part is compiled out, it is only here as a build time check
>> @@ -105,13 +116,13 @@ static inline void bpf_test_buffer_##call(void)                           \
>>
>>   #undef DECLARE_TRACE
>>   #define DECLARE_TRACE(call, proto, args)                               \
>> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))          \
>> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)       \
>>          __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0)
>>
>>   #undef DECLARE_TRACE_WRITABLE
>>   #define DECLARE_TRACE_WRITABLE(call, proto, args, size) \
>>          __CHECK_WRITABLE_BUF_SIZE(call, PARAMS(proto), PARAMS(args), size) \
>> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \
>> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \
>>          __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), size)
>>
>>   #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>> index c77eb80cbd7f..ed07283d505b 100644
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -2473,7 +2473,7 @@ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li
>>
>>          return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func,
>>                                                      link, TRACEPOINT_DEFAULT_PRIO,
>> -                                                   TRACEPOINT_MAY_EXIST);
>> +                                                   TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT));
>>   }
>>
>>   int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link)
>> --
>> 2.39.2
>>
>>
> 
> I wonder if it would be better to just do this, instead of that
> preempt guard. I think we don't strictly need preemption to be
> disabled, we just need to stay on the same CPU, just like we do that
> for many other program types.

I'm worried about introducing any kind of subtle synchronization
change in this series, and moving from preempt-off to migrate-disable
definitely falls under that umbrella.

I would recommend auditing all uses of this_cpu_*() APIs to make sure
accesses to per-cpu data structures are using atomics and not just using
operations that expect use of preempt-off to prevent concurrent threads
from updating the per-cpu data concurrently.

So what you are suggesting may be a good idea, but I prefer to leave
this kind of change to a separate bpf-specific series, and I would
leave this work to someone who knows more about ebpf than me.

Thanks,

Mathieu
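
As a minimal illustration of the kind of per-cpu access pattern such an audit would need to distinguish (the per-CPU counter below is made up, not taken from bpf_trace.c):

static DEFINE_PER_CPU(int, my_counter);

static void probe_body(void)
{
	int v;

	/* Safe under either scheme: this_cpu_*() ops are atomic w.r.t. preemption. */
	this_cpu_inc(my_counter);

	/*
	 * Only safe with preemption disabled: under migrate_disable() alone,
	 * another task preempting us on the same CPU can interleave between
	 * the read and the write, and the increment is lost.
	 */
	v = __this_cpu_read(my_counter);
	__this_cpu_write(my_counter, v + 1);
}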

> 
> We'll need some more BPF-specific plumbing to fully support faultable
> (sleepable) tracepoints, but this should unblock your work, unless I'm
> missing something. And we can take it from there, once your patches
> land, to take advantage of faultable tracepoints in the BPF ecosystem.
> 
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index b69a39316c0c..415639b7c7a4 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2302,7 +2302,8 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
>          struct bpf_run_ctx *old_run_ctx;
>          struct bpf_trace_run_ctx run_ctx;
> 
> -       cant_sleep();
> +       migrate_disable();
> +
>          if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
>                  bpf_prog_inc_misses_counter(prog);
>                  goto out;
> @@ -2318,6 +2319,8 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
>          bpf_reset_run_ctx(old_run_ctx);
>   out:
>          this_cpu_dec(*(prog->active));
> +
> +       migrate_enable();
>   }
Andrii Nakryiko Sept. 9, 2024, 4:53 p.m. UTC | #3
On Mon, Sep 9, 2024 at 8:11 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> On 2024-09-04 21:21, Andrii Nakryiko wrote:
> > On Wed, Aug 28, 2024 at 7:42 AM Mathieu Desnoyers
> > <mathieu.desnoyers@efficios.com> wrote:
> >>
> >> In preparation for converting system call enter/exit instrumentation
> >> into faultable tracepoints, make sure that bpf can handle registering to
> >> such tracepoints by explicitly disabling preemption within the bpf
> >> tracepoint probes to respect the current expectations within bpf tracing
> >> code.
> >>
> >> This change does not yet allow bpf to take page faults per se within its
> >> probe, but allows its existing probes to connect to faultable
> >> tracepoints.
> >>
> >> Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/
> >> Co-developed-by: Michael Jeanson <mjeanson@efficios.com>
> >> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> >> Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
> >> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >> Cc: Steven Rostedt <rostedt@goodmis.org>
> >> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> >> Cc: Peter Zijlstra <peterz@infradead.org>
> >> Cc: Alexei Starovoitov <ast@kernel.org>
> >> Cc: Yonghong Song <yhs@fb.com>
> >> Cc: Paul E. McKenney <paulmck@kernel.org>
> >> Cc: Ingo Molnar <mingo@redhat.com>
> >> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> >> Cc: Mark Rutland <mark.rutland@arm.com>
> >> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> >> Cc: Namhyung Kim <namhyung@kernel.org>
> >> Cc: bpf@vger.kernel.org
> >> Cc: Joel Fernandes <joel@joelfernandes.org>
> >> ---
> >> Changes since v4:
> >> - Use DEFINE_INACTIVE_GUARD.
> >> - Add brackets to multiline 'if' statements.
> >> Changes since v5:
> >> - Rebased on v6.11-rc5.
> >> - Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags.
> >> ---
> >>   include/trace/bpf_probe.h | 21 ++++++++++++++++-----
> >>   kernel/trace/bpf_trace.c  |  2 +-
> >>   2 files changed, 17 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> >> index a2ea11cc912e..cc96dd1e7c3d 100644
> >> --- a/include/trace/bpf_probe.h
> >> +++ b/include/trace/bpf_probe.h
> >> @@ -42,16 +42,27 @@
> >>   /* tracepoints with more than 12 arguments will hit build error */
> >>   #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
> >>
> >> -#define __BPF_DECLARE_TRACE(call, proto, args)                         \
> >> +#define __BPF_DECLARE_TRACE(call, proto, args, tp_flags)               \
> >>   static notrace void                                                    \
> >>   __bpf_trace_##call(void *__data, proto)                                        \
> >>   {                                                                      \
> >> -       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));        \
> >> +       DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard);        \
> >> +                                                                       \
> >> +       if ((tp_flags) & TRACEPOINT_MAY_FAULT) {                        \
> >> +               might_fault();                                          \
> >> +               activate_guard(preempt_notrace, bpf_trace_guard)();     \
> >> +       }                                                               \
> >> +                                                                       \
> >> +       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
> >>   }
> >>
> >>   #undef DECLARE_EVENT_CLASS
> >>   #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
> >> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
> >> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)
> >> +
> >> +#undef DECLARE_EVENT_CLASS_MAY_FAULT
> >> +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \
> >> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), TRACEPOINT_MAY_FAULT)
> >>
> >>   /*
> >>    * This part is compiled out, it is only here as a build time check
> >> @@ -105,13 +116,13 @@ static inline void bpf_test_buffer_##call(void)                           \
> >>
> >>   #undef DECLARE_TRACE
> >>   #define DECLARE_TRACE(call, proto, args)                               \
> >> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))          \
> >> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)       \
> >>          __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0)
> >>
> >>   #undef DECLARE_TRACE_WRITABLE
> >>   #define DECLARE_TRACE_WRITABLE(call, proto, args, size) \
> >>          __CHECK_WRITABLE_BUF_SIZE(call, PARAMS(proto), PARAMS(args), size) \
> >> -       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \
> >> +       __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \
> >>          __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), size)
> >>
> >>   #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
> >> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> >> index c77eb80cbd7f..ed07283d505b 100644
> >> --- a/kernel/trace/bpf_trace.c
> >> +++ b/kernel/trace/bpf_trace.c
> >> @@ -2473,7 +2473,7 @@ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li
> >>
> >>          return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func,
> >>                                                      link, TRACEPOINT_DEFAULT_PRIO,
> >> -                                                   TRACEPOINT_MAY_EXIST);
> >> +                                                   TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT));
> >>   }
> >>
> >>   int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link)
> >> --
> >> 2.39.2
> >>
> >>
> >
> > I wonder if it would be better to just do this, instead of that
> > preempt guard. I think we don't strictly need preemption to be
> > disabled, we just need to stay on the same CPU, just like we do that
> > for many other program types.
>
> I'm worried about introducing any kind of subtle synchronization
> change in this series, and moving from preempt-off to migrate-disable
> definitely falls under that umbrella.
>
> I would recommend auditing all uses of this_cpu_*() APIs to make sure
> accesses to per-cpu data structures are using atomics and not just using
> operations that expect use of preempt-off to prevent concurrent threads
> from updating the per-cpu data concurrently.
>
> So what you are suggesting may be a good idea, but I prefer to leave
> this kind of change to a separate bpf-specific series, and I would
> leave this work to someone who knows more about ebpf than me.
>

Yeah, that's ok. migrate_disable() switch is probably going a bit too
far too fast, but I think we should just add
preempt_disable/preempt_enable inside __bpf_trace_run() instead of
leaving it inside those hard to find and follow tracepoint macros. So
maybe you can just pass a bool into __bpf_trace_run() and do preempt
guard (or explicit disable/enable) there?
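
A hypothetical shape of that suggestion (not from any posted patch; the may_fault parameter and the notrace preempt calls are assumptions used only for illustration):

static void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args,
			    bool may_fault)
{
	/* Hypothetical: only faultable tracepoints would pass may_fault = true. */
	if (may_fault) {
		might_fault();
		preempt_disable_notrace();
	}

	/* ... existing body: bump prog->active, set up run_ctx, run the
	 * BPF program, then unwind ... */

	if (may_fault)
		preempt_enable_notrace();
}

Mathieu's reply below explains why he prefers not to widen the __bpf_trace_run() signature this way.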

> Thanks,
>
> Mathieu
>
> >
> > We'll need some more BPF-specific plumbing to fully support faultable
> > (sleepable) tracepoints, but this should unblock your work, unless I'm
> > missing something. And we can take it from there, once your patches
> > land, to take advantage of faultable tracepoints in the BPF ecosystem.
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index b69a39316c0c..415639b7c7a4 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -2302,7 +2302,8 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
> >          struct bpf_run_ctx *old_run_ctx;
> >          struct bpf_trace_run_ctx run_ctx;
> >
> > -       cant_sleep();
> > +       migrate_disable();
> > +
> >          if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
> >                  bpf_prog_inc_misses_counter(prog);
> >                  goto out;
> > @@ -2318,6 +2319,8 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
> >          bpf_reset_run_ctx(old_run_ctx);
> >   out:
> >          this_cpu_dec(*(prog->active));
> > +
> > +       migrate_enable();
> >   }
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
>
Mathieu Desnoyers Sept. 9, 2024, 5:22 p.m. UTC | #4
On 2024-09-09 12:53, Andrii Nakryiko wrote:
> On Mon, Sep 9, 2024 at 8:11 AM Mathieu Desnoyers
[...]
>>>
>>> I wonder if it would be better to just do this, instead of that
>>> preempt guard. I think we don't strictly need preemption to be
>>> disabled, we just need to stay on the same CPU, just like we do that
>>> for many other program types.
>>
>> I'm worried about introducing any kind of subtle synchronization
>> change in this series, and moving from preempt-off to migrate-disable
>> definitely falls under that umbrella.
>>
>> I would recommend auditing all uses of this_cpu_*() APIs to make sure
>> accesses to per-cpu data structures are using atomics and not just using
>> operations that expect use of preempt-off to prevent concurrent threads
>> from updating the per-cpu data concurrently.
>>
>> So what you are suggesting may be a good idea, but I prefer to leave
>> this kind of change to a separate bpf-specific series, and I would
>> leave this work to someone who knows more about ebpf than me.
>>
> 
> Yeah, that's ok. migrate_disable() switch is probably going a bit too
> far too fast, but I think we should just add
> preempt_disable/preempt_enable inside __bpf_trace_run() instead of
> leaving it inside those hard to find and follow tracepoint macros. So
> maybe you can just pass a bool into __bpf_trace_run() and do preempt
> guard (or explicit disable/enable) there?
> 

Passing an extra boolean to __bpf_trace_run would impact all tracepoints
calling into ebpf, adding an extra function argument and extra tests for
all of those. The impact may be small, but it is non-zero in both code size
and overhead, so it would not be my preferred approach.

I have modified the macros to add the guard within __bpf_trace_##call
following suggestions from Linus:

   https://lore.kernel.org/lkml/CAHk-=wggDLDeTKbhb5hh--x=-DQd69v41137M72m6NOTmbD-cw@mail.gmail.com/

I'll Cc you on that version of the series.

Thanks,

Mathieu
Andrii Nakryiko Sept. 9, 2024, 8:13 p.m. UTC | #5
On Mon, Sep 9, 2024 at 10:22 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> On 2024-09-09 12:53, Andrii Nakryiko wrote:
> > On Mon, Sep 9, 2024 at 8:11 AM Mathieu Desnoyers
> [...]
> >>>
> >>> I wonder if it would be better to just do this, instead of that
> >>> preempt guard. I think we don't strictly need preemption to be
> >>> disabled, we just need to stay on the same CPU, just like we do that
> >>> for many other program types.
> >>
> >> I'm worried about introducing any kind of subtle synchronization
> >> change in this series, and moving from preempt-off to migrate-disable
> >> definitely falls under that umbrella.
> >>
> >> I would recommend auditing all uses of this_cpu_*() APIs to make sure
> >> accesses to per-cpu data structures are using atomics and not just using
> >> operations that expect use of preempt-off to prevent concurrent threads
> >> from updating the per-cpu data concurrently.
> >>
> >> So what you are suggesting may be a good idea, but I prefer to leave
> >> this kind of change to a separate bpf-specific series, and I would
> >> leave this work to someone who knows more about ebpf than me.
> >>
> >
> > Yeah, that's ok. migrate_disable() switch is probably going a bit too
> > far too fast, but I think we should just add
> > preempt_disable/preempt_enable inside __bpf_trace_run() instead of
> > leaving it inside those hard to find and follow tracepoint macros. So
> > maybe you can just pass a bool into __bpf_trace_run() and do preempt
> > guard (or explicit disable/enable) there?
> >
>
> Passing an extra boolean to __bpf_trace_run would impact all tracepoints
> calling into ebpf, adding an extra function argument and extra tests for
> all of those. The impact may be small, but it is non-zero in both code size
> and overhead, so it would not be my preferred approach.
>

Ok, sounds good to me, we can always change that after your patch set
makes it into upstream.

> I have modified the macros to add the guard within __bpf_trace_##call
> following suggestions from Linus:
>
>    https://lore.kernel.org/lkml/CAHk-=wggDLDeTKbhb5hh--x=-DQd69v41137M72m6NOTmbD-cw@mail.gmail.com/
>
> I'll Cc you on that version of the series.

Thanks!

>
> Thanks,
>
> Mathieu
>
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
>
diff mbox series

Patch

diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index a2ea11cc912e..cc96dd1e7c3d 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -42,16 +42,27 @@ 
 /* tracepoints with more than 12 arguments will hit build error */
 #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
 
-#define __BPF_DECLARE_TRACE(call, proto, args)				\
+#define __BPF_DECLARE_TRACE(call, proto, args, tp_flags)		\
 static notrace void							\
 __bpf_trace_##call(void *__data, proto)					\
 {									\
-	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));	\
+	DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard);	\
+									\
+	if ((tp_flags) & TRACEPOINT_MAY_FAULT) {			\
+		might_fault();						\
+		activate_guard(preempt_notrace, bpf_trace_guard)();	\
+	}								\
+									\
+	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
 }
 
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
-	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)
+
+#undef DECLARE_EVENT_CLASS_MAY_FAULT
+#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), TRACEPOINT_MAY_FAULT)
 
 /*
  * This part is compiled out, it is only here as a build time check
@@ -105,13 +116,13 @@  static inline void bpf_test_buffer_##call(void)				\
 
 #undef DECLARE_TRACE
 #define DECLARE_TRACE(call, proto, args)				\
-	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))		\
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)	\
 	__DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0)
 
 #undef DECLARE_TRACE_WRITABLE
 #define DECLARE_TRACE_WRITABLE(call, proto, args, size) \
 	__CHECK_WRITABLE_BUF_SIZE(call, PARAMS(proto), PARAMS(args), size) \
-	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \
 	__DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), size)
 
 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c77eb80cbd7f..ed07283d505b 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2473,7 +2473,7 @@  int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li
 
 	return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func,
 						    link, TRACEPOINT_DEFAULT_PRIO,
-						    TRACEPOINT_MAY_EXIST);
+						    TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT));
 }
 
 int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link)