Message ID | 20241003151638.1608537-9-mathieu.desnoyers@efficios.com (mailing list archive)
---|---
State | Superseded
Series | tracing: Allow system call tracepoints to handle page faults
Context | Check | Description
---|---|---
netdev/tree_selection | success | Not a local patch
On Thu, 3 Oct 2024 11:16:38 -0400
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:

> Add a might_fault() check to validate that the bpf sys_enter/sys_exit
> probe callbacks are indeed called from a context where page faults can
> be handled.
>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> Tested-by: Andrii Nakryiko <andrii@kernel.org> # BPF parts
> Cc: Michael Jeanson <mjeanson@efficios.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Yonghong Song <yhs@fb.com>
> Cc: Paul E. McKenney <paulmck@kernel.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: bpf@vger.kernel.org
> Cc: Joel Fernandes <joel@joelfernandes.org>
> ---
>  include/trace/bpf_probe.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> index 211b98d45fc6..099df5c3e38a 100644
> --- a/include/trace/bpf_probe.h
> +++ b/include/trace/bpf_probe.h
> @@ -57,6 +57,7 @@ __bpf_trace_##call(void *__data, proto) \
>  static notrace void \
>  __bpf_trace_##call(void *__data, proto) \
>  { \
> +	might_fault(); \

And I think this gets called at places that do not allow faults.

-- Steve

>  	guard(preempt_notrace)(); \
>  	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
>  }
On 2024-10-04 00:38, Steven Rostedt wrote:
> On Thu, 3 Oct 2024 11:16:38 -0400
> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
>
>> Add a might_fault() check to validate that the bpf sys_enter/sys_exit
>> probe callbacks are indeed called from a context where page faults can
>> be handled.
>>
>> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
>> Acked-by: Andrii Nakryiko <andrii@kernel.org>
>> Tested-by: Andrii Nakryiko <andrii@kernel.org> # BPF parts
>> Cc: Michael Jeanson <mjeanson@efficios.com>
>> Cc: Steven Rostedt <rostedt@goodmis.org>
>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Alexei Starovoitov <ast@kernel.org>
>> Cc: Yonghong Song <yhs@fb.com>
>> Cc: Paul E. McKenney <paulmck@kernel.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
>> Cc: bpf@vger.kernel.org
>> Cc: Joel Fernandes <joel@joelfernandes.org>
>> ---
>>  include/trace/bpf_probe.h | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
>> index 211b98d45fc6..099df5c3e38a 100644
>> --- a/include/trace/bpf_probe.h
>> +++ b/include/trace/bpf_probe.h
>> @@ -57,6 +57,7 @@ __bpf_trace_##call(void *__data, proto) \
>>  static notrace void \
>>  __bpf_trace_##call(void *__data, proto) \
>>  { \
>> +	might_fault(); \
>
> And I think this gets called at places that do not allow faults.
Context matters:

#undef DECLARE_EVENT_SYSCALL_CLASS
#define DECLARE_EVENT_SYSCALL_CLASS(call, proto, args, tstruct, assign, print) \
__DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
		      PARAMS(assign), PARAMS(print)) \
static notrace void \
perf_trace_##call(void *__data, proto) \
{ \
	u64 __count __attribute__((unused)); \
	struct task_struct *__task __attribute__((unused)); \
 \
	might_fault(); \
	guard(preempt_notrace)(); \
	do_perf_trace_##call(__data, args); \
}

Not an issue.

Thanks,

Mathieu

>
> -- Steve
>
>> 	guard(preempt_notrace)(); \
>> 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
>> }
>
diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index 211b98d45fc6..099df5c3e38a 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -57,6 +57,7 @@ __bpf_trace_##call(void *__data, proto) \
 static notrace void \
 __bpf_trace_##call(void *__data, proto) \
 { \
+	might_fault(); \
 	guard(preempt_notrace)(); \
 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
 }