Message ID: 20250108090457.512198-26-bigeasy@linutronix.de (mailing list archive)
State: New
Series: module: Use RCU instead of RCU-sched.
On Wed, Jan 8, 2025 at 1:05 AM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
>
> __module_address() can be invoked within a RCU section, there is no
> requirement to have preemption disabled.
>
> Replace the preempt_disable() section around __module_address() with
> RCU.
>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Hao Luo <haoluo@google.com>
> Cc: Jiri Olsa <jolsa@kernel.org>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: KP Singh <kpsingh@kernel.org>
> Cc: Martin KaFai Lau <martin.lau@linux.dev>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Cc: Matt Bobrowski <mattbobrowski@google.com>
> Cc: Song Liu <song@kernel.org>
> Cc: Stanislav Fomichev <sdf@fomichev.me>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Yonghong Song <yonghong.song@linux.dev>
> Cc: bpf@vger.kernel.org
> Cc: linux-trace-kernel@vger.kernel.org
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  kernel/trace/bpf_trace.c | 19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 1b8db5aee9d38..020df7b6ff90c 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2336,10 +2336,9 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
>  {
>  	struct module *mod;
>
> -	preempt_disable();
> +	guard(rcu)();
>  	mod = __module_address((unsigned long)btp);
>  	module_put(mod);
> -	preempt_enable();
>  }
>
>  static __always_inline
> @@ -2907,16 +2906,14 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u3
>  	for (i = 0; i < addrs_cnt; i++) {
>  		struct module *mod;
>
> -		preempt_disable();
> -		mod = __module_address(addrs[i]);
> -		/* Either no module or we it's already stored */
> -		if (!mod || has_module(&arr, mod)) {
> -			preempt_enable();
> -			continue;
> +		scoped_guard(rcu) {
> +			mod = __module_address(addrs[i]);
> +			/* Either no module or we it's already stored */
> +			if (!mod || has_module(&arr, mod))
> +				continue;
> +			if (!try_module_get(mod))
> +				err = -EINVAL;

lgtm. Should we take into bpf-next or the whole set is handled together somewhere?
On 2025-01-09 10:38:03 [-0800], Alexei Starovoitov wrote:
> lgtm.
> Should we take into bpf-next or the whole set is handled together
> somewhere?

If you don't mind, I would hope to route the whole series via the
modules tree. Some of the lower functions (__module_address()) check for
disabled preemption and will trigger warnings at runtime if this gets
applied before (earlier in the series) the check gets replaced.

Sebastian
On Thu, Jan 9, 2025 at 12:54 PM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
>
> On 2025-01-09 10:38:03 [-0800], Alexei Starovoitov wrote:
> > lgtm.
> > Should we take into bpf-next or the whole set is handled together
> > somewhere?
>
> If you don't mind, I would hope to route the whole series via the
> modules tree. Some of the lower functions (__module_address()) check for
> disabled preemption and will trigger warnings at runtime if this gets
> applied before (earlier in the series) the check gets replaced.

I see. Then

Acked-by: Alexei Starovoitov <ast@kernel.org>
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 1b8db5aee9d38..020df7b6ff90c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2336,10 +2336,9 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
 {
 	struct module *mod;
 
-	preempt_disable();
+	guard(rcu)();
 	mod = __module_address((unsigned long)btp);
 	module_put(mod);
-	preempt_enable();
 }
 
 static __always_inline
@@ -2907,16 +2906,14 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u3
 	for (i = 0; i < addrs_cnt; i++) {
 		struct module *mod;
 
-		preempt_disable();
-		mod = __module_address(addrs[i]);
-		/* Either no module or we it's already stored */
-		if (!mod || has_module(&arr, mod)) {
-			preempt_enable();
-			continue;
+		scoped_guard(rcu) {
+			mod = __module_address(addrs[i]);
+			/* Either no module or we it's already stored */
+			if (!mod || has_module(&arr, mod))
+				continue;
+			if (!try_module_get(mod))
+				err = -EINVAL;
 		}
-		if (!try_module_get(mod))
-			err = -EINVAL;
-		preempt_enable();
 		if (err)
 			break;
 		err = add_module(&arr, mod);