| Message ID | 20230615204931.3250659-1-zhengyejian1@huawei.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | [5.10] tracing: Add tracing_reset_all_online_cpus_unlocked() function |
On Fri, Jun 16, 2023 at 04:49:31AM +0800, Zheng Yejian wrote:
> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
>
> commit e18eb8783ec4949adebc7d7b0fdb65f65bfeefd9 upstream.
>
> Currently the tracing_reset_all_online_cpus() requires the
> trace_types_lock held. But only one caller of this function actually has
> that lock held before calling it, and the other just takes the lock so
> that it can call it. More users of this function is needed where the lock
> is not held.
>
> Add a tracing_reset_all_online_cpus_unlocked() function for the one use
> case that calls it without being held, and also add a lockdep_assert to
> make sure it is held when called.
>
> Then have tracing_reset_all_online_cpus() take the lock internally, such
> that callers do not need to worry about taking it.
>
> Link: https://lkml.kernel.org/r/20221123192741.658273220@goodmis.org
>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Zheng Yejian <zhengyejian1@huawei.com>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> [this patch is pre-depended by be111ebd8868d4b7c041cb3c6102e1ae27d6dc1d
> due to tracing_reset_all_online_cpus() should be called after taking lock]
> Fixes: be111ebd8868 ("tracing: Free buffers when a used dynamic event is removed")
> Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
> ---

What about for 5.15.y? You can't apply a fix to just an older tree as
you will then have a regression when you update.

I'll drop this one from my queue, please resend a backport for all
relevant stable releases.

thanks,

greg k-h
On 2023/6/19 16:26, Greg KH wrote:
> On Fri, Jun 16, 2023 at 04:49:31AM +0800, Zheng Yejian wrote:
>> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
>>
>> commit e18eb8783ec4949adebc7d7b0fdb65f65bfeefd9 upstream.
>>
>> [...]
>>
>> Fixes: be111ebd8868 ("tracing: Free buffers when a used dynamic event is removed")
>> Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
>> ---
>
> What about for 5.15.y? You can't apply a fix to just an older tree as
> you will then have a regression when you update.
>
> I'll drop this one from my queue, please resend a backport for all
> relevant stable releases.

Hi, greg, I have resent the patch to the relevant stable releases:

5.15.y: https://lore.kernel.org/all/20230620013052.1127047-1-zhengyejian1@huawei.com/
5.10.y: https://lore.kernel.org/all/20230620013104.1127100-1-zhengyejian1@huawei.com/
5.4.y: https://lore.kernel.org/all/20230620013113.1127152-1-zhengyejian1@huawei.com/

---
Thanks,
Zheng Yejian

> thanks,
>
> greg k-h
```diff
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 482ec6606b7b..70526400e05c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2178,10 +2178,12 @@ void tracing_reset_online_cpus(struct array_buffer *buf)
 }
 
 /* Must have trace_types_lock held */
-void tracing_reset_all_online_cpus(void)
+void tracing_reset_all_online_cpus_unlocked(void)
 {
 	struct trace_array *tr;
 
+	lockdep_assert_held(&trace_types_lock);
+
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (!tr->clear_trace)
 			continue;
@@ -2193,6 +2195,13 @@ void tracing_reset_all_online_cpus(void)
 	}
 }
 
+void tracing_reset_all_online_cpus(void)
+{
+	mutex_lock(&trace_types_lock);
+	tracing_reset_all_online_cpus_unlocked();
+	mutex_unlock(&trace_types_lock);
+}
+
 /*
  * The tgid_map array maps from pid to tgid; i.e. the value stored at index i
  * is the tgid last observed corresponding to pid=i.
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 37f616bf5fa9..e5b505b5b7d0 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -725,6 +725,7 @@ int tracing_is_enabled(void);
 void tracing_reset_online_cpus(struct array_buffer *buf);
 void tracing_reset_current(int cpu);
 void tracing_reset_all_online_cpus(void);
+void tracing_reset_all_online_cpus_unlocked(void);
 int tracing_open_generic(struct inode *inode, struct file *filp);
 int tracing_open_generic_tr(struct inode *inode, struct file *filp);
 bool tracing_is_disabled(void);
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index bac13f24a96e..f8ed66f38175 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2661,7 +2661,7 @@ static void trace_module_remove_events(struct module *mod)
 	 * over from this module may be passed to the new module events and
 	 * unexpected results may occur.
 	 */
-	tracing_reset_all_online_cpus();
+	tracing_reset_all_online_cpus_unlocked();
 }
 
 static int trace_module_notify(struct notifier_block *self,
diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
index 18291ab35657..ee174de0b8f6 100644
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -1363,7 +1363,6 @@ int synth_event_delete(const char *event_name)
 	mutex_unlock(&event_mutex);
 
 	if (mod) {
-		mutex_lock(&trace_types_lock);
 		/*
 		 * It is safest to reset the ring buffer if the module
 		 * being unloaded registered any events that were
@@ -1375,7 +1374,6 @@ int synth_event_delete(const char *event_name)
 		 * occur.
 		 */
 		tracing_reset_all_online_cpus();
-		mutex_unlock(&trace_types_lock);
 	}
 
 	return ret;
```
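The locking refactor in the diff above follows a common pattern: keep an `_unlocked` worker that asserts the lock is held, and wrap it in a public function that takes the lock internally so callers need not manage it. A minimal userspace sketch of that pattern (not kernel code: the pthread mutex, the `held` flag standing in for lockdep tracking, and the `reset_count` placeholder body are all illustrative assumptions):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t trace_types_lock = PTHREAD_MUTEX_INITIALIZER;
static bool trace_types_lock_held;  /* stand-in for lockdep's held-lock tracking */
static int reset_count;             /* stand-in for the real per-array reset work */

/* Caller must already hold trace_types_lock. */
static void tracing_reset_all_online_cpus_unlocked(void)
{
	assert(trace_types_lock_held);  /* userspace analogue of lockdep_assert_held() */
	reset_count++;                  /* ... walk and reset the trace arrays ... */
}

/* Public variant: takes the lock internally, as in the upstream commit. */
static void tracing_reset_all_online_cpus(void)
{
	pthread_mutex_lock(&trace_types_lock);
	trace_types_lock_held = true;
	tracing_reset_all_online_cpus_unlocked();
	trace_types_lock_held = false;
	pthread_mutex_unlock(&trace_types_lock);
}
```

This is why the hunk in `trace_events_synth.c` can drop its explicit lock/unlock pair around `tracing_reset_all_online_cpus()`, while `trace_module_remove_events()`, whose caller already holds the lock, switches to the `_unlocked` variant.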