
[v2] tracing: Avoid possible softlockup in tracing_iter_reset()

Message ID 20240827124654.3817443-1-zhengyejian@huaweicloud.com (mailing list archive)
State Accepted
Commit 49aa8a1f4d6800721c7971ed383078257f12e8f9
Series [v2] tracing: Avoid possible softlockup in tracing_iter_reset()

Commit Message

Zheng Yejian Aug. 27, 2024, 12:46 p.m. UTC
In __tracing_open(), when a max latency tracer has run on a CPU, the
start time of that CPU's buffer is updated, and event entries with
timestamps earlier than the buffer's start time are then skipped
(see tracing_iter_reset()).

A softlockup can occur if the kernel is non-preemptible and too many
entries are skipped in the loop that resets each cpu buffer, so add
cond_resched() to avoid it.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Zheng Yejian <zhengyejian@huaweicloud.com>
---
 kernel/trace/trace.c | 2 ++
 1 file changed, 2 insertions(+)

v2:
  - Change to add cond_resched() in tracing_iter_reset()
    Link: https://lore.kernel.org/all/20240826103522.390faa85@gandalf.local.home/
  - Update commit title and add suggested-by tag

v1: https://lore.kernel.org/all/20240824030343.3218618-1-zhengyejian@huaweicloud.com/

Patch

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ebe7ce2f5f4a..edf6bc817aa1 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3958,6 +3958,8 @@  void tracing_iter_reset(struct trace_iterator *iter, int cpu)
 			break;
 		entries++;
 		ring_buffer_iter_advance(buf_iter);
+		/* This could be a big loop */
+		cond_resched();
 	}
 
 	per_cpu_ptr(iter->array_buffer->data, cpu)->skipped_entries = entries;