Message ID | 20240312131952.802267543@goodmis.org (mailing list archive)
---|---
State | Accepted |
Commit | e36f19a6457b2c0dfa4a7d19153ef0fda4bf5634 |
Series | ring-buffer: Fix poll wakeup logic
On Tue, 12 Mar 2024 09:19:21 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
>
> The check for knowing if the poll should wait or not is basically the
> exact same logic as rb_watermark_hit(). The only difference is that
> rb_watermark_hit() also handles the !full case. But for the full case, the
> logic is the same. Just call that instead of duplicating the code in
> ring_buffer_poll_wait().

This changes the behavior a bit (e.g. adding the pagebusy check), but
basically that check should be there. And the new version appears to be
consistent between ring_buffer_wait() and ring_buffer_poll_wait(). So it
looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you,

> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
>  kernel/trace/ring_buffer.c | 21 +++++++--------------
>  1 file changed, 7 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index adfe603a769b..857803e8cf07 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -959,25 +959,18 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
>  	}
>
>  	if (full) {
> -		unsigned long flags;
> -
>  		poll_wait(filp, &rbwork->full_waiters, poll_table);
>
> -		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> -		if (!cpu_buffer->shortest_full ||
> -		    cpu_buffer->shortest_full > full)
> -			cpu_buffer->shortest_full = full;
> -		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
> -		if (full_hit(buffer, cpu, full))
> +		if (rb_watermark_hit(buffer, cpu, full))
>  			return EPOLLIN | EPOLLRDNORM;
>  		/*
>  		 * Only allow full_waiters_pending update to be seen after
> -		 * the shortest_full is set. If the writer sees the
> -		 * full_waiters_pending flag set, it will compare the
> -		 * amount in the ring buffer to shortest_full. If the amount
> -		 * in the ring buffer is greater than the shortest_full
> -		 * percent, it will call the irq_work handler to wake up
> -		 * this list. The irq_handler will reset shortest_full
> +		 * the shortest_full is set (in rb_watermark_hit). If the
> +		 * writer sees the full_waiters_pending flag set, it will
> +		 * compare the amount in the ring buffer to shortest_full.
> +		 * If the amount in the ring buffer is greater than the
> +		 * shortest_full percent, it will call the irq_work handler
> +		 * to wake up this list. The irq_handler will reset shortest_full
>  		 * back to zero. That's done under the reader_lock, but
>  		 * the below smp_mb() makes sure that the update to
>  		 * full_waiters_pending doesn't leak up into the above.
> --
> 2.43.0
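For context, the consolidated check now called from ring_buffer_poll_wait() looks roughly like the sketch below. It is reconstructed from the logic the patch removes plus the pagebusy check discussed in the follow-up, not quoted from the tree, so treat the exact structure (in particular the RING_BUFFER_ALL_CPUS branch) as an assumption:

```c
/* Sketch of rb_watermark_hit(), reconstructed from this thread; not verbatim. */
static bool rb_watermark_hit(struct trace_buffer *buffer, int cpu, int full)
{
	struct ring_buffer_per_cpu *cpu_buffer;
	bool ret = false;

	/* Assumption: waiting across all CPUs only cares that any data exists. */
	if (cpu == RING_BUFFER_ALL_CPUS)
		return !ring_buffer_empty(buffer);

	cpu_buffer = buffer->buffers[cpu];

	if (!ring_buffer_empty_cpu(buffer, cpu)) {
		unsigned long flags;
		bool pagebusy;

		/* The !full case only needs data to be present. */
		if (!full)
			return true;

		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
		/* Writer still on the reader page: no sub-buffers available. */
		pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
		ret = !pagebusy && full_hit(buffer, cpu, full);

		/* Track the lowest percentage any waiter is waiting for. */
		if (!cpu_buffer->shortest_full ||
		    cpu_buffer->shortest_full > full)
			cpu_buffer->shortest_full = full;
		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
	}
	return ret;
}
```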
On Wed, 13 Mar 2024 00:38:42 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> On Tue, 12 Mar 2024 09:19:21 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
> > From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
> >
> > The check for knowing if the poll should wait or not is basically the
> > exact same logic as rb_watermark_hit(). The only difference is that
> > rb_watermark_hit() also handles the !full case. But for the full case, the
> > logic is the same. Just call that instead of duplicating the code in
> > ring_buffer_poll_wait().
>
> This changes the behavior a bit (e.g. adding the pagebusy check), but
> basically that check should be there. And the new version appears to be
> consistent between ring_buffer_wait() and ring_buffer_poll_wait(). So it
> looks good to me.

The pagebusy check is an optimization. If it is true, it means the writer
is still on the reader_page and there are no sub-buffers available. It
just avoids having to calculate the percentage of the buffer that is
filled (what the full_hit() logic does).

> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks!

-- Steve
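In code form, the optimization Steve describes is just the short-circuit from the sketch above: a cheap pointer comparison that skips the percentage calculation entirely when it cannot possibly succeed.

```c
/* If the writer has not moved off the reader page, no sub-buffer has been
 * handed to readers yet, so the watermark cannot be met; skip full_hit().
 */
pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
ret = !pagebusy && full_hit(buffer, cpu, full);
```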
```diff
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index adfe603a769b..857803e8cf07 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -959,25 +959,18 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
 	}
 
 	if (full) {
-		unsigned long flags;
-
 		poll_wait(filp, &rbwork->full_waiters, poll_table);
 
-		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
-		if (!cpu_buffer->shortest_full ||
-		    cpu_buffer->shortest_full > full)
-			cpu_buffer->shortest_full = full;
-		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
-		if (full_hit(buffer, cpu, full))
+		if (rb_watermark_hit(buffer, cpu, full))
 			return EPOLLIN | EPOLLRDNORM;
 		/*
 		 * Only allow full_waiters_pending update to be seen after
-		 * the shortest_full is set. If the writer sees the
-		 * full_waiters_pending flag set, it will compare the
-		 * amount in the ring buffer to shortest_full. If the amount
-		 * in the ring buffer is greater than the shortest_full
-		 * percent, it will call the irq_work handler to wake up
-		 * this list. The irq_handler will reset shortest_full
+		 * the shortest_full is set (in rb_watermark_hit). If the
+		 * writer sees the full_waiters_pending flag set, it will
+		 * compare the amount in the ring buffer to shortest_full.
+		 * If the amount in the ring buffer is greater than the
+		 * shortest_full percent, it will call the irq_work handler
+		 * to wake up this list. The irq_handler will reset shortest_full
 		 * back to zero. That's done under the reader_lock, but
 		 * the below smp_mb() makes sure that the update to
 		 * full_waiters_pending doesn't leak up into the above.
```
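The comment the patch rewrites is about the pairing between the poll side and the writer's wakeup path. A simplified sketch of the two sides follows; the poll side continues the hunk above, while the writer side is paraphrased from the comment's description and should be treated as an assumption, not a quote:

```c
/* Poll side: ring_buffer_poll_wait(), continuing after the comment above. */
smp_mb();	/* order the shortest_full store (done under reader_lock in
		 * rb_watermark_hit) before the flag store below */
rbwork->full_waiters_pending = true;
return 0;	/* no events yet; the writer will wake this waiter */

/* Writer side (sketch): on commit, if a poller is waiting and its lowest
 * recorded watermark has been crossed, punt the wakeup to irq_work context.
 */
if (rbwork->full_waiters_pending &&
    full_hit(buffer, cpu, cpu_buffer->shortest_full)) {
	rbwork->full_waiters_pending = false;
	irq_work_queue(&rbwork->work);	/* handler wakes the full_waiters
					 * list and resets shortest_full */
}
```

From userspace, this pairing is the mechanism behind poll(2) on a per-CPU trace_pipe_raw file blocking until the tracefs buffer_percent watermark is met.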