ring-buffer: Only update pages_touched when a new page is touched

Message ID 20240409151309.0d0e5056@gandalf.local.home (mailing list archive)
State Accepted
Commit ffe3986fece696cf65e0ef99e74c75f848be8e30
Series ring-buffer: Only update pages_touched when a new page is touched

Commit Message

Steven Rostedt April 9, 2024, 7:13 p.m. UTC
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>

The "buffer_percent" logic that is used by the ring buffer splice code to
only wake up the tasks when there's no data after the buffer is filled to
the percentage of the "buffer_percent" file is dependent on three
variables that determine the amount of data that is in the ring buffer:

 1) pages_read - incremented whenever a new sub-buffer is consumed
 2) pages_lost - incremented every time a writer overwrites a sub-buffer
 3) pages_touched - incremented when a write goes to a new sub-buffer

The percentage is the calculation of:

  (pages_touched - (pages_lost + pages_read)) / nr_pages

Basically, the amount of data is the total number of sub-bufs that have been
touched, minus the number of sub-bufs lost and sub-bufs consumed. This is
divided by the total count to give the buffer percentage. When the
percentage is greater than the value in the "buffer_percent" file, it
wakes up splice readers waiting for that amount.
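
As a rough sketch of how that check can be expressed (the helper name below
is made up and the body is condensed; the real code avoids a division by
comparing dirty * 100 against the percent times nr_pages):

  /* Sketch: does this per-cpu buffer hold at least "full" percent of data? */
  static bool buffer_full_enough(struct ring_buffer_per_cpu *cpu_buffer, int full)
  {
          size_t touched = local_read(&cpu_buffer->pages_touched);
          size_t lost    = local_read(&cpu_buffer->pages_lost);
          size_t read    = local_read(&cpu_buffer->pages_read);
          size_t dirty   = touched - (lost + read);

          /* dirty / nr_pages > full / 100, rewritten without division */
          return dirty * 100 > (size_t)full * cpu_buffer->nr_pages;
  }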

It was observed that the amount read from the splice steadily decreased the
longer the trace ran. That is, if one asked for 60%, the reader would get
over 60% when tracing first started, but over time it would be woken up at
under 60%, and the amount of data read per wakeup kept shrinking until it
was far less than the requested buffer percent.

This was due to where the pages_touched increment was placed. The value is
supposed to be incremented whenever a writer moves on to a new sub-buffer.
If a writer overflows the current sub-buffer it advances to the next one.
If it gets preempted at that point by an interrupt that also performs a
trace, the interrupt ends up advancing to that same next sub-buffer. Only
one of the two should increment the counter, but both did, so pages_touched
over-counted and the computed amount of data in the buffer drifted higher
than reality, which is why the wakeups came earlier and earlier.
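
Condensed, the pre-fix flow looked roughly like this (a sketch of the two
relevant lines, not the full function), which is where the double count
comes from:

  /*
   * Both the writer and the preempting interrupt reach here with the
   * same tail_page/next_page pair, so this runs twice ...
   */
  local_inc(&cpu_buffer->pages_touched);

  /* ... but only one of the two cmpxchg() calls can move the tail page. */
  (void)cmpxchg(&cpu_buffer->tail_page, tail_page, next_page);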

Change the cmpxchg() that does the real switch of the tail page into a
try_cmpxchg(), and increment pages_touched only on success. The counter is
then incremented exactly once when the writer moves to a new sub-buffer,
and not a second time when a writer and the interrupt that preempted it
race to move to the same new sub-buffer.
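
For reference, try_cmpxchg() returns true when the exchange happened (and on
failure writes the current value back through its second argument), so the
change is equivalent to checking the cmpxchg() return value by hand; a
minimal sketch of the two forms:

  /* Old form: compare the returned old value explicitly. */
  if (cmpxchg(&cpu_buffer->tail_page, tail_page, next_page) == tail_page)
          local_inc(&cpu_buffer->pages_touched);

  /* New form: try_cmpxchg() folds the comparison into the boolean result. */
  if (try_cmpxchg(&cpu_buffer->tail_page, &tail_page, next_page))
          local_inc(&cpu_buffer->pages_touched);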

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ring_buffer.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

Masami Hiramatsu (Google) April 9, 2024, 11:44 p.m. UTC | #1
On Tue, 9 Apr 2024 15:13:09 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
> 
> The "buffer_percent" logic that is used by the ring buffer splice code to
> only wake up the tasks when there's no data after the buffer is filled to
> the percentage of the "buffer_percent" file is dependent on three
> variables that determine the amount of data that is in the ring buffer:
> 
>  1) pages_read - incremented whenever a new sub-buffer is consumed
>  2) pages_lost - incremented every time a writer overwrites a sub-buffer
>  3) pages_touched - incremented when a write goes to a new sub-buffer
> 
> The percentage is the calculation of:
> 
>   (pages_touched - (pages_lost + pages_read)) / nr_pages
> 
> Basically, the amount of data is the total number of sub-bufs that have been
> touched, minus the number of sub-bufs lost and sub-bufs consumed. This is
> divided by the total count to give the buffer percentage. When the
> percentage is greater than the value in the "buffer_percent" file, it
> wakes up splice readers waiting for that amount.
> 
> It was observed that the amount read from the splice steadily decreased the
> longer the trace ran. That is, if one asked for 60%, the reader would get
> over 60% when tracing first started, but over time it would be woken up at
> under 60%, and the amount of data read per wakeup kept shrinking until it
> was far less than the requested buffer percent.
> 
> This was due to where the pages_touched increment was placed. The value is
> supposed to be incremented whenever a writer moves on to a new sub-buffer.
> If a writer overflows the current sub-buffer it advances to the next one.
> If it gets preempted at that point by an interrupt that also performs a
> trace, the interrupt ends up advancing to that same next sub-buffer. Only
> one of the two should increment the counter, but both did, so pages_touched
> over-counted and the computed amount of data in the buffer drifted higher
> than reality, which is why the wakeups came earlier and earlier.
> 
> Change the cmpxchg() that does the real switch of the tail page into a
> try_cmpxchg(), and increment pages_touched only on success. The counter is
> then incremented exactly once when the writer moves to a new sub-buffer,
> and not a second time when a writer and the interrupt that preempted it
> race to move to the same new sub-buffer.
> 

Looks good to me.

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

BTW, isn't this a real bugfix, because pages_touched can become bigger
than nr_pages without this fix?

Thank you,

> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
>  kernel/trace/ring_buffer.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 25476ead681b..6511dc3a00da 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1393,7 +1393,6 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
>  	old_write = local_add_return(RB_WRITE_INTCNT, &next_page->write);
>  	old_entries = local_add_return(RB_WRITE_INTCNT, &next_page->entries);
>  
> -	local_inc(&cpu_buffer->pages_touched);
>  	/*
>  	 * Just make sure we have seen our old_write and synchronize
>  	 * with any interrupts that come in.
> @@ -1430,8 +1429,9 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
>  		 */
>  		local_set(&next_page->page->commit, 0);
>  
> -		/* Again, either we update tail_page or an interrupt does */
> -		(void)cmpxchg(&cpu_buffer->tail_page, tail_page, next_page);
> +		/* Either we update tail_page or an interrupt does */
> +		if (try_cmpxchg(&cpu_buffer->tail_page, &tail_page, next_page))
> +			local_inc(&cpu_buffer->pages_touched);
>  	}
>  }
>  
> -- 
> 2.43.0
>
Steven Rostedt April 10, 2024, 12:14 a.m. UTC | #2
On Wed, 10 Apr 2024 08:44:00 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> Looks good to me.
> 
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks.

> 
> BTW, isn't this a real bugfix, because pages_touched can become bigger
> than nr_pages without this fix?

Yes, I simply forgot to add the Cc stable.

-- Steve

Patch

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 25476ead681b..6511dc3a00da 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1393,7 +1393,6 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
 	old_write = local_add_return(RB_WRITE_INTCNT, &next_page->write);
 	old_entries = local_add_return(RB_WRITE_INTCNT, &next_page->entries);
 
-	local_inc(&cpu_buffer->pages_touched);
 	/*
 	 * Just make sure we have seen our old_write and synchronize
 	 * with any interrupts that come in.
@@ -1430,8 +1429,9 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
 		 */
 		local_set(&next_page->page->commit, 0);
 
-		/* Again, either we update tail_page or an interrupt does */
-		(void)cmpxchg(&cpu_buffer->tail_page, tail_page, next_page);
+		/* Either we update tail_page or an interrupt does */
+		if (try_cmpxchg(&cpu_buffer->tail_page, &tail_page, next_page))
+			local_inc(&cpu_buffer->pages_touched);
 	}
 }