From patchwork Wed Dec 13 18:17:45 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13491737
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 46A6059E29
	for ; Wed, 13 Dec 2023 18:20:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2797BC43391;
	Wed, 13 Dec 2023 18:20:39 +0000 (UTC)
Received: from rostedt by gandalf with local (Exim 4.97)
	(envelope-from )
	id 1rDTrI-00000002a9n-252N;
	Wed, 13 Dec 2023 13:21:24 -0500
Message-ID: <20231213182124.270751060@goodmis.org>
User-Agent: quilt/0.67
Date: Wed, 13 Dec 2023 13:17:45 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 Tzvetomir Stoyanov, Vincent Donnefort, Kent Overstreet
Subject: [PATCH v3 08/15] ring-buffer: Make sure the spare sub buffer used for reads has same size
References: <20231213181737.791847466@goodmis.org>
Precedence: bulk
X-Mailing-List: linux-trace-kernel@vger.kernel.org

From: "Steven Rostedt (Google)"

Now that the ring buffer specifies the size of its sub buffers, they all
need to be the same size. When doing a read, a swap is done with a spare
page. Make sure the spare page is the same size as the current sub buffer
before doing the swap; otherwise the read will fail.

Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2e201ef5b1ce..dd7669f6503b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7576,6 +7576,7 @@ struct ftrace_buffer_info {
 	struct trace_iterator	iter;
 	void			*spare;
 	unsigned int		spare_cpu;
+	unsigned int		spare_size;
 	unsigned int		read;
 };
 
@@ -8295,6 +8296,15 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
 
 	page_size = ring_buffer_subbuf_size_get(iter->array_buffer->buffer);
 
+	/* Make sure the spare matches the current sub buffer size */
+	if (info->spare) {
+		if (page_size != info->spare_size) {
+			ring_buffer_free_read_page(iter->array_buffer->buffer,
+						   info->spare_cpu, info->spare);
+			info->spare = NULL;
+		}
+	}
+
 	if (!info->spare) {
 		info->spare = ring_buffer_alloc_read_page(iter->array_buffer->buffer,
 							  iter->cpu_file);
@@ -8303,6 +8313,7 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
 			info->spare = NULL;
 		} else {
 			info->spare_cpu = iter->cpu_file;
+			info->spare_size = page_size;
 		}
 	}
 	if (!info->spare)
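
For reference, a minimal, self-contained userspace sketch (not part of the
patch) of the same invalidate-and-reallocate pattern the hunks above apply
to info->spare: a cached spare buffer is freed and reallocated whenever the
producer's sub-buffer size changes. The names spare_cache and
spare_cache_prepare() are hypothetical stand-ins for ftrace_buffer_info and
the code in tracing_buffers_read(); malloc()/free() stand in for
ring_buffer_alloc_read_page()/ring_buffer_free_read_page().

	#include <stdio.h>
	#include <stdlib.h>

	struct spare_cache {
		void   *spare;       /* cached page used for the read swap     */
		size_t  spare_size;  /* size the cached page was allocated for */
	};

	/* Ensure the cached spare matches the current sub-buffer size. */
	static int spare_cache_prepare(struct spare_cache *c, size_t page_size)
	{
		/* Stale spare: wrong size, so drop it before reallocating. */
		if (c->spare && c->spare_size != page_size) {
			free(c->spare);
			c->spare = NULL;
		}

		if (!c->spare) {
			c->spare = malloc(page_size);
			if (!c->spare)
				return -1;
			c->spare_size = page_size;
		}
		return 0;
	}

	int main(void)
	{
		struct spare_cache c = { 0 };

		/* First read uses a 4 KiB sub buffer ... */
		if (spare_cache_prepare(&c, 4096) == 0)
			printf("spare ready: %zu bytes\n", c.spare_size);

		/* ... later the sub-buffer size grows, so the old spare
		 * is freed and a matching one is allocated. */
		if (spare_cache_prepare(&c, 8192) == 0)
			printf("spare ready: %zu bytes\n", c.spare_size);

		free(c.spare);
		return 0;
	}

The design point mirrors the patch: without the size check, a spare page
allocated for an older sub-buffer size would be handed to the swap and the
read would fail, so the stale spare must be released first.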