From patchwork Fri Dec 15 17:55:08 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13494768
Message-ID: <20231215175544.955184798@goodmis.org>
User-Agent: quilt/0.67
Date: Fri, 15 Dec 2023 12:55:08 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 Tzvetomir Stoyanov, Vincent Donnefort, Kent Overstreet
Subject: [PATCH v4 06/15] ring-buffer: Clear pages on error in ring_buffer_subbuf_order_set() failure
References: <20231215175502.106587604@goodmis.org>
X-Mailing-List: linux-trace-kernel@vger.kernel.org

From: "Steven Rostedt (Google)"

On failure to allocate ring buffer pages, the pointer to the CPU buffer
pages is freed, but the pages that were allocated previously were not.
Make sure they are freed too.

Fixes: TBD ("tracing: Set new size of the ring buffer sub page")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 57b9d3f5f32e..dd03887a4737 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -5860,6 +5860,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 	for_each_buffer_cpu(buffer, cpu) {
 		if (!cpu_buffers[cpu])
 			continue;
+		rb_free_cpu_buffer(cpu_buffers[cpu]);
 		kfree(cpu_buffers[cpu]);
 	}
 	kfree(cpu_buffers);
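
For context, below is a minimal userspace sketch of the cleanup-on-error
pattern this patch applies. It is not kernel code: the demo_* names, the
malloc()/free() allocators, and the function layout are assumptions made
purely for illustration; the real error path uses rb_free_cpu_buffer(),
kfree(), and the cpu_buffers[] array exactly as shown in the hunk above.

#include <stdlib.h>

struct demo_cpu_buffer {
	void *pages;			/* backing pages for one CPU */
};

static struct demo_cpu_buffer *demo_alloc_cpu_buffer(size_t nbytes)
{
	struct demo_cpu_buffer *cb = calloc(1, sizeof(*cb));

	if (!cb)
		return NULL;
	cb->pages = malloc(nbytes);
	if (!cb->pages) {
		free(cb);
		return NULL;
	}
	return cb;
}

static void demo_free_cpu_buffer(struct demo_cpu_buffer *cb)
{
	free(cb->pages);	/* free the pages, not just the struct */
	free(cb);
}

/* Allocate one buffer per CPU; on any failure, undo everything so far. */
static int demo_set_order(struct demo_cpu_buffer **cpu_buffers, int nr_cpus,
			  size_t nbytes)
{
	int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		cpu_buffers[cpu] = demo_alloc_cpu_buffer(nbytes);
		if (!cpu_buffers[cpu])
			goto error;
	}
	return 0;

error:
	for (cpu = 0; cpu < nr_cpus; cpu++) {
		if (!cpu_buffers[cpu])
			continue;
		/*
		 * Freeing only the container struct here would leak the
		 * pages it still points at; tear down each partially
		 * built buffer with the same helper as the normal path.
		 */
		demo_free_cpu_buffer(cpu_buffers[cpu]);
		cpu_buffers[cpu] = NULL;
	}
	return -1;
}

The point the sketch illustrates is the one the commit message makes:
freeing only the per-CPU container on the error path leaks whatever that
container still references, so each partially allocated buffer has to be
released with the same teardown helper used on the normal free path.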