| Message ID | 20240111161712.1480333-2-vdonnefort@google.com (mailing list archive) |
| --- | --- |
| State | Superseded |
| Series | Introducing trace buffer mapping by user-space |
On Thu, 11 Jan 2024 16:17:08 +0000
Vincent Donnefort <vdonnefort@google.com> wrote:

> In preparation for the ring-buffer memory mapping where each subbuf will
> be accessible to user-space, zero all the page allocations.
>
> Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

Looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you!

>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 173d2595ce2d..db73e326fa04 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1466,7 +1466,8 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
>
>  		list_add(&bpage->list, pages);
>
> -		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu), mflags,
> +		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
> +					mflags | __GFP_ZERO,
>  					cpu_buffer->buffer->subbuf_order);
>  		if (!page)
>  			goto free_pages;
> @@ -1551,7 +1552,8 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
>
>  	cpu_buffer->reader_page = bpage;
>
> -	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, cpu_buffer->buffer->subbuf_order);
> +	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
> +				cpu_buffer->buffer->subbuf_order);
>  	if (!page)
>  		goto fail_free_reader;
>  	bpage->page = page_address(page);
> @@ -5525,7 +5527,8 @@ ring_buffer_alloc_read_page(struct trace_buffer *buffer, int cpu)
>  	if (bpage->data)
>  		goto out;
>
> -	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_NORETRY,
> +	page = alloc_pages_node(cpu_to_node(cpu),
> +				GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO,
>  				cpu_buffer->buffer->subbuf_order);
>  	if (!page) {
>  		kfree(bpage);
> --
> 2.43.0.275.g3460e3d667-goog
>
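To make the pattern at the heart of the patch concrete, here is a minimal, hypothetical sketch of a zeroed, NUMA-local page allocation in the style used throughout ring_buffer.c; the helper name `alloc_user_visible_buf` is invented for illustration and does not appear in the patch:

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/topology.h>

/*
 * Hypothetical helper (not from the patch) showing the pattern it
 * applies: any page that may later be mapped into user-space is
 * allocated with __GFP_ZERO, so stale kernel data cannot leak
 * through the mapping.
 */
static void *alloc_user_visible_buf(int cpu, unsigned int order)
{
	struct page *page;

	/* NUMA-local allocation, zero-filled before first use */
	page = alloc_pages_node(cpu_to_node(cpu),
				GFP_KERNEL | __GFP_ZERO, order);
	if (!page)
		return NULL;

	/* Return the kernel virtual address of the allocated pages */
	return page_address(page);
}
```

Without `__GFP_ZERO`, the page allocator hands back pages with whatever contents they last held, which is harmless for kernel-internal buffers that are fully written before being read, but not for memory about to be exposed to user-space.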
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 173d2595ce2d..db73e326fa04 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1466,7 +1466,8 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 
 		list_add(&bpage->list, pages);
 
-		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu), mflags,
+		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
+					mflags | __GFP_ZERO,
 					cpu_buffer->buffer->subbuf_order);
 		if (!page)
 			goto free_pages;
@@ -1551,7 +1552,8 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 
 	cpu_buffer->reader_page = bpage;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, cpu_buffer->buffer->subbuf_order);
+	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
+				cpu_buffer->buffer->subbuf_order);
 	if (!page)
 		goto fail_free_reader;
 	bpage->page = page_address(page);
@@ -5525,7 +5527,8 @@ ring_buffer_alloc_read_page(struct trace_buffer *buffer, int cpu)
 	if (bpage->data)
 		goto out;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_NORETRY,
+	page = alloc_pages_node(cpu_to_node(cpu),
+				GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO,
 				cpu_buffer->buffer->subbuf_order);
 	if (!page) {
 		kfree(bpage);
In preparation for the ring-buffer memory mapping where each subbuf will be accessible to user-space, zero all the page allocations.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
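As a rough illustration of what the flag guarantees, a freshly allocated range must now read back as all zeroes. A sketch of such a sanity check, using the kernel's `memchr_inv()` (the helper itself is hypothetical and not part of the patch):

```c
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Illustrative check, not from the patch: with __GFP_ZERO the whole
 * allocated range reads back as zeroes. memchr_inv() returns NULL
 * when every byte in the range equals the given value.
 */
static bool subbuf_is_zeroed(const void *addr, unsigned int order)
{
	return memchr_inv(addr, 0, PAGE_SIZE << order) == NULL;
}
```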