From patchwork Thu Jan 11 16:17:08 2024
Date: Thu, 11 Jan 2024 16:17:08 +0000
In-Reply-To: <20240111161712.1480333-1-vdonnefort@google.com>
Message-ID: <20240111161712.1480333-2-vdonnefort@google.com>
Subject: [PATCH v11 1/5] ring-buffer: Zero ring-buffer sub-buffers
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 Vincent Donnefort <vdonnefort@google.com>
In preparation for the ring-buffer memory mapping, where each subbuf will
be accessible to user-space, zero all the page allocations.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 173d2595ce2d..db73e326fa04 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1466,7 +1466,8 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 
 		list_add(&bpage->list, pages);
 
-		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu), mflags,
+		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
+					mflags | __GFP_ZERO,
 					cpu_buffer->buffer->subbuf_order);
 		if (!page)
 			goto free_pages;
@@ -1551,7 +1552,8 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 
 	cpu_buffer->reader_page = bpage;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, cpu_buffer->buffer->subbuf_order);
+	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
+				cpu_buffer->buffer->subbuf_order);
 	if (!page)
 		goto fail_free_reader;
 	bpage->page = page_address(page);
@@ -5525,7 +5527,8 @@ ring_buffer_alloc_read_page(struct trace_buffer *buffer, int cpu)
 	if (bpage->data)
 		goto out;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_NORETRY,
+	page = alloc_pages_node(cpu_to_node(cpu),
+				GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO,
 				cpu_buffer->buffer->subbuf_order);
 	if (!page) {
 		kfree(bpage);
From patchwork Thu Jan 11 16:17:09 2024
Date: Thu, 11 Jan 2024 16:17:09 +0000
In-Reply-To: <20240111161712.1480333-1-vdonnefort@google.com>
Message-ID: <20240111161712.1480333-3-vdonnefort@google.com>
Subject: [PATCH v11 2/5] ring-buffer: Introducing ring-buffer mapping functions
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 Vincent Donnefort <vdonnefort@google.com>

In preparation for allowing user-space to map a ring-buffer, add a set of
mapping functions:

  ring_buffer_{map,unmap}()
  ring_buffer_map_fault()

And controls on the ring-buffer:

  ring_buffer_map_get_reader()  /* swap reader and head */

Mapping the ring-buffer also involves:

  A unique ID for each subbuf of the ring-buffer, as they are currently
  only identified through their in-kernel VA.

  A meta-page, where ring-buffer statistics and a description of the
  current reader are stored.

The linear mapping exposes the meta-page, and each subbuf of the
ring-buffer, ordered following their unique ID, assigned during the first
mapping.

Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice-enabling functions will simply
memcpy the data instead of swapping subbufs.
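For orientation, the call order these entry points imply looks roughly
like the sketch below (illustrative only, not code from this patch; error
handling elided):

	ring_buffer_map(buffer, cpu);		/* pin subbufs, build the meta-page */

	/* from the file's fault handler, translate a file offset to a page: */
	page = ring_buffer_map_fault(buffer, cpu, pgoff);

	ring_buffer_map_get_reader(buffer, cpu);	/* swap reader and head */

	ring_buffer_unmap(buffer, cpu);		/* on last unmap: drop refs, free meta-page */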
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index fa802db216f9..0841ba8bab14 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -6,6 +6,8 @@
 #include <linux/seq_file.h>
 #include <linux/poll.h>
 
+#include <uapi/linux/trace_mmap.h>
+
 struct trace_buffer;
 struct ring_buffer_iter;
 
@@ -221,4 +223,9 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
 #define trace_rb_cpu_prepare	NULL
 #endif
 
+int ring_buffer_map(struct trace_buffer *buffer, int cpu);
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
+struct page *ring_buffer_map_fault(struct trace_buffer *buffer, int cpu,
+				   unsigned long pgoff);
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
new file mode 100644
index 000000000000..bde39a73ce65
--- /dev/null
+++ b/include/uapi/linux/trace_mmap.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _TRACE_MMAP_H_
+#define _TRACE_MMAP_H_
+
+#include <linux/types.h>
+
+/**
+ * struct trace_buffer_meta - Ring-buffer Meta-page description
+ * @entries:		Number of entries in the ring-buffer.
+ * @overrun:		Number of entries lost in the ring-buffer.
+ * @read:		Number of entries that have been read.
+ * @subbufs_touched:	Number of subbufs that have been filled.
+ * @subbufs_lost:	Number of subbufs lost to overrun.
+ * @subbufs_read:	Number of subbufs that have been read.
+ * @reader.lost_events:	Number of events lost at the time of the reader swap.
+ * @reader.id:		subbuf ID of the current reader. From 0 to @nr_subbufs - 1
+ * @reader.read:	Number of bytes read on the reader subbuf.
+ * @subbuf_size:	Size of each subbuf, including the header.
+ * @nr_subbufs:		Number of subbufs in the ring-buffer.
+ * @meta_page_size:	Size of this meta-page.
+ * @meta_struct_len:	Size of this structure.
+ */
+struct trace_buffer_meta {
+	unsigned long	entries;
+	unsigned long	overrun;
+	unsigned long	read;
+
+	unsigned long	subbufs_touched;
+	unsigned long	subbufs_lost;
+	unsigned long	subbufs_read;
+
+	struct {
+		unsigned long	lost_events;
+		__u32		id;
+		__u32		read;
+	} reader;
+
+	__u32		subbuf_size;
+	__u32		nr_subbufs;
+
+	__u32		meta_page_size;
+	__u32		meta_struct_len;
+};
+
+#endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index db73e326fa04..e9ff1c95e896 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -338,6 +338,7 @@ struct buffer_page {
 	local_t		 entries;	/* entries on this page */
 	unsigned long	 real_end;	/* real end of data */
 	unsigned	 order;		/* order of the page */
+	u32		 id;		/* ID for external mapping */
 	struct buffer_data_page *page;	/* Actual data page */
 };
 
@@ -484,6 +485,12 @@ struct ring_buffer_per_cpu {
 	u64				read_stamp;
 	/* pages removed since last reset */
 	unsigned long			pages_removed;
+
+	int				mapped;
+	struct mutex			mapping_lock;
+	unsigned long			*subbuf_ids;	/* ID to addr */
+	struct trace_buffer_meta	*meta_page;
+
 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
 	long				nr_pages_to_update;
 	struct list_head		new_pages; /* new pages to add */
@@ -1542,6 +1549,7 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 	init_irq_work(&cpu_buffer->irq_work.work, rb_wake_up_waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.full_waiters);
+	mutex_init(&cpu_buffer->mapping_lock);
 
 	bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
 			    GFP_KERNEL, cpu_to_node(cpu));
@@ -5160,6 +5168,23 @@ static void rb_clear_buffer_page(struct buffer_page *page)
 	page->read = 0;
 }
 
+static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+
+	WRITE_ONCE(meta->entries, local_read(&cpu_buffer->entries));
+	WRITE_ONCE(meta->overrun, local_read(&cpu_buffer->overrun));
+	WRITE_ONCE(meta->read, cpu_buffer->read);
+
+	WRITE_ONCE(meta->subbufs_touched, local_read(&cpu_buffer->pages_touched));
+	WRITE_ONCE(meta->subbufs_lost, local_read(&cpu_buffer->pages_lost));
+	WRITE_ONCE(meta->subbufs_read, local_read(&cpu_buffer->pages_read));
+
+	WRITE_ONCE(meta->reader.read, cpu_buffer->reader_page->read);
+	WRITE_ONCE(meta->reader.id, cpu_buffer->reader_page->id);
+	WRITE_ONCE(meta->reader.lost_events, cpu_buffer->lost_events);
+}
+
 static void
 rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
@@ -5204,6 +5229,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	cpu_buffer->lost_events = 0;
 	cpu_buffer->last_overrun = 0;
 
+	if (cpu_buffer->mapped)
+		rb_update_meta_page(cpu_buffer);
+
 	rb_head_page_activate(cpu_buffer);
 	cpu_buffer->pages_removed = 0;
 }
@@ -5418,6 +5446,11 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
 	cpu_buffer_a = buffer_a->buffers[cpu];
 	cpu_buffer_b = buffer_b->buffers[cpu];
 
+	if (READ_ONCE(cpu_buffer_a->mapped) || READ_ONCE(cpu_buffer_b->mapped)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	/* At least make sure the two buffers are somewhat the same */
 	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
 		goto out;
@@ -5682,7 +5715,8 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	 * Otherwise, we can simply swap the page with the one passed in.
 	 */
 	if (read || (len < (commit - read)) ||
-	    cpu_buffer->reader_page == cpu_buffer->commit_page) {
+	    cpu_buffer->reader_page == cpu_buffer->commit_page ||
+	    cpu_buffer->mapped) {
 		struct buffer_data_page *rpage = cpu_buffer->reader_page->page;
 		unsigned int rpos = read;
 		unsigned int pos = 0;
@@ -5901,6 +5935,11 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 
 		cpu_buffer = buffer->buffers[cpu];
 
+		if (cpu_buffer->mapped) {
+			err = -EBUSY;
+			goto error;
+		}
+
 		/* Update the number of pages to match the new size */
 		nr_pages = old_size * buffer->buffers[cpu]->nr_pages;
 		nr_pages = DIV_ROUND_UP(nr_pages, buffer->subbuf_size);
@@ -6002,6 +6041,295 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_subbuf_order_set);
 
+#define subbuf_page(off, start) \
+	virt_to_page((void *)(start + (off << PAGE_SHIFT)))
+
+#define foreach_subbuf_page(sub_order, start, page)		\
+	page = subbuf_page(0, (start));				\
+	for (int __off = 0; __off < (1 << (sub_order));		\
+	     __off++, page = subbuf_page(__off, (start)))
+
+static inline void subbuf_map_prepare(unsigned long subbuf_start, int order)
+{
+	struct page *page;
+
+	/*
+	 * When allocating order > 0 pages, only the first struct page has a
+	 * refcount > 1. Increasing the refcount here ensures none of the
+	 * struct pages composing the sub-buffer is freed when the mapping is
+	 * closed.
+	 */
+	foreach_subbuf_page(order, subbuf_start, page)
+		page_ref_inc(page);
+}
+
+static inline void subbuf_unmap(unsigned long subbuf_start, int order)
+{
+	struct page *page;
+
+	foreach_subbuf_page(order, subbuf_start, page) {
+		page_ref_dec(page);
+		page->mapping = NULL;
+	}
+}
+
+static void rb_free_subbuf_ids(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	int sub_id;
+
+	for (sub_id = 0; sub_id < cpu_buffer->nr_pages + 1; sub_id++)
+		subbuf_unmap(cpu_buffer->subbuf_ids[sub_id],
+			     cpu_buffer->buffer->subbuf_order);
+
+	kfree(cpu_buffer->subbuf_ids);
+	cpu_buffer->subbuf_ids = NULL;
+}
+
+static int rb_alloc_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	if (cpu_buffer->meta_page)
+		return 0;
+
+	cpu_buffer->meta_page = page_to_virt(alloc_page(GFP_USER | __GFP_ZERO));
+	if (!cpu_buffer->meta_page)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void rb_free_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	unsigned long addr = (unsigned long)cpu_buffer->meta_page;
+
+	virt_to_page((void *)addr)->mapping = NULL;
+	free_page(addr);
+	cpu_buffer->meta_page = NULL;
+}
+
+static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
+				   unsigned long *subbuf_ids)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
+	struct buffer_page *first_subbuf, *subbuf;
+	int id = 0;
+
+	subbuf_ids[id] = (unsigned long)cpu_buffer->reader_page->page;
+	subbuf_map_prepare(subbuf_ids[id], cpu_buffer->buffer->subbuf_order);
+	cpu_buffer->reader_page->id = id++;
+
+	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
+	do {
+		if (id >= nr_subbufs) {
+			WARN_ON(1);
+			break;
+		}
+
+		subbuf_ids[id] = (unsigned long)subbuf->page;
+		subbuf->id = id;
+		subbuf_map_prepare(subbuf_ids[id], cpu_buffer->buffer->subbuf_order);
+
+		rb_inc_page(&subbuf);
+		id++;
+	} while (subbuf != first_subbuf);
+
+	/* install subbuf ID to kern VA translation */
+	cpu_buffer->subbuf_ids = subbuf_ids;
+
+	meta->meta_page_size = PAGE_SIZE;
+	meta->meta_struct_len = sizeof(*meta);
+	meta->nr_subbufs = nr_subbufs;
+	meta->subbuf_size = cpu_buffer->buffer->subbuf_size + BUF_PAGE_HDR_SIZE;
+
+	rb_update_meta_page(cpu_buffer);
+}
+
+static inline struct ring_buffer_per_cpu *
+rb_get_mapped_buffer(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return ERR_PTR(-EINVAL);
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return ERR_PTR(-ENODEV);
+	}
+
+	return cpu_buffer;
+}
+
+static inline void rb_put_mapped_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	mutex_unlock(&cpu_buffer->mapping_lock);
+}
+
+int ring_buffer_map(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags, *subbuf_ids;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (cpu_buffer->mapped) {
+		if (cpu_buffer->mapped == INT_MAX)
+			err = -EBUSY;
+		else
+			WRITE_ONCE(cpu_buffer->mapped, cpu_buffer->mapped + 1);
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return err;
+	}
+
+	/* prevent another thread from changing buffer sizes */
+	mutex_lock(&buffer->mutex);
+
+	err = rb_alloc_meta_page(cpu_buffer);
+	if (err)
+		goto unlock;
+
+	/* subbuf_ids include the reader while nr_pages does not */
+	subbuf_ids = kzalloc(sizeof(*subbuf_ids) * (cpu_buffer->nr_pages + 1),
+			     GFP_KERNEL);
+	if (!subbuf_ids) {
+		rb_free_meta_page(cpu_buffer);
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	atomic_inc(&cpu_buffer->resize_disabled);
+
+	/*
+	 * Lock all readers to block any subbuf swap until the subbuf IDs are
+	 * assigned.
+	 */
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	rb_setup_ids_meta_page(cpu_buffer, subbuf_ids);
+
+	WRITE_ONCE(cpu_buffer->mapped, 1);
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+unlock:
+	mutex_unlock(&buffer->mutex);
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		err = -ENODEV;
+		goto unlock;
+	}
+
+	WRITE_ONCE(cpu_buffer->mapped, cpu_buffer->mapped - 1);
+	if (!cpu_buffer->mapped) {
+		/* Wait for the writer and readers to observe !mapped */
+		synchronize_rcu();
+
+		rb_free_subbuf_ids(cpu_buffer);
+		rb_free_meta_page(cpu_buffer);
+		atomic_dec(&cpu_buffer->resize_disabled);
+	}
+unlock:
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+/*
+ *   +--------------+  pgoff == 0
+ *   |   meta page  |
+ *   +--------------+  pgoff == 1
+ *   |  subbuffer 0 |
+ *   +--------------+  pgoff == 1 + (1 << subbuf_order)
+ *   |  subbuffer 1 |
+ *         ...
+ */
+struct page *ring_buffer_map_fault(struct trace_buffer *buffer, int cpu,
+				   unsigned long pgoff)
+{
+	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
+	unsigned long subbuf_id, subbuf_offset, addr;
+	struct page *page;
+
+	if (!pgoff)
+		return virt_to_page((void *)cpu_buffer->meta_page);
+
+	pgoff--;
+
+	subbuf_id = pgoff >> buffer->subbuf_order;
+	if (subbuf_id > cpu_buffer->nr_pages)
+		return NULL;
+
+	subbuf_offset = pgoff & ((1UL << buffer->subbuf_order) - 1);
+	addr = cpu_buffer->subbuf_ids[subbuf_id] + (subbuf_offset * PAGE_SIZE);
+	page = virt_to_page((void *)addr);
+
+	return page;
+}
+
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long reader_size;
+	unsigned long flags;
+
+	cpu_buffer = rb_get_mapped_buffer(buffer, cpu);
+	if (IS_ERR(cpu_buffer))
+		return (int)PTR_ERR(cpu_buffer);
+
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+consume:
+	if (rb_per_cpu_empty(cpu_buffer))
+		goto out;
+
+	reader_size = rb_page_size(cpu_buffer->reader_page);
+
+	/*
+	 * There is data to be read on the current reader page. We can return
+	 * to the caller, but we assume the caller will read everything, so
+	 * update the kernel reader accordingly first.
+	 */
+	if (cpu_buffer->reader_page->read < reader_size) {
+		while (cpu_buffer->reader_page->read < reader_size)
+			rb_advance_reader(cpu_buffer);
+		goto out;
+	}
+
+	if (WARN_ON(!rb_get_reader_page(cpu_buffer)))
+		goto out;
+
+	goto consume;
+out:
+	rb_update_meta_page(cpu_buffer);
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	rb_put_mapped_buffer(cpu_buffer);
+
+	return 0;
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in
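To make the layout comment and the fault translation concrete, here is a
worked example with assumed numbers (an illustrative sketch mirroring the
logic of ring_buffer_map_fault() above):

	/* Assume subbuf_order == 1 (two pages per subbuf) and a fault at
	 * pgoff == 4. */
	pgoff--;                                    /* skip the meta-page: pgoff == 3 */
	subbuf_id = pgoff >> subbuf_order;          /* 3 >> 1 == 1: subbuffer 1 */
	subbuf_offset = pgoff & ((1UL << subbuf_order) - 1);    /* 3 & 1 == 1 */
	addr = subbuf_ids[subbuf_id] + subbuf_offset * PAGE_SIZE;
	return virt_to_page((void *)addr);          /* second page of subbuffer 1 */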
From patchwork Thu Jan 11 16:17:10 2024
Date: Thu, 11 Jan 2024 16:17:10 +0000
In-Reply-To: <20240111161712.1480333-1-vdonnefort@google.com>
Message-ID: <20240111161712.1480333-4-vdonnefort@google.com>
Subject: [PATCH v11 3/5] tracing: Allow user-space mapping of the ring-buffer
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 Vincent Donnefort <vdonnefort@google.com>

Currently, user-space extracts data from the ring-buffer via splice, which
is handy for storage or network sharing. However, due to splice
limitations, it is impossible to do real-time analysis without a copy.

A solution for that problem is to let user-space map the ring-buffer
directly.

The mapping is exposed via the per-CPU file trace_pipe_raw. The first
element of the mapping is the meta-page. It is followed by each subbuffer
constituting the ring-buffer, ordered by their unique page ID:

  * Meta-page -- see include/uapi/linux/trace_mmap.h for a description
  * Subbuf ID 0
  * Subbuf ID 1
    ...

It is therefore easy to translate a subbuf ID into an offset in the
mapping:

  reader_id = meta->reader.id;
  reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;

When new data is available, the mapper must call a newly introduced ioctl:
TRACE_MMAP_IOCTL_GET_READER. This will update the meta-page reader ID to
point to the next reader containing unread data.
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index bde39a73ce65..a797891e3ba0 100644
--- a/include/uapi/linux/trace_mmap.h
+++ b/include/uapi/linux/trace_mmap.h
@@ -42,4 +42,6 @@ struct trace_buffer_meta {
 	__u32		meta_struct_len;
 };
 
+#define TRACE_MMAP_IOCTL_GET_READER		_IO('T', 0x1)
+
 #endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 46dbe22121e9..7bf6c2942aea 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6472,7 +6472,7 @@ static void tracing_set_nop(struct trace_array *tr)
 {
 	if (tr->current_trace == &nop_trace)
 		return;
-	
+
 	tr->current_trace->enabled--;
 
 	if (tr->current_trace->reset)
@@ -8583,15 +8583,31 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 	return ret;
 }
 
-/* An ioctl call with cmd 0 to the ring buffer file will wake up all waiters */
 static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
+	int err;
 
-	if (cmd)
-		return -ENOIOCTLCMD;
+	if (cmd == TRACE_MMAP_IOCTL_GET_READER) {
+		if (!(file->f_flags & O_NONBLOCK)) {
+			err = ring_buffer_wait(iter->array_buffer->buffer,
+					       iter->cpu_file,
+					       iter->tr->buffer_percent);
+			if (err)
+				return err;
+		}
+
+		return ring_buffer_map_get_reader(iter->array_buffer->buffer,
+						  iter->cpu_file);
+	} else if (cmd) {
+		return -ENOTTY;
+	}
 
+	/*
+	 * An ioctl call with cmd 0 to the ring buffer file will wake up all
+	 * waiters
+	 */
 	mutex_lock(&trace_types_lock);
 
 	iter->wait_index++;
@@ -8604,6 +8620,62 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 	return 0;
 }
 
+static vm_fault_t tracing_buffers_mmap_fault(struct vm_fault *vmf)
+{
+	struct ftrace_buffer_info *info = vmf->vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
+	struct page *page;
+
+	page = ring_buffer_map_fault(iter->array_buffer->buffer, iter->cpu_file,
+				     vmf->pgoff);
+	if (!page)
+		return ret;
+
+	get_page(page);
+	vmf->page = page;
+	vmf->page->mapping = vmf->vma->vm_file->f_mapping;
+	vmf->page->index = vmf->pgoff;
+
+	return 0;
+}
+
+static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	ring_buffer_unmap(iter->array_buffer->buffer, iter->cpu_file);
+}
+
+static void tracing_buffers_mmap_open(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	WARN_ON(ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file));
+}
+
+static const struct vm_operations_struct tracing_buffers_vmops = {
+	.open		= tracing_buffers_mmap_open,
+	.close		= tracing_buffers_mmap_close,
+	.fault		= tracing_buffers_mmap_fault,
+};
+
+static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = filp->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	if (vma->vm_flags & VM_WRITE)
+		return -EPERM;
+
+	vm_flags_mod(vma, VM_DONTCOPY | VM_DONTDUMP, VM_MAYWRITE);
+	vma->vm_ops = &tracing_buffers_vmops;
+
+	return ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file);
+}
+
 static const struct file_operations tracing_buffers_fops = {
 	.open		= tracing_buffers_open,
 	.read		= tracing_buffers_read,
@@ -8612,6 +8684,7 @@ static const struct file_operations tracing_buffers_fops = {
 	.splice_read	= tracing_buffers_splice_read,
 	.unlocked_ioctl	= tracing_buffers_ioctl,
 	.llseek		= no_llseek,
+	.mmap		= tracing_buffers_mmap,
 };
 
 static ssize_t
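Worth noting from the mmap handler above: the mapping is strictly
read-only. A short illustration (a sketch; 'fd' is assumed to be a
trace_pipe_raw descriptor opened O_RDONLY, 'len' a valid length):

	void *ok  = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);   /* accepted */
	void *bad = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	/* 'bad' fails: the VM_WRITE check in tracing_buffers_mmap() returns
	 * EPERM (a writable MAP_SHARED of the O_RDONLY fd would already be
	 * refused with EACCES by the core mm). */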
From patchwork Thu Jan 11 16:17:11 2024
Date: Thu, 11 Jan 2024 16:17:11 +0000
In-Reply-To: <20240111161712.1480333-1-vdonnefort@google.com>
Message-ID: <20240111161712.1480333-5-vdonnefort@google.com>
Subject: [PATCH v11 4/5] Documentation: tracing: Add ring-buffer mapping
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 Vincent Donnefort <vdonnefort@google.com>

It is now possible to mmap() a ring-buffer to stream its content. Add some
documentation and a code example.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index 5092d6c13af5..0b300901fd75 100644
--- a/Documentation/trace/index.rst
+++ b/Documentation/trace/index.rst
@@ -29,6 +29,7 @@ Linux Tracing Technologies
    timerlat-tracer
    intel_th
    ring-buffer-design
+   ring-buffer-map
    stm
    sys-t
    coresight/index
diff --git a/Documentation/trace/ring-buffer-map.rst b/Documentation/trace/ring-buffer-map.rst
new file mode 100644
index 000000000000..2ba7b5339178
--- /dev/null
+++ b/Documentation/trace/ring-buffer-map.rst
@@ -0,0 +1,105 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+Tracefs ring-buffer memory mapping
+==================================
+
+:Author: Vincent Donnefort <vdonnefort@google.com>
+
+Overview
+========
+The Tracefs ring-buffer memory map provides an efficient method to stream
+data, as no memory copy is necessary. The application mapping the
+ring-buffer then becomes a consumer of that ring-buffer, in a similar
+fashion to trace_pipe.
+
+Memory mapping setup
+====================
+The mapping works with a mmap() of the trace_pipe_raw interface.
+
+The first system page of the mapping contains ring-buffer statistics and
+a description. It is referred to as the meta-page. One of the most
+important fields of the meta-page is the reader. It contains the subbuf ID
+which can be safely read by the mapper (see ring-buffer-design.rst).
+
+The meta-page is followed by all the subbufs, ordered by ascending ID. It
+is therefore effortless to know where the reader starts in the mapping:
+
+.. code-block:: c
+
+        reader_id = meta->reader.id;
+        reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;
+
+When the application is done with the current reader, it can get a new one
+using the trace_pipe_raw ioctl() TRACE_MMAP_IOCTL_GET_READER. This ioctl
+also updates the meta-page fields.
+
+Limitations
+===========
+When a mapping is in place on a Tracefs ring-buffer, it is not possible to
+resize it (either by increasing the entire size of the ring-buffer or the
+size of each subbuf). It is also not possible to use snapshot or splice.
+
+Concurrent readers (either another application mapping that ring-buffer or
+the kernel with trace_pipe) are allowed but not recommended. They will
+compete for the ring-buffer and the output is unpredictable.
+Example
+=======
+
+.. code-block:: c
+
+        #include <fcntl.h>
+        #include <stdio.h>
+        #include <stdlib.h>
+        #include <unistd.h>
+
+        #include <linux/trace_mmap.h>
+
+        #include <sys/mman.h>
+        #include <sys/ioctl.h>
+
+        #define TRACE_PIPE_RAW "/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw"
+
+        int main(void)
+        {
+                int page_size = getpagesize(), fd, reader_id;
+                unsigned long meta_len, data_len;
+                struct trace_buffer_meta *meta;
+                void *map, *reader, *data;
+
+                fd = open(TRACE_PIPE_RAW, O_RDONLY);
+                if (fd < 0)
+                        exit(EXIT_FAILURE);
+
+                map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);
+                if (map == MAP_FAILED)
+                        exit(EXIT_FAILURE);
+
+                meta = (struct trace_buffer_meta *)map;
+                meta_len = meta->meta_page_size;
+
+                printf("entries:        %lu\n", meta->entries);
+                printf("overrun:        %lu\n", meta->overrun);
+                printf("read:           %lu\n", meta->read);
+                printf("subbufs_touched:%lu\n", meta->subbufs_touched);
+                printf("subbufs_lost:   %lu\n", meta->subbufs_lost);
+                printf("subbufs_read:   %lu\n", meta->subbufs_read);
+                printf("nr_subbufs:     %u\n", meta->nr_subbufs);
+
+                data_len = meta->subbuf_size * meta->nr_subbufs;
+                data = mmap(NULL, data_len, PROT_READ, MAP_SHARED, fd, meta_len);
+                if (data == MAP_FAILED)
+                        exit(EXIT_FAILURE);
+
+                if (ioctl(fd, TRACE_MMAP_IOCTL_GET_READER) < 0)
+                        exit(EXIT_FAILURE);
+
+                reader_id = meta->reader.id;
+                reader = data + meta->subbuf_size * reader_id;
+
+                munmap(data, data_len);
+                munmap(meta, meta_len);
+                close(fd);
+
+                return 0;
+        }
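Extending the documentation example into a continuous consumer is mostly a
matter of looping on the ioctl (a sketch reusing 'fd', 'meta' and 'data'
from the example above; the parsing step is left as an assumption):

	for (;;) {
		/* Blocks until unread data is available, then publishes the
		 * new reader subbuf ID in the meta-page. */
		if (ioctl(fd, TRACE_MMAP_IOCTL_GET_READER) < 0)
			break;

		reader_id = meta->reader.id;
		reader = data + meta->subbuf_size * reader_id;
		/* hand the events in 'reader' to a parser */
	}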
From patchwork Thu Jan 11 16:17:12 2024
Date: Thu, 11 Jan 2024 16:17:12 +0000
In-Reply-To: <20240111161712.1480333-1-vdonnefort@google.com>
Message-ID: <20240111161712.1480333-6-vdonnefort@google.com>
Subject: [PATCH v11 5/5] ring-buffer/selftest: Add ring-buffer mapping test
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 Vincent Donnefort <vdonnefort@google.com>, Shuah Khan <shuah@kernel.org>,
 Shuah Khan <skhan@linuxfoundation.org>, linux-kselftest@vger.kernel.org

This test maps a ring-buffer and validates the meta-page after a reset and
after emitting a few events.

Cc: Shuah Khan <shuah@kernel.org>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: linux-kselftest@vger.kernel.org
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

diff --git a/tools/testing/selftests/ring-buffer/Makefile b/tools/testing/selftests/ring-buffer/Makefile
new file mode 100644
index 000000000000..627c5fa6d1ab
--- /dev/null
+++ b/tools/testing/selftests/ring-buffer/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -Wl,-no-as-needed -Wall
+CFLAGS += $(KHDR_INCLUDES)
+CFLAGS += -D_GNU_SOURCE
+
+TEST_GEN_PROGS = map_test
+
+include ../lib.mk
diff --git a/tools/testing/selftests/ring-buffer/config b/tools/testing/selftests/ring-buffer/config
new file mode 100644
index 000000000000..ef8214661612
--- /dev/null
+++ b/tools/testing/selftests/ring-buffer/config
@@ -0,0 +1 @@
+CONFIG_FTRACE=y
diff --git a/tools/testing/selftests/ring-buffer/map_test.c b/tools/testing/selftests/ring-buffer/map_test.c
new file mode 100644
index 000000000000..49107e8da5e9
--- /dev/null
+++ b/tools/testing/selftests/ring-buffer/map_test.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Ring-buffer memory mapping tests
+ *
+ * Copyright (c) 2024 Vincent Donnefort <vdonnefort@google.com>
+ */
+#include <fcntl.h>
+#include <sched.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include <linux/trace_mmap.h>
+
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include "../user_events/user_events_selftests.h" /* share tracefs setup */
+#include "../kselftest_harness.h"
+
+#define TRACEFS_ROOT "/sys/kernel/tracing"
+
+static int __tracefs_write(const char *path, const char *value)
+{
+	FILE *file;
+
+	file = fopen(path, "w");
+	if (!file)
+		return -1;
+
+	fputs(value, file);
+	fclose(file);
+
+	return 0;
+}
+
+static int __tracefs_write_int(const char *path, int value)
+{
+	char *str;
+	int ret;
+
+	if (asprintf(&str, "%d", value) < 0)
+		return -1;
+
+	ret = __tracefs_write(path, str);
+
+	free(str);
+
+	return ret;
+}
+
+#define tracefs_write_int(path, value) \
+	ASSERT_EQ(__tracefs_write_int((path), (value)), 0)
+
+static int tracefs_reset(void)
+{
+	if (__tracefs_write_int(TRACEFS_ROOT"/tracing_on", 0))
+		return -1;
+	if (__tracefs_write_int(TRACEFS_ROOT"/trace", 0))
+		return -1;
+	if (__tracefs_write(TRACEFS_ROOT"/set_event", ""))
+		return -1;
+
+	return 0;
+}
+
+FIXTURE(map) {
+	struct trace_buffer_meta *meta;
+	void *data;
+	int cpu_fd;
+	bool umount;
+};
+
+FIXTURE_VARIANT(map) {
+	int subbuf_size;
+};
+
+FIXTURE_VARIANT_ADD(map, subbuf_size_4k) {
+	.subbuf_size = 4,
+};
+
+FIXTURE_VARIANT_ADD(map, subbuf_size_8k) {
+	.subbuf_size = 8,
+};
+
+FIXTURE_SETUP(map)
+{
+	int cpu = sched_getcpu(), page_size = getpagesize();
+	unsigned long meta_len, data_len;
+	char *cpu_path, *message;
+	bool fail, umount;
+	cpu_set_t cpu_mask;
+	void *map;
+
+	if (!tracefs_enabled(&message, &fail, &umount)) {
+		if (fail) {
+			TH_LOG("Tracefs setup failed: %s", message);
+			ASSERT_FALSE(fail);
+		}
+		SKIP(return, "Skipping: %s", message);
+	}
+
+	self->umount = umount;
+
+	ASSERT_GE(cpu, 0);
+
+	ASSERT_EQ(tracefs_reset(), 0);
+
+	tracefs_write_int(TRACEFS_ROOT"/buffer_subbuf_size_kb", variant->subbuf_size);
+
+	ASSERT_GE(asprintf(&cpu_path,
+			   TRACEFS_ROOT"/per_cpu/cpu%d/trace_pipe_raw",
+			   cpu), 0);
+
+	self->cpu_fd = open(cpu_path, O_RDONLY | O_NONBLOCK);
+	ASSERT_GE(self->cpu_fd, 0);
+	free(cpu_path);
+
+	map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, self->cpu_fd, 0);
+	ASSERT_NE(map, MAP_FAILED);
+	self->meta = (struct trace_buffer_meta *)map;
+
+	meta_len = self->meta->meta_page_size;
+	data_len = self->meta->subbuf_size * self->meta->nr_subbufs;
+
+	map = mmap(NULL, data_len, PROT_READ, MAP_SHARED, self->cpu_fd, meta_len);
+	ASSERT_NE(map, MAP_FAILED);
+	self->data = map;
+
+	/*
+	 * Ensure generated events will be found on this very same ring-buffer.
+	 */
+	CPU_ZERO(&cpu_mask);
+	CPU_SET(cpu, &cpu_mask);
+	ASSERT_EQ(sched_setaffinity(0, sizeof(cpu_mask), &cpu_mask), 0);
+}
+
+FIXTURE_TEARDOWN(map)
+{
+	tracefs_reset();
+
+	if (self->umount)
+		tracefs_unmount();
+
+	munmap(self->data, self->meta->subbuf_size * self->meta->nr_subbufs);
+	munmap(self->meta, self->meta->meta_page_size);
+	close(self->cpu_fd);
+}
+
+TEST_F(map, meta_page_check)
+{
+	int cnt = 0;
+
+	ASSERT_EQ(self->meta->entries, 0);
+	ASSERT_EQ(self->meta->overrun, 0);
+	ASSERT_EQ(self->meta->read, 0);
+	ASSERT_EQ(self->meta->subbufs_touched, 0);
+	ASSERT_EQ(self->meta->subbufs_lost, 0);
+	ASSERT_EQ(self->meta->subbufs_read, 0);
+
+	ASSERT_EQ(self->meta->reader.id, 0);
+	ASSERT_EQ(self->meta->reader.read, 0);
+
+	ASSERT_EQ(ioctl(self->cpu_fd, TRACE_MMAP_IOCTL_GET_READER), 0);
+	ASSERT_EQ(self->meta->reader.id, 0);
+
+	tracefs_write_int(TRACEFS_ROOT"/tracing_on", 1);
+	for (int i = 0; i < 16; i++)
+		tracefs_write_int(TRACEFS_ROOT"/trace_marker", i);
+again:
+	ASSERT_EQ(ioctl(self->cpu_fd, TRACE_MMAP_IOCTL_GET_READER), 0);
+
+	ASSERT_EQ(self->meta->entries, 16);
+	ASSERT_EQ(self->meta->overrun, 0);
+	ASSERT_EQ(self->meta->read, 16);
+	/* subbufs_touched doesn't take into account the commit page */
+	ASSERT_EQ(self->meta->subbufs_touched, 0);
+	ASSERT_EQ(self->meta->subbufs_lost, 0);
+	ASSERT_EQ(self->meta->subbufs_read, 1);
+
+	ASSERT_EQ(self->meta->reader.id, 1);
+
+	if (!(cnt++))
+		goto again;
+}
+
+TEST_HARNESS_MAIN
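The selftest plugs into the standard kselftest machinery via ../lib.mk
above, so it should build and run with the usual flow from a kernel tree,
e.g. make -C tools/testing/selftests TARGETS=ring-buffer run_tests (the
generic kselftest invocation; assumed here, not stated in the patch).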