From patchwork Fri Jan 5 09:47:28 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13511899
Date: Fri, 5 Jan 2024 09:47:28 +0000
In-Reply-To: <20240105094729.2363579-1-vdonnefort@google.com>
References: <20240105094729.2363579-1-vdonnefort@google.com>
Message-ID: <20240105094729.2363579-2-vdonnefort@google.com>
Subject: [PATCH v10 1/2] ring-buffer: Introducing ring-buffer mapping functions
From: Vincent Donnefort
To: rostedt@goodmis.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org
Cc: mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    kernel-team@android.com, Vincent Donnefort

In preparation for allowing user-space to map a ring-buffer, add a
set of mapping functions:

  ring_buffer_{map,unmap}()
  ring_buffer_map_fault()

And controls on the ring-buffer:

  ring_buffer_map_get_reader()  /* swap reader and head */

Mapping the ring-buffer also involves:

  A unique ID for each subbuf of the ring-buffer, as subbufs are
  currently only identified through their in-kernel VA.

  A meta-page, where the ring-buffer statistics and a description of
  the current reader are stored.

The linear mapping exposes the meta-page, followed by each subbuf of
the ring-buffer, ordered by their unique ID, assigned during the first
mapping.

Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice-enabling functions will in
reality simply memcpy the data instead of swapping subbufs.

Signed-off-by: Vincent Donnefort
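For illustration, a consumer of this new API would be expected to
sequence the calls roughly as below. This is a minimal sketch only: the
function and its use here are hypothetical, and the real consumer is
wired into trace_pipe_raw by patch 2/2.

  /*
   * Hypothetical consumer of the mapping API (illustration only).
   */
  static int example_map_and_read(struct trace_buffer *buffer, int cpu)
  {
          struct page *page;
          int err;

          /* Pin the layout: no resize, no subbuf leaving the ring-buffer. */
          err = ring_buffer_map(buffer, cpu);
          if (err)
                  return err;

          /* A VMA fault handler resolves pgoff 0 to the meta-page... */
          page = ring_buffer_map_fault(buffer, cpu, 0);
          if (!page) {
                  err = -EINVAL;
                  goto unmap;
          }

          /* ...and an ioctl path swaps in a reader subbuf with unread data. */
          err = ring_buffer_map_get_reader(buffer, cpu);
  unmap:
          ring_buffer_unmap(buffer, cpu);
          return err;
  }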
diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index fa802db216f9..0841ba8bab14 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -6,6 +6,8 @@
 #include <linux/seq_file.h>
 #include <linux/poll.h>
 
+#include <uapi/linux/trace_mmap.h>
+
 struct trace_buffer;
 struct ring_buffer_iter;
 
@@ -221,4 +223,9 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
 #define trace_rb_cpu_prepare	NULL
 #endif
 
+int ring_buffer_map(struct trace_buffer *buffer, int cpu);
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
+struct page *ring_buffer_map_fault(struct trace_buffer *buffer, int cpu,
+				   unsigned long pgoff);
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
new file mode 100644
index 000000000000..f950648b0ba9
--- /dev/null
+++ b/include/uapi/linux/trace_mmap.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_TRACE_MMAP_H_
+#define _UAPI_TRACE_MMAP_H_
+
+#include <linux/types.h>
+
+struct trace_buffer_meta {
+	unsigned long	entries;
+	unsigned long	overrun;
+	unsigned long	read;
+
+	unsigned long	subbufs_touched;
+	unsigned long	subbufs_lost;
+	unsigned long	subbufs_read;
+
+	struct {
+		unsigned long	lost_events;	/* Events lost at the time of the reader swap */
+		__u32		id;		/* Reader subbuf ID from 0 to nr_subbufs - 1 */
+		__u32		read;		/* Number of bytes read on the reader subbuf */
+	} reader;
+
+	__u32	subbuf_size;		/* Size of each subbuf including the header */
+	__u32	nr_subbufs;		/* Number of subbufs in the ring-buffer */
+
+	__u32	meta_page_size;		/* Size of the meta-page */
+	__u32	meta_struct_len;	/* Len of this struct */
+};
+
+#endif /* _UAPI_TRACE_MMAP_H_ */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 173d2595ce2d..63426e525781 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -338,6 +338,7 @@ struct buffer_page {
 	local_t		 entries;	/* entries on this page */
 	unsigned long	 real_end;	/* real end of data */
 	unsigned	 order;		/* order of the page */
+	u32		 id;		/* ID for external mapping */
 	struct buffer_data_page *page;	/* Actual data page */
 };
 
@@ -388,6 +389,7 @@ struct rb_irq_work {
 	bool				waiters_pending;
 	bool				full_waiters_pending;
 	bool				wakeup_full;
+	bool				is_cpu_buffer;
 };
 
 /*
@@ -484,6 +486,12 @@ struct ring_buffer_per_cpu {
 	u64				read_stamp;
 	/* pages removed since last reset */
 	unsigned long			pages_removed;
+
+	int				mapped;
+	struct mutex			mapping_lock;
+	unsigned long			*subbuf_ids;	/* ID to addr */
+	struct trace_buffer_meta	*meta_page;
+
 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
 	long				nr_pages_to_update;
 	struct list_head		new_pages; /* new pages to add */
@@ -739,6 +747,33 @@ static __always_inline bool full_hit(struct trace_buffer *buffer, int cpu, int full)
 	return (dirty * 100) > (full * nr_pages);
 }
 
+static void rb_update_meta_page(struct rb_irq_work *rbwork)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (!rbwork->is_cpu_buffer)
+		return;
+	/*
+	 * If the waiter is a cpu_buffer, this might be due to a
+	 * userspace mapping. Let's update the meta-page.
+	 */
+	cpu_buffer = container_of(rbwork, struct ring_buffer_per_cpu,
+				  irq_work);
+
+	if (READ_ONCE(cpu_buffer->mapped)) {
+		/* Ensure the meta_page is ready */
+		smp_rmb();
+		WRITE_ONCE(cpu_buffer->meta_page->entries,
+			   local_read(&cpu_buffer->entries));
+		WRITE_ONCE(cpu_buffer->meta_page->overrun,
+			   local_read(&cpu_buffer->overrun));
+		WRITE_ONCE(cpu_buffer->meta_page->subbufs_touched,
+			   local_read(&cpu_buffer->pages_touched));
+		WRITE_ONCE(cpu_buffer->meta_page->subbufs_lost,
+			   local_read(&cpu_buffer->pages_lost));
+	}
+}
+
 /*
  * rb_wake_up_waiters - wake up tasks waiting for ring buffer input
  *
@@ -749,6 +784,8 @@ static void rb_wake_up_waiters(struct irq_work *work)
 {
 	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
 
+	rb_update_meta_page(rbwork);
+
 	wake_up_all(&rbwork->waiters);
 	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
 		rbwork->wakeup_full = false;
@@ -1541,6 +1578,8 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 	init_irq_work(&cpu_buffer->irq_work.work, rb_wake_up_waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.full_waiters);
+	mutex_init(&cpu_buffer->mapping_lock);
+	cpu_buffer->irq_work.is_cpu_buffer = true;
 
 	bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
 			    GFP_KERNEL, cpu_to_node(cpu));
@@ -4544,6 +4583,14 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 		cpu_buffer->last_overrun = overwrite;
 	}
 
+	if (cpu_buffer->mapped) {
+		WRITE_ONCE(cpu_buffer->meta_page->reader.read, 0);
+		WRITE_ONCE(cpu_buffer->meta_page->reader.id, reader->id);
+		WRITE_ONCE(cpu_buffer->meta_page->reader.lost_events, cpu_buffer->lost_events);
+		WRITE_ONCE(cpu_buffer->meta_page->subbufs_read,
+			   local_read(&cpu_buffer->pages_read));
+	}
+
 	goto again;
 
  out:
@@ -4611,6 +4658,12 @@ static void rb_advance_reader(struct ring_buffer_per_cpu *cpu_buffer)
 	length = rb_event_length(event);
 	cpu_buffer->reader_page->read += length;
 	cpu_buffer->read_bytes += length;
+	if (cpu_buffer->mapped) {
+		WRITE_ONCE(cpu_buffer->meta_page->reader.read,
+			   cpu_buffer->reader_page->read);
+		WRITE_ONCE(cpu_buffer->meta_page->read,
+			   cpu_buffer->read);
+	}
 }
 
 static void rb_advance_iter(struct ring_buffer_iter *iter)
@@ -5158,6 +5211,19 @@ static void rb_clear_buffer_page(struct buffer_page *page)
 	page->read = 0;
 }
 
+static void rb_reset_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+
+	WRITE_ONCE(meta->entries, 0);
+	WRITE_ONCE(meta->overrun, 0);
+	WRITE_ONCE(meta->read, cpu_buffer->read);
+	WRITE_ONCE(meta->subbufs_touched, 0);
+	WRITE_ONCE(meta->subbufs_lost, 0);
+	WRITE_ONCE(meta->subbufs_read, local_read(&cpu_buffer->pages_read));
+	WRITE_ONCE(meta->reader.read, cpu_buffer->reader_page->read);
+}
+
 static void
 rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
@@ -5202,6 +5268,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	cpu_buffer->lost_events = 0;
 	cpu_buffer->last_overrun = 0;
 
+	if (cpu_buffer->mapped)
+		rb_reset_meta_page(cpu_buffer);
+
 	rb_head_page_activate(cpu_buffer);
 	cpu_buffer->pages_removed = 0;
 }
@@ -5416,6 +5485,11 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
 	cpu_buffer_a = buffer_a->buffers[cpu];
 	cpu_buffer_b = buffer_b->buffers[cpu];
 
+	if (READ_ONCE(cpu_buffer_a->mapped) || READ_ONCE(cpu_buffer_b->mapped)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	/* At least make sure the two buffers are somewhat the same */
 	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
 		goto out;
@@ -5679,7 +5753,8 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	 * Otherwise, we can simply swap the page with the one passed in.
 	 */
 	if (read || (len < (commit - read)) ||
-	    cpu_buffer->reader_page == cpu_buffer->commit_page) {
+	    cpu_buffer->reader_page == cpu_buffer->commit_page ||
+	    cpu_buffer->mapped) {
 		struct buffer_data_page *rpage = cpu_buffer->reader_page->page;
 		unsigned int rpos = read;
 		unsigned int pos = 0;
@@ -5898,6 +5973,11 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 
 		cpu_buffer = buffer->buffers[cpu];
 
+		if (cpu_buffer->mapped) {
+			err = -EBUSY;
+			goto error;
+		}
+
 		/* Update the number of pages to match the new size */
 		nr_pages = old_size * buffer->buffers[cpu]->nr_pages;
 		nr_pages = DIV_ROUND_UP(nr_pages, buffer->subbuf_size);
@@ -5999,6 +6079,308 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_subbuf_order_set);
 
+#define subbuf_page(off, start) \
+	virt_to_page((void *)(start + (off << PAGE_SHIFT)))
+
+#define foreach_subbuf_page(sub_order, start, page)		\
+	page = subbuf_page(0, (start));				\
+	for (int __off = 0; __off < (1 << (sub_order));		\
+	     __off++, page = subbuf_page(__off, (start)))
+
+static inline void subbuf_map_prepare(unsigned long subbuf_start, int order)
+{
+	struct page *page;
+
+	/*
+	 * When allocating order > 0 pages, only the first struct page has a
+	 * refcount > 1. Increasing the refcount here ensures none of the
+	 * struct pages composing the sub-buffer is freed when the mapping is
+	 * closed.
+	 */
+	foreach_subbuf_page(order, subbuf_start, page)
+		page_ref_inc(page);
+}
+
+static inline void subbuf_unmap(unsigned long subbuf_start, int order)
+{
+	struct page *page;
+
+	foreach_subbuf_page(order, subbuf_start, page) {
+		page_ref_dec(page);
+		page->mapping = NULL;
+	}
+}
+
+static void rb_free_subbuf_ids(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	int sub_id;
+
+	for (sub_id = 0; sub_id < cpu_buffer->nr_pages + 1; sub_id++)
+		subbuf_unmap(cpu_buffer->subbuf_ids[sub_id],
+			     cpu_buffer->buffer->subbuf_order);
+
+	kfree(cpu_buffer->subbuf_ids);
+	cpu_buffer->subbuf_ids = NULL;
+}
+
+static int rb_alloc_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	if (cpu_buffer->meta_page)
+		return 0;
+
+	cpu_buffer->meta_page = page_to_virt(alloc_page(GFP_USER));
+	if (!cpu_buffer->meta_page)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void rb_free_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	unsigned long addr = (unsigned long)cpu_buffer->meta_page;
+
+	virt_to_page((void *)addr)->mapping = NULL;
+	free_page(addr);
+	cpu_buffer->meta_page = NULL;
+}
+
+static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
+				   unsigned long *subbuf_ids)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
+	struct buffer_page *first_subbuf, *subbuf;
+	int id = 0;
+
+	subbuf_ids[id] = (unsigned long)cpu_buffer->reader_page->page;
+	subbuf_map_prepare(subbuf_ids[id], cpu_buffer->buffer->subbuf_order);
+	cpu_buffer->reader_page->id = id++;
+
+	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
+	do {
+		if (id >= nr_subbufs) {
+			WARN_ON(1);
+			break;
+		}
+
+		subbuf_ids[id] = (unsigned long)subbuf->page;
+		subbuf->id = id;
+		subbuf_map_prepare(subbuf_ids[id], cpu_buffer->buffer->subbuf_order);
+
+		rb_inc_page(&subbuf);
+		id++;
+	} while (subbuf != first_subbuf);
+
+	/* install subbuf ID to kern VA translation */
+	cpu_buffer->subbuf_ids = subbuf_ids;
+
+	meta->meta_page_size = PAGE_SIZE;
+	meta->meta_struct_len = sizeof(*meta);
+	meta->nr_subbufs = nr_subbufs;
+	meta->subbuf_size = cpu_buffer->buffer->subbuf_size + BUF_PAGE_HDR_SIZE;
+	meta->reader.id = cpu_buffer->reader_page->id;
+	rb_reset_meta_page(cpu_buffer);
+}
+
+static inline struct ring_buffer_per_cpu *
+rb_get_mapped_buffer(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return ERR_PTR(-EINVAL);
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return ERR_PTR(-ENODEV);
+	}
+
+	return cpu_buffer;
+}
+
+static inline void rb_put_mapped_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	mutex_unlock(&cpu_buffer->mapping_lock);
+}
+
+int ring_buffer_map(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags, *subbuf_ids;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (cpu_buffer->mapped) {
+		if (cpu_buffer->mapped == INT_MAX)
+			err = -EBUSY;
+		else
+			WRITE_ONCE(cpu_buffer->mapped, cpu_buffer->mapped + 1);
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return err;
+	}
+
+	/* prevent another thread from changing buffer sizes */
+	mutex_lock(&buffer->mutex);
+
+	err = rb_alloc_meta_page(cpu_buffer);
+	if (err)
+		goto unlock;
+
+	/* subbuf_ids include the reader while nr_pages does not */
+	subbuf_ids = kzalloc(sizeof(*subbuf_ids) * (cpu_buffer->nr_pages + 1),
+			     GFP_KERNEL);
+	if (!subbuf_ids) {
+		rb_free_meta_page(cpu_buffer);
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	atomic_inc(&cpu_buffer->resize_disabled);
+
+	/*
+	 * Lock all readers to block any subbuf swap until the subbuf IDs are
+	 * assigned.
+	 */
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	rb_setup_ids_meta_page(cpu_buffer, subbuf_ids);
+	/*
+	 * Ensure rb_update_meta_page() will observe the meta-page before
+	 * cpu_buffer->mapped.
+	 */
+	smp_wmb();
+	WRITE_ONCE(cpu_buffer->mapped, 1);
+
+	/* Init meta_page values unless the writer did it already */
+	cmpxchg(&cpu_buffer->meta_page->entries, 0,
+		local_read(&cpu_buffer->entries));
+	cmpxchg(&cpu_buffer->meta_page->overrun, 0,
+		local_read(&cpu_buffer->overrun));
+	cmpxchg(&cpu_buffer->meta_page->subbufs_touched, 0,
+		local_read(&cpu_buffer->pages_touched));
+	cmpxchg(&cpu_buffer->meta_page->subbufs_lost, 0,
+		local_read(&cpu_buffer->pages_lost));
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+unlock:
+	mutex_unlock(&buffer->mutex);
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		err = -ENODEV;
+		goto unlock;
+	}
+
+	WRITE_ONCE(cpu_buffer->mapped, cpu_buffer->mapped - 1);
+	if (!cpu_buffer->mapped) {
+		/* Wait for the writer and readers to observe !mapped */
+		synchronize_rcu();
+
+		rb_free_subbuf_ids(cpu_buffer);
+		rb_free_meta_page(cpu_buffer);
+		atomic_dec(&cpu_buffer->resize_disabled);
+	}
+unlock:
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+/*
+ *   +--------------+  pgoff == 0
+ *   |   meta page  |
+ *   +--------------+  pgoff == 1
+ *   | subbuffer 0  |
+ *   +--------------+  pgoff == 1 + (1 << subbuf_order)
+ *   | subbuffer 1  |
+ *         ...
+ */
+struct page *ring_buffer_map_fault(struct trace_buffer *buffer, int cpu,
+				   unsigned long pgoff)
+{
+	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
+	unsigned long subbuf_id, subbuf_offset, addr;
+	struct page *page;
+
+	if (!pgoff)
+		return virt_to_page((void *)cpu_buffer->meta_page);
+
+	pgoff--;
+
+	subbuf_id = pgoff >> buffer->subbuf_order;
+	if (subbuf_id > cpu_buffer->nr_pages)
+		return NULL;
+
+	subbuf_offset = pgoff & ((1UL << buffer->subbuf_order) - 1);
+	addr = cpu_buffer->subbuf_ids[subbuf_id] + (subbuf_offset * PAGE_SIZE);
+	page = virt_to_page((void *)addr);
+
+	return page;
+}
+
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long reader_size;
+	unsigned long flags;
+
+	cpu_buffer = rb_get_mapped_buffer(buffer, cpu);
+	if (IS_ERR(cpu_buffer))
+		return (int)PTR_ERR(cpu_buffer);
+
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+consume:
+	if (rb_per_cpu_empty(cpu_buffer))
+		goto out;
+
+	reader_size = rb_page_size(cpu_buffer->reader_page);
+
+	/*
+	 * There is data to be read on the current reader page. We can return
+	 * to the caller, but before that, we assume the latter will read
+	 * everything. Let's update the kernel reader accordingly.
+	 */
+	if (cpu_buffer->reader_page->read < reader_size) {
+		while (cpu_buffer->reader_page->read < reader_size)
+			rb_advance_reader(cpu_buffer);
+		goto out;
+	}
+
+	if (WARN_ON(!rb_get_reader_page(cpu_buffer)))
+		goto out;
+
+	goto consume;
+out:
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	rb_put_mapped_buffer(cpu_buffer);
+
+	return 0;
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in
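The pgoff layout drawn above ring_buffer_map_fault() boils down to a
simple offset computation. A minimal sketch of that math (plain C,
illustration only, not part of the patch):

  #include <stddef.h>

  /*
   * Offset, in pages, of a given subbuf in the mapping laid out above:
   * the meta-page occupies pgoff 0 and each subbuf then spans
   * (1 << subbuf_order) pages.
   */
  static inline size_t subbuf_pgoff(unsigned int subbuf_id,
                                    unsigned int subbuf_order)
  {
          return 1 + ((size_t)subbuf_id << subbuf_order);
  }

With subbuf_order == 0 this degenerates to pgoff == 1 + subbuf_id,
matching the diagram.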
From patchwork Fri Jan 5 09:47:29 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13511900
Date: Fri, 5 Jan 2024 09:47:29 +0000
In-Reply-To: <20240105094729.2363579-1-vdonnefort@google.com>
References: <20240105094729.2363579-1-vdonnefort@google.com>
Message-ID: <20240105094729.2363579-3-vdonnefort@google.com>
Subject: [PATCH v10 2/2] tracing: Allow user-space mapping of the ring-buffer
From: Vincent Donnefort
To: rostedt@goodmis.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org
Cc: mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    kernel-team@android.com, Vincent Donnefort

Currently, user-space extracts data from the ring-buffer via splice,
which is handy for storage or network sharing. However, due to splice
limitations, it is impossible to do real-time analysis without a copy.

A solution to that problem is to let user-space map the ring-buffer
directly.

The mapping is exposed via the per-CPU file trace_pipe_raw. The first
element of the mapping is the meta-page. It is followed by each
subbuffer constituting the ring-buffer, ordered by their unique page
ID:

  * Meta-page -- include/uapi/linux/trace_mmap.h for a description
  * Subbuf ID 0
  * Subbuf ID 1
    ...

It is therefore easy to translate a subbuf ID into an offset in the
mapping:

  reader_id = meta->reader.id;
  reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;

When new data is available, the mapper must call a newly introduced
ioctl: TRACE_MMAP_IOCTL_GET_READER. This will update the meta-page
reader ID to point to the next reader containing unread data.

Signed-off-by: Vincent Donnefort
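For illustration, a minimal user-space reader built on this interface
could look as follows. This is a sketch with error handling mostly
elided; it assumes tracefs is mounted at /sys/kernel/tracing and reads
CPU 0:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #include <linux/trace_mmap.h>

  int main(void)
  {
          struct trace_buffer_meta *meta;
          size_t page_size = getpagesize(), len;
          void *map;
          int fd;

          fd = open("/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw",
                    O_RDONLY | O_NONBLOCK);
          if (fd < 0)
                  return 1;

          /* Map the meta-page first to learn the full mapping size. */
          meta = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);
          if (meta == MAP_FAILED)
                  return 1;

          len = meta->meta_page_size + (size_t)meta->nr_subbufs * meta->subbuf_size;
          map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
          if (map == MAP_FAILED)
                  return 1;

          /* Let the kernel swap in a reader subbuf with unread data. */
          ioctl(fd, TRACE_MMAP_IOCTL_GET_READER);

          /* Locate the reader subbuf, per the offset rule above. */
          printf("reader id=%u, %u bytes already read, data at offset %zu\n",
                 meta->reader.id, meta->reader.read,
                 (size_t)meta->meta_page_size +
                 (size_t)meta->reader.id * meta->subbuf_size);

          munmap(map, len);
          munmap(meta, page_size);
          close(fd);
          return 0;
  }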
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index f950648b0ba9..8c49489c5867 100644
--- a/include/uapi/linux/trace_mmap.h
+++ b/include/uapi/linux/trace_mmap.h
@@ -26,4 +26,6 @@ struct trace_buffer_meta {
 	__u32	meta_struct_len;	/* Len of this struct */
 };
 
+#define TRACE_MMAP_IOCTL_GET_READER		_IO('T', 0x1)
+
 #endif /* _UAPI_TRACE_MMAP_H_ */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 46dbe22121e9..cfeaf2cd204e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -8583,15 +8583,31 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 	return ret;
 }
 
-/* An ioctl call with cmd 0 to the ring buffer file will wake up all waiters */
 static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
+	int err;
 
-	if (cmd)
-		return -ENOIOCTLCMD;
+	if (cmd == TRACE_MMAP_IOCTL_GET_READER) {
+		if (!(file->f_flags & O_NONBLOCK)) {
+			err = ring_buffer_wait(iter->array_buffer->buffer,
+					       iter->cpu_file,
+					       iter->tr->buffer_percent);
+			if (err)
+				return err;
+		}
+
+		return ring_buffer_map_get_reader(iter->array_buffer->buffer,
+						  iter->cpu_file);
+	} else if (cmd) {
+		return -ENOTTY;
+	}
+
+	/*
+	 * An ioctl call with cmd 0 to the ring buffer file will wake up all
+	 * waiters
+	 */
 
 	mutex_lock(&trace_types_lock);
 
 	iter->wait_index++;
@@ -8604,6 +8620,62 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 	return 0;
 }
 
+static vm_fault_t tracing_buffers_mmap_fault(struct vm_fault *vmf)
+{
+	struct ftrace_buffer_info *info = vmf->vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
+	struct page *page;
+
+	page = ring_buffer_map_fault(iter->array_buffer->buffer, iter->cpu_file,
+				     vmf->pgoff);
+	if (!page)
+		return ret;
+
+	get_page(page);
+	vmf->page = page;
+	vmf->page->mapping = vmf->vma->vm_file->f_mapping;
+	vmf->page->index = vmf->pgoff;
+
+	return 0;
+}
+
+static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	ring_buffer_unmap(iter->array_buffer->buffer, iter->cpu_file);
+}
+
+static void tracing_buffers_mmap_open(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	WARN_ON(ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file));
+}
+
+static const struct vm_operations_struct tracing_buffers_vmops = {
+	.open		= tracing_buffers_mmap_open,
+	.close		= tracing_buffers_mmap_close,
+	.fault		= tracing_buffers_mmap_fault,
+};
+
+static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = filp->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	if (vma->vm_flags & VM_WRITE)
+		return -EPERM;
+
+	vm_flags_mod(vma, VM_DONTCOPY | VM_DONTDUMP, VM_MAYWRITE);
+	vma->vm_ops = &tracing_buffers_vmops;
+
+	return ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file);
+}
+
 static const struct file_operations tracing_buffers_fops = {
 	.open		= tracing_buffers_open,
 	.read		= tracing_buffers_read,
@@ -8612,6 +8684,7 @@ static const struct file_operations tracing_buffers_fops = {
 	.splice_read	= tracing_buffers_splice_read,
 	.unlocked_ioctl	= tracing_buffers_ioctl,
 	.llseek		= no_llseek,
+	.mmap		= tracing_buffers_mmap,
 };
 
 static ssize_t