From patchwork Mon Nov 15 14:19:11 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 01/15] iio: buffer-dma: Get rid of incoming/outgoing queues
Date: Mon, 15 Nov 2021 14:19:11 +0000
Message-Id: <20211115141925.60164-2-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

The buffer-dma code was using two queues, incoming and outgoing, to
manage the state of the blocks in use.

While this totally works, it adds some complexity to the code,
especially since the code only manages 2 blocks. It is much easier to
just check each block's state manually, and keep a counter for the next
block to dequeue.

Since the new DMABUF based API wouldn't use these incoming and outgoing
queues anyway, getting rid of them now makes the upcoming changes
simpler.

Signed-off-by: Paul Cercueil
Reviewed-by: Alexandru Ardelean
---
 drivers/iio/buffer/industrialio-buffer-dma.c | 68 ++++++++++----------
 include/linux/iio/buffer-dma.h               |  7 +-
 2 files changed, 37 insertions(+), 38 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index d348af8b9705..abac88f20104 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -191,16 +191,8 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
 
 static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
 {
-	struct iio_dma_buffer_queue *queue = block->queue;
-
-	/*
-	 * The buffer has already been freed by the application, just drop the
-	 * reference.
-	 */
-	if (block->state != IIO_BLOCK_STATE_DEAD) {
+	if (block->state != IIO_BLOCK_STATE_DEAD)
 		block->state = IIO_BLOCK_STATE_DONE;
-		list_add_tail(&block->head, &queue->outgoing);
-	}
 }
 
 /**
@@ -317,11 +309,8 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
 	 * dead. This means we can reset the lists without having to fear
 	 * corruption.
 	 */
-	INIT_LIST_HEAD(&queue->outgoing);
 	spin_unlock_irq(&queue->list_lock);
 
-	INIT_LIST_HEAD(&queue->incoming);
-
 	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
 		if (queue->fileio.blocks[i]) {
 			block = queue->fileio.blocks[i];
@@ -346,7 +335,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
 		}
 
 		block->state = IIO_BLOCK_STATE_QUEUED;
-		list_add_tail(&block->head, &queue->incoming);
 	}
 
 out_unlock:
@@ -401,13 +389,18 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer,
 	struct iio_dev *indio_dev)
 {
 	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
-	struct iio_dma_buffer_block *block, *_block;
+	struct iio_dma_buffer_block *block;
+	unsigned int i;
 
 	mutex_lock(&queue->lock);
 	queue->active = true;
-	list_for_each_entry_safe(block, _block, &queue->incoming, head) {
-		list_del(&block->head);
-		iio_dma_buffer_submit_block(queue, block);
+	queue->fileio.next_dequeue = 0;
+
+	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+		block = queue->fileio.blocks[i];
+
+		if (block->state == IIO_BLOCK_STATE_QUEUED)
+			iio_dma_buffer_submit_block(queue, block);
 	}
 	mutex_unlock(&queue->lock);
 
@@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
 static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
 	struct iio_dma_buffer_block *block)
 {
-	if (block->state == IIO_BLOCK_STATE_DEAD) {
+	if (block->state == IIO_BLOCK_STATE_DEAD)
 		iio_buffer_block_put(block);
-	} else if (queue->active) {
+	else if (queue->active)
 		iio_dma_buffer_submit_block(queue, block);
-	} else {
+	else
 		block->state = IIO_BLOCK_STATE_QUEUED;
-		list_add_tail(&block->head, &queue->incoming);
-	}
 }
 
 static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
 	struct iio_dma_buffer_queue *queue)
 {
 	struct iio_dma_buffer_block *block;
+	unsigned int idx;
 
 	spin_lock_irq(&queue->list_lock);
-	block = list_first_entry_or_null(&queue->outgoing, struct
-		iio_dma_buffer_block, head);
-	if (block != NULL) {
-		list_del(&block->head);
+
+	idx = queue->fileio.next_dequeue;
+	block = queue->fileio.blocks[idx];
+
+	if (block->state == IIO_BLOCK_STATE_DONE) {
 		block->state = IIO_BLOCK_STATE_DEQUEUED;
+		idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
+		queue->fileio.next_dequeue = idx;
+	} else {
+		block = NULL;
 	}
+
 	spin_unlock_irq(&queue->list_lock);
 
 	return block;
@@ -539,6 +537,7 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
 	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
 	struct iio_dma_buffer_block *block;
 	size_t data_available = 0;
+	unsigned int i;
 
 	/*
 	 * For counting the available bytes we'll use the size of the block not
@@ -552,8 +551,15 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
 		data_available += queue->fileio.active_block->size;
 
 	spin_lock_irq(&queue->list_lock);
-	list_for_each_entry(block, &queue->outgoing, head)
-		data_available += block->size;
+
+	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+		block = queue->fileio.blocks[i];
+
+		if (block != queue->fileio.active_block
+		    && block->state == IIO_BLOCK_STATE_DONE)
+			data_available += block->size;
+	}
+
 	spin_unlock_irq(&queue->list_lock);
 	mutex_unlock(&queue->lock);
 
@@ -616,9 +622,6 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
 	queue->dev = dev;
 	queue->ops = ops;
 
-	INIT_LIST_HEAD(&queue->incoming);
-	INIT_LIST_HEAD(&queue->outgoing);
-
 	mutex_init(&queue->lock);
 	spin_lock_init(&queue->list_lock);
 
@@ -645,11 +648,8 @@ void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue)
 			continue;
 		queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
 	}
-	INIT_LIST_HEAD(&queue->outgoing);
 	spin_unlock_irq(&queue->list_lock);
 
-	INIT_LIST_HEAD(&queue->incoming);
-
 	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
 		if (!queue->fileio.blocks[i])
 			continue;
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index ff15c61bf319..d4ed5ff39d44 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -78,12 +78,15 @@ struct iio_dma_buffer_block {
 * @active_block: Block being used in read()
 * @pos: Read offset in the active block
 * @block_size: Size of each block
+ * @next_dequeue: index of next block that will be dequeued
 */
 struct iio_dma_buffer_queue_fileio {
	struct iio_dma_buffer_block *blocks[2];
	struct iio_dma_buffer_block *active_block;
	size_t pos;
	size_t block_size;
+
+	unsigned int next_dequeue;
 };
 
 /**
@@ -97,8 +100,6 @@ struct iio_dma_buffer_queue_fileio {
 * atomic context as well as blocks on those lists. This is the outgoing queue
 * list and typically also a list of active blocks in the part that handles
 * the DMA controller
- * @incoming: List of buffers on the incoming queue
- * @outgoing: List of buffers on the outgoing queue
 * @active: Whether the buffer is currently active
 * @fileio: FileIO state
 */
@@ -109,8 +110,6 @@ struct iio_dma_buffer_queue {
	struct mutex lock;
	spinlock_t list_lock;
 
-	struct list_head incoming;
-	struct list_head outgoing;
 
	bool active;

From patchwork Mon Nov 15 14:19:12 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 02/15] iio: buffer-dma: Remove unused iio_buffer_block struct
Date: Mon, 15 Nov 2021 14:19:12 +0000
Message-Id: <20211115141925.60164-3-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

This structure was never used anywhere, so it can safely be dropped.
It will later be re-introduced as a different structure in a different
header.

Signed-off-by: Paul Cercueil
Reviewed-by: Alexandru Ardelean
---
 include/linux/iio/buffer-dma.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index d4ed5ff39d44..a65a005c4a19 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -17,11 +17,6 @@ struct iio_dma_buffer_queue;
 struct iio_dma_buffer_ops;
 struct device;
 
-struct iio_buffer_block {
-	u32 size;
-	u32 bytes_used;
-};
-
 /**
 * enum iio_block_state - State of a struct iio_dma_buffer_block
 * @IIO_BLOCK_STATE_DEQUEUED: Block is not queued

From patchwork Mon Nov 15 14:19:13 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 03/15] iio: buffer-dma: Use round_down() instead of rounddown()
Date: Mon, 15 Nov 2021 14:19:13 +0000
Message-Id: <20211115141925.60164-4-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

We know that the buffer's alignment will always be a power of two;
therefore, we can use the faster round_down() macro.
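For context, the difference between the two helpers boils down to the
following (an illustrative sketch, not part of the patch):

    /*
     * rounddown(x, y):  x - (x % y)    works for any y, costs a division
     * round_down(x, y): x & ~(y - 1)   requires y to be a power of two,
     *                                  compiles down to a single AND
     */
    size_t n = round_down(1000, 64); /* == 960, same result as rounddown() */

Since the alignment here is always a power of two, both macros return
the same value and the cheaper one can be used.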
Signed-off-by: Paul Cercueil
Reviewed-by: Alexandru Ardelean
---
 drivers/iio/buffer/industrialio-buffer-dmaengine.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 1ac94c4e9792..f8ce26a24c57 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -67,7 +67,7 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
	dma_cookie_t cookie;
 
	block->bytes_used = min(block->size, dmaengine_buffer->max_size);
-	block->bytes_used = rounddown(block->bytes_used,
+	block->bytes_used = round_down(block->bytes_used,
			dmaengine_buffer->align);
 
	desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,

From patchwork Mon Nov 15 14:19:14 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 04/15] iio: buffer-dma: Enable buffer write support
Date: Mon, 15 Nov 2021 14:19:14 +0000
Message-Id: <20211115141925.60164-5-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

Adding write support to the buffer-dma code is easy - the write()
function basically needs to do the exact same thing as the read()
function: dequeue a block, read or write the data, enqueue the block
when entirely processed.

Therefore, iio_dma_buffer_read() and the new iio_dma_buffer_write() now
both call a function iio_dma_buffer_io(), which performs this task.
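In other words, the read() and write() callbacks become thin wrappers
around the shared helper (taken from the diff below):

    int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
            char __user *user_buffer)
    {
            return iio_dma_buffer_io(buffer, n, user_buffer, false);
    }

    int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
            const char __user *user_buffer)
    {
            return iio_dma_buffer_io(buffer, n,
                            (__force char *)user_buffer, true);
    }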
The .space_available() callback can return the exact same value as the
.data_available() callback does for input buffers, since in both cases
we count the exact same thing (the number of bytes in each available
block).

Signed-off-by: Paul Cercueil
Reviewed-by: Alexandru Ardelean
---
 drivers/iio/buffer/industrialio-buffer-dma.c | 75 +++++++++++++++-----
 include/linux/iio/buffer-dma.h               |  7 ++
 2 files changed, 66 insertions(+), 16 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index abac88f20104..eeeed6b2e0cf 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -179,7 +179,8 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
	}
 
	block->size = size;
-	block->state = IIO_BLOCK_STATE_DEQUEUED;
+	block->bytes_used = size;
+	block->state = IIO_BLOCK_STATE_DONE;
	block->queue = queue;
	INIT_LIST_HEAD(&block->head);
	kref_init(&block->kref);
@@ -195,6 +196,18 @@ static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
		block->state = IIO_BLOCK_STATE_DONE;
 }
 
+static void iio_dma_buffer_queue_wake(struct iio_dma_buffer_queue *queue)
+{
+	__poll_t flags;
+
+	if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
+		flags = EPOLLIN | EPOLLRDNORM;
+	else
+		flags = EPOLLOUT | EPOLLWRNORM;
+
+	wake_up_interruptible_poll(&queue->buffer.pollq, flags);
+}
+
 /**
 * iio_dma_buffer_block_done() - Indicate that a block has been completed
 * @block: The completed block
@@ -212,7 +225,7 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
	spin_unlock_irqrestore(&queue->list_lock, flags);
 
	iio_buffer_block_put_atomic(block);
-	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+	iio_dma_buffer_queue_wake(queue);
 }
 EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
 
@@ -241,7 +254,7 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
	}
	spin_unlock_irqrestore(&queue->list_lock, flags);
 
-	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+	iio_dma_buffer_queue_wake(queue);
 }
 EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
 
@@ -334,7 +347,8 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
			queue->fileio.blocks[i] = block;
		}
 
-		block->state = IIO_BLOCK_STATE_QUEUED;
+		if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
+			block->state = IIO_BLOCK_STATE_QUEUED;
	}
 
 out_unlock:
@@ -467,20 +481,12 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
	return block;
 }
 
-/**
- * iio_dma_buffer_read() - DMA buffer read callback
- * @buffer: Buffer to read from
- * @n: Number of bytes to read
- * @user_buffer: Userspace buffer to copy the data to
- *
- * Should be used as the read callback for iio_buffer_access_ops
- * struct for DMA buffers.
- */
-int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
-	char __user *user_buffer)
+static int iio_dma_buffer_io(struct iio_buffer *buffer,
+	size_t n, char __user *user_buffer, bool is_write)
 {
	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
	struct iio_dma_buffer_block *block;
+	void *addr;
	int ret;
 
	if (n < buffer->bytes_per_datum)
@@ -503,8 +509,13 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
	n = rounddown(n, buffer->bytes_per_datum);
	if (n > block->bytes_used - queue->fileio.pos)
		n = block->bytes_used - queue->fileio.pos;
+	addr = block->vaddr + queue->fileio.pos;
 
-	if (copy_to_user(user_buffer, block->vaddr + queue->fileio.pos, n)) {
+	if (is_write)
+		ret = !!copy_from_user(addr, user_buffer, n);
+	else
+		ret = !!copy_to_user(user_buffer, addr, n);
+	if (ret) {
		ret = -EFAULT;
		goto out_unlock;
	}
@@ -513,6 +524,7 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
 
	if (queue->fileio.pos == block->bytes_used) {
		queue->fileio.active_block = NULL;
+		block->bytes_used = block->size;
		iio_dma_buffer_enqueue(queue, block);
	}
 
@@ -523,8 +535,39 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
	return ret;
 }
 
+/**
+ * iio_dma_buffer_read() - DMA buffer read callback
+ * @buffer: Buffer to read from
+ * @n: Number of bytes to read
+ * @user_buffer: Userspace buffer to copy the data to
+ *
+ * Should be used as the read callback for iio_buffer_access_ops
+ * struct for DMA buffers.
+ */
+int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
+	char __user *user_buffer)
+{
+	return iio_dma_buffer_io(buffer, n, user_buffer, false);
+}
 EXPORT_SYMBOL_GPL(iio_dma_buffer_read);
 
+/**
+ * iio_dma_buffer_write() - DMA buffer write callback
+ * @buffer: Buffer to write to
+ * @n: Number of bytes to write
+ * @user_buffer: Userspace buffer to copy the data from
+ *
+ * Should be used as the write callback for iio_buffer_access_ops
+ * struct for DMA buffers.
+ */
+int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
+	const char __user *user_buffer)
+{
+	return iio_dma_buffer_io(buffer, n, (__force char *)user_buffer, true);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_write);
+
 /**
 * iio_dma_buffer_data_available() - DMA buffer data_available callback
 * @buf: Buffer to check for data availability
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index a65a005c4a19..09c07d5563c0 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -132,6 +132,8 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
	struct iio_dev *indio_dev);
 int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
	char __user *user_buffer);
+int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
+	const char __user *user_buffer);
 size_t iio_dma_buffer_data_available(struct iio_buffer *buffer);
 int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
 int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
@@ -142,4 +144,9 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
 void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
 void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
 
+static inline size_t iio_dma_buffer_space_available(struct iio_buffer *buffer)
+{
+	return iio_dma_buffer_data_available(buffer);
+}
+
 #endif

From patchwork Mon Nov 15 14:19:15 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 05/15] iio: buffer-dmaengine: Support specifying buffer direction
Date: Mon, 15 Nov 2021 14:19:15 +0000
Message-Id: <20211115141925.60164-6-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

Update the devm_iio_dmaengine_buffer_setup() function to support
specifying the buffer direction.

Update the iio_dmaengine_buffer_submit() function to handle output
buffers as well as input buffers.

Signed-off-by: Paul Cercueil
Reviewed-by: Alexandru Ardelean
---
 drivers/iio/adc/adi-axi-adc.c                      |  3 ++-
 .../buffer/industrialio-buffer-dmaengine.c         | 24 +++++++++++++++----
 include/linux/iio/buffer-dmaengine.h               |  5 +++-
 3 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
index a73e3c2d212f..0a6f2c32b1b9 100644
--- a/drivers/iio/adc/adi-axi-adc.c
+++ b/drivers/iio/adc/adi-axi-adc.c
@@ -113,7 +113,8 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev,
		dma_name = "rx";
 
	return devm_iio_dmaengine_buffer_setup(indio_dev->dev.parent,
-					       indio_dev, dma_name);
+					       indio_dev, dma_name,
+					       IIO_BUFFER_DIRECTION_IN);
 }
 
 static int adi_axi_adc_read_raw(struct iio_dev *indio_dev,
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index f8ce26a24c57..ac26b04aa4a9 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -64,14 +64,25 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
	struct dmaengine_buffer *dmaengine_buffer =
		iio_buffer_to_dmaengine_buffer(&queue->buffer);
	struct dma_async_tx_descriptor *desc;
+	enum dma_transfer_direction dma_dir;
+	size_t max_size;
	dma_cookie_t cookie;
 
-	block->bytes_used = min(block->size, dmaengine_buffer->max_size);
-	block->bytes_used = round_down(block->bytes_used,
-			dmaengine_buffer->align);
+	max_size = min(block->size, dmaengine_buffer->max_size);
+	max_size = round_down(max_size, dmaengine_buffer->align);
+
+	if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) {
+		block->bytes_used = max_size;
+		dma_dir = DMA_DEV_TO_MEM;
+	} else {
+		dma_dir = DMA_MEM_TO_DEV;
+	}
+
+	if (!block->bytes_used || block->bytes_used > max_size)
+		return -EINVAL;
 
	desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
-		block->phys_addr, block->bytes_used, DMA_DEV_TO_MEM,
+		block->phys_addr, block->bytes_used, dma_dir,
		DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;
@@ -275,7 +286,8 @@ static struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
 */
 int devm_iio_dmaengine_buffer_setup(struct device *dev,
				    struct iio_dev *indio_dev,
-				    const char *channel)
+				    const char *channel,
+				    enum iio_buffer_direction dir)
 {
	struct iio_buffer *buffer;
 
@@ -286,6 +298,8 @@ int devm_iio_dmaengine_buffer_setup(struct device *dev,
 
	indio_dev->modes |= INDIO_BUFFER_HARDWARE;
 
+	buffer->direction = dir;
+
	return iio_device_attach_buffer(indio_dev, buffer);
 }
 EXPORT_SYMBOL_GPL(devm_iio_dmaengine_buffer_setup);
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 5c355be89814..538d0479cdd6 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -7,11 +7,14 @@
 #ifndef __IIO_DMAENGINE_H__
 #define __IIO_DMAENGINE_H__
 
+#include
+
 struct iio_dev;
 struct device;
 
 int devm_iio_dmaengine_buffer_setup(struct device *dev,
				    struct iio_dev *indio_dev,
-				    const char *channel);
+				    const char *channel,
+				    enum iio_buffer_direction dir);
 
 #endif

From patchwork Mon Nov 15 14:19:16 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 06/15] iio: buffer-dmaengine: Enable write support
Date: Mon, 15 Nov 2021 14:19:16 +0000
Message-Id: <20211115141925.60164-7-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

Use the iio_dma_buffer_write() and iio_dma_buffer_space_available()
functions provided by the buffer-dma core, to enable write support in
the buffer-dmaengine code.

Signed-off-by: Paul Cercueil
Reviewed-by: Alexandru Ardelean
---
 drivers/iio/buffer/industrialio-buffer-dmaengine.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index ac26b04aa4a9..5cde8fd81c7f 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -123,12 +123,14 @@ static void iio_dmaengine_buffer_release(struct iio_buffer *buf)
 
 static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
	.read = iio_dma_buffer_read,
+	.write = iio_dma_buffer_write,
	.set_bytes_per_datum = iio_dma_buffer_set_bytes_per_datum,
	.set_length = iio_dma_buffer_set_length,
	.request_update = iio_dma_buffer_request_update,
	.enable = iio_dma_buffer_enable,
	.disable = iio_dma_buffer_disable,
	.data_available = iio_dma_buffer_data_available,
+	.space_available = iio_dma_buffer_space_available,
	.release = iio_dmaengine_buffer_release,
 
	.modes = INDIO_BUFFER_HARDWARE,

From patchwork Mon Nov 15 14:19:17 2021
From: Paul Cercueil
To: Jonathan Cameron
Cc: Paul Cercueil, Michael Hennerich, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Christian König, linaro-mm-sig@lists.linaro.org, Alexandru Ardelean,
 linux-media@vger.kernel.org
Subject: [PATCH 07/15] iio: core: Add new DMABUF interface infrastructure
Date: Mon, 15 Nov 2021 14:19:17 +0000
Message-Id: <20211115141925.60164-8-paul@crapouillou.net>
In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net>
References: <20211115141925.60164-1-paul@crapouillou.net>

Add the necessary infrastructure to the IIO core to support a new
DMABUF based interface.

The advantage of this new DMABUF based interface vs. the read()
interface is that it avoids an extra copy of the data between the
kernel and userspace. This is particularly useful for high-speed
devices which produce several megabytes or even gigabytes of data per
second.

The data in this new DMABUF interface is managed at the granularity of
DMABUF objects. Reducing the granularity from byte level to block level
is done to reduce the userspace-kernelspace synchronization overhead,
since performing syscalls for each byte at a few Mbps is just not
feasible.

This of course leads to a slightly increased latency. For this reason an
application can choose the size of the DMABUFs as well as how many it
allocates. E.g. two DMABUFs would be a traditional double buffering
scheme. But using a higher number might be necessary to avoid
underflow/overflow situations in the presence of scheduling latencies.

As part of the interface, 2 new IOCTLs have been added:

IIO_BUFFER_DMABUF_ALLOC_IOCTL(struct iio_dmabuf_alloc_req *):
 Each call will allocate a new DMABUF object. The return value (if not
 a negative errno value as error) will be the file descriptor of the
 new DMABUF.

IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
 Place the DMABUF object into the queue pending for hardware process.

These two IOCTLs have to be performed on the IIO buffer's file
descriptor (either opened from the corresponding /dev/iio:deviceX, or
obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl).

To access the data stored in a block by userspace, the block must be
mapped to the process's memory. This is done by calling mmap() on the
DMABUF's file descriptor.

Before accessing the data through the map, you must use the
DMA_BUF_IOCTL_SYNC(struct dma_buf_sync *) ioctl, with the
DMA_BUF_SYNC_START flag, to make sure that the data is available. This
call may block until the hardware is done with this block. Once you are
done reading or writing the data, you must use this ioctl again with
the DMA_BUF_SYNC_END flag, before enqueueing the DMABUF to the kernel's
queue.

If you need to know when the hardware is done with a DMABUF, you can
poll its file descriptor for the EPOLLOUT event.

Finally, to destroy a DMABUF object, simply call close() on its file
descriptor.
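Putting it all together, a minimal userspace sketch of one block's
lifetime could look like this (illustrative only, error handling
omitted; "buffer_fd" stands for the IIO buffer's file descriptor,
obtained as described above):

    #include <linux/dma-buf.h>     /* DMA_BUF_IOCTL_SYNC, struct dma_buf_sync */
    #include <linux/iio/buffer.h>  /* the uapi definitions added below */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct iio_dmabuf_alloc_req req = { .size = 0x1000 };
    int fd = ioctl(buffer_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);

    /* Map the DMABUF's memory into the process */
    void *data = mmap(NULL, req.size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);

    /* Hand the block to the hardware */
    struct iio_dmabuf dmabuf = { .fd = fd, .bytes_used = req.size };
    ioctl(buffer_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &dmabuf);

    /* Wait until the hardware is done, then access the data */
    struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };
    ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
    /* ... read or write the samples through "data" ... */
    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
    ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

    close(fd); /* destroys the DMABUF object */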
A typical workflow for the new interface is: for block in blocks: DMABUF_ALLOC block mmap block enable buffer while !done for block in blocks: DMABUF_ENQUEUE block DMABUF_SYNC_START block process data DMABUF_SYNC_END block disable buffer for block in blocks: close block Signed-off-by: Paul Cercueil --- drivers/iio/industrialio-buffer.c | 44 +++++++++++++++++++++++++++++++ include/linux/iio/buffer_impl.h | 8 ++++++ include/uapi/linux/iio/buffer.h | 29 ++++++++++++++++++++ 3 files changed, 81 insertions(+) diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index e180728914c0..30910e6c2346 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -1585,12 +1586,55 @@ static long iio_device_buffer_getfd(struct iio_dev *indio_dev, unsigned long arg return ret; } +static int iio_buffer_enqueue_dmabuf(struct iio_buffer *buffer, + struct iio_dmabuf __user *user_buf) +{ + struct iio_dmabuf dmabuf; + + if (!buffer->access->enqueue_dmabuf) + return -EPERM; + + if (copy_from_user(&dmabuf, user_buf, sizeof(dmabuf))) + return -EFAULT; + + if (dmabuf.flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS) + return -EINVAL; + + return buffer->access->enqueue_dmabuf(buffer, &dmabuf); +} + +static int iio_buffer_alloc_dmabuf(struct iio_buffer *buffer, + struct iio_dmabuf_alloc_req __user *user_req) +{ + struct iio_dmabuf_alloc_req req; + + if (!buffer->access->alloc_dmabuf) + return -EPERM; + + if (copy_from_user(&req, user_req, sizeof(req))) + return -EFAULT; + + if (req.resv) + return -EINVAL; + + return buffer->access->alloc_dmabuf(buffer, &req); +} + static long iio_device_buffer_ioctl(struct iio_dev *indio_dev, struct file *filp, unsigned int cmd, unsigned long arg) { + struct iio_dev_buffer_pair *ib = filp->private_data; + struct iio_buffer *buffer = ib->buffer; + void __user *_arg = (void __user *)arg; + switch (cmd) { case IIO_BUFFER_GET_FD_IOCTL: return iio_device_buffer_getfd(indio_dev, arg); + case IIO_BUFFER_DMABUF_ALLOC_IOCTL: + return iio_buffer_alloc_dmabuf(buffer, _arg); + case IIO_BUFFER_DMABUF_ENQUEUE_IOCTL: + /* TODO: support non-blocking enqueue operation */ + return iio_buffer_enqueue_dmabuf(buffer, _arg); default: return IIO_IOCTL_UNHANDLED; } diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h index e2ca8ea23e19..728541bc2c63 100644 --- a/include/linux/iio/buffer_impl.h +++ b/include/linux/iio/buffer_impl.h @@ -39,6 +39,9 @@ struct iio_buffer; * device stops sampling. Calles are balanced with @enable. * @release: called when the last reference to the buffer is dropped, * should free all resources allocated by the buffer. + * @alloc_dmabuf: called from userspace via ioctl to allocate one DMABUF. + * @enqueue_dmabuf: called from userspace via ioctl to queue this DMABUF + * object to this buffer. Requires a valid DMABUF fd. 
* @modes: Supported operating modes by this buffer type * @flags: A bitmask combination of INDIO_BUFFER_FLAG_* * @@ -68,6 +71,11 @@ struct iio_buffer_access_funcs { void (*release)(struct iio_buffer *buffer); + int (*alloc_dmabuf)(struct iio_buffer *buffer, + struct iio_dmabuf_alloc_req *req); + int (*enqueue_dmabuf)(struct iio_buffer *buffer, + struct iio_dmabuf *block); + unsigned int modes; unsigned int flags; }; diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h index 13939032b3f6..e4621b926262 100644 --- a/include/uapi/linux/iio/buffer.h +++ b/include/uapi/linux/iio/buffer.h @@ -5,6 +5,35 @@ #ifndef _UAPI_IIO_BUFFER_H_ #define _UAPI_IIO_BUFFER_H_ +#include + +#define IIO_BUFFER_DMABUF_SUPPORTED_FLAGS 0x00000000 + +/** + * struct iio_dmabuf_alloc_req - Descriptor for allocating IIO DMABUFs + * @size: the size of a single DMABUF + * @resv: reserved + */ +struct iio_dmabuf_alloc_req { + __u64 size; + __u64 resv; +}; + +/** + * struct iio_dmabuf - Descriptor for a single IIO DMABUF object + * @fd: file descriptor of the DMABUF object + * @flags: one or more IIO_BUFFER_DMABUF_* flags + * @bytes_used: number of bytes used in this DMABUF for the data transfer. + * If zero, the full buffer is used. + */ +struct iio_dmabuf { + __u32 fd; + __u32 flags; + __u64 bytes_used; +}; + #define IIO_BUFFER_GET_FD_IOCTL _IOWR('i', 0x91, int) +#define IIO_BUFFER_DMABUF_ALLOC_IOCTL _IOW('i', 0x92, struct iio_dmabuf_alloc_req) +#define IIO_BUFFER_DMABUF_ENQUEUE_IOCTL _IOW('i', 0x93, struct iio_dmabuf) #endif /* _UAPI_IIO_BUFFER_H_ */ From patchwork Mon Nov 15 14:19:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619609 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B34F1C433F5 for ; Mon, 15 Nov 2021 14:20:28 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 80B8E63225 for ; Mon, 15 Nov 2021 14:20:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 80B8E63225 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9B3126EDF1; Mon, 15 Nov 2021 14:20:27 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id 774A06EDEE for ; Mon, 15 Nov 2021 14:20:25 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 08/15] iio: buffer-dma: split iio_dma_buffer_fileio_free() function Date: Mon, 15 Nov 2021 14:19:18 +0000 Message-Id: <20211115141925.60164-9-paul@crapouillou.net> In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, 
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Alexandru Ardelean A part of the logic in the iio_dma_buffer_exit() is required for the change to add mmap support to IIO buffers. This change splits the logic into a separate function, which will be re-used later. Signed-off-by: Alexandru Ardelean Signed-off-by: Paul Cercueil Signed-off-by: Alexandru Ardelean --- drivers/iio/buffer/industrialio-buffer-dma.c | 39 +++++++++++--------- 1 file changed, 22 insertions(+), 17 deletions(-) diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c index eeeed6b2e0cf..eb8cfd3af030 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -358,6 +358,27 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer) } EXPORT_SYMBOL_GPL(iio_dma_buffer_request_update); +static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue) +{ + unsigned int i; + + spin_lock_irq(&queue->list_lock); + for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { + if (!queue->fileio.blocks[i]) + continue; + queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD; + } + spin_unlock_irq(&queue->list_lock); + + for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { + if (!queue->fileio.blocks[i]) + continue; + iio_buffer_block_put(queue->fileio.blocks[i]); + queue->fileio.blocks[i] = NULL; + } + queue->fileio.active_block = NULL; +} + static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue, struct iio_dma_buffer_block *block) { @@ -681,25 +702,9 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_init); */ void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue) { - unsigned int i; - mutex_lock(&queue->lock); - spin_lock_irq(&queue->list_lock); - for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { - if (!queue->fileio.blocks[i]) - continue; - queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD; - } - spin_unlock_irq(&queue->list_lock); - - for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { - if (!queue->fileio.blocks[i]) - continue; - iio_buffer_block_put(queue->fileio.blocks[i]); - queue->fileio.blocks[i] = NULL; - } - queue->fileio.active_block = NULL; + iio_dma_buffer_fileio_free(queue); queue->ops = NULL; mutex_unlock(&queue->lock); From patchwork Mon Nov 15 14:19:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619611 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2EACC433F5 for ; Mon, 15 Nov 2021 14:20:35 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B6F9663225 for ; Mon, 15 Nov 2021 14:20:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B6F9663225 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from 
gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 338296EDF5; Mon, 15 Nov 2021 14:20:34 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id 480856EDF3 for ; Mon, 15 Nov 2021 14:20:32 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 09/15] iio: buffer-dma: Use DMABUFs instead of custom solution Date: Mon, 15 Nov 2021 14:19:19 +0000 Message-Id: <20211115141925.60164-10-paul@crapouillou.net> In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Enhance the current fileio code by using DMABUF objects instead of custom buffers. This adds more code than it removes, but: - a lot of the complexity can be dropped, e.g. custom kref and iio_buffer_block_put_atomic() are not needed anymore; - it will be much easier to introduce an API to export these DMABUF objects to userspace in a following patch. Signed-off-by: Paul Cercueil --- drivers/iio/buffer/industrialio-buffer-dma.c | 196 ++++++++++++------- include/linux/iio/buffer-dma.h | 8 +- 2 files changed, 127 insertions(+), 77 deletions(-) diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c index eb8cfd3af030..adb20434f2d2 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -90,104 +91,150 @@ * callback is called from within the custom callback. */ -static void iio_buffer_block_release(struct kref *kref) -{ - struct iio_dma_buffer_block *block = container_of(kref, - struct iio_dma_buffer_block, kref); - - WARN_ON(block->state != IIO_BLOCK_STATE_DEAD); - - dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size), - block->vaddr, block->phys_addr); - - iio_buffer_put(&block->queue->buffer); - kfree(block); -} - -static void iio_buffer_block_get(struct iio_dma_buffer_block *block) -{ - kref_get(&block->kref); -} - -static void iio_buffer_block_put(struct iio_dma_buffer_block *block) -{ - kref_put(&block->kref, iio_buffer_block_release); -} - -/* - * dma_free_coherent can sleep, hence we need to take some special care to be - * able to drop a reference from an atomic context. 
- */ -static LIST_HEAD(iio_dma_buffer_dead_blocks); -static DEFINE_SPINLOCK(iio_dma_buffer_dead_blocks_lock); - -static void iio_dma_buffer_cleanup_worker(struct work_struct *work) -{ - struct iio_dma_buffer_block *block, *_block; - LIST_HEAD(block_list); - - spin_lock_irq(&iio_dma_buffer_dead_blocks_lock); - list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list); - spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock); - - list_for_each_entry_safe(block, _block, &block_list, head) - iio_buffer_block_release(&block->kref); -} -static DECLARE_WORK(iio_dma_buffer_cleanup_work, iio_dma_buffer_cleanup_worker); - -static void iio_buffer_block_release_atomic(struct kref *kref) -{ +struct iio_buffer_dma_buf_attachment { + struct scatterlist sgl; + struct sg_table sg_table; struct iio_dma_buffer_block *block; - unsigned long flags; - - block = container_of(kref, struct iio_dma_buffer_block, kref); - - spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags); - list_add_tail(&block->head, &iio_dma_buffer_dead_blocks); - spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags); - - schedule_work(&iio_dma_buffer_cleanup_work); -} - -/* - * Version of iio_buffer_block_put() that can be called from atomic context - */ -static void iio_buffer_block_put_atomic(struct iio_dma_buffer_block *block) -{ - kref_put(&block->kref, iio_buffer_block_release_atomic); -} +}; static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf) { return container_of(buf, struct iio_dma_buffer_queue, buffer); } +static struct iio_buffer_dma_buf_attachment * +to_iio_buffer_dma_buf_attachment(struct sg_table *table) +{ + return container_of(table, struct iio_buffer_dma_buf_attachment, sg_table); +} + +static void iio_buffer_block_get(struct iio_dma_buffer_block *block) +{ + get_dma_buf(block->dmabuf); +} + +static void iio_buffer_block_put(struct iio_dma_buffer_block *block) +{ + dma_buf_put(block->dmabuf); +} + +static int iio_buffer_dma_buf_attach(struct dma_buf *dbuf, + struct dma_buf_attachment *at) +{ + at->priv = dbuf->priv; + + return 0; +} + +static struct sg_table *iio_buffer_dma_buf_map(struct dma_buf_attachment *at, + enum dma_data_direction dma_dir) +{ + struct iio_dma_buffer_block *block = at->priv; + struct iio_buffer_dma_buf_attachment *dba; + int ret; + + dba = kzalloc(sizeof(*dba), GFP_KERNEL); + if (!dba) + return ERR_PTR(-ENOMEM); + + sg_init_one(&dba->sgl, block->vaddr, PAGE_ALIGN(block->size)); + dba->sg_table.sgl = &dba->sgl; + dba->sg_table.nents = 1; + dba->block = block; + + ret = dma_map_sgtable(at->dev, &dba->sg_table, dma_dir, 0); + if (ret) { + kfree(dba); + return ERR_PTR(ret); + } + + return &dba->sg_table; +} + +static void iio_buffer_dma_buf_unmap(struct dma_buf_attachment *at, + struct sg_table *sg_table, + enum dma_data_direction dma_dir) +{ + struct iio_buffer_dma_buf_attachment *dba = + to_iio_buffer_dma_buf_attachment(sg_table); + + dma_unmap_sgtable(at->dev, &dba->sg_table, dma_dir, 0); + kfree(dba); +} + +static void iio_buffer_dma_buf_release(struct dma_buf *dbuf) +{ + struct iio_dma_buffer_block *block = dbuf->priv; + struct iio_dma_buffer_queue *queue = block->queue; + + WARN_ON(block->state != IIO_BLOCK_STATE_DEAD); + + mutex_lock(&queue->lock); + + dma_free_coherent(queue->dev, PAGE_ALIGN(block->size), + block->vaddr, block->phys_addr); + + kfree(block); + + mutex_unlock(&queue->lock); + iio_buffer_put(&queue->buffer); +} + + +static const struct dma_buf_ops iio_dma_buffer_dmabuf_ops = { + .attach = iio_buffer_dma_buf_attach, + .map_dma_buf = 
iio_buffer_dma_buf_map, + .unmap_dma_buf = iio_buffer_dma_buf_unmap, + .release = iio_buffer_dma_buf_release, +}; + static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( struct iio_dma_buffer_queue *queue, size_t size) { struct iio_dma_buffer_block *block; + DEFINE_DMA_BUF_EXPORT_INFO(einfo); + struct dma_buf *dmabuf; + int err; block = kzalloc(sizeof(*block), GFP_KERNEL); if (!block) - return NULL; + return ERR_PTR(-ENOMEM); block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size), &block->phys_addr, GFP_KERNEL); if (!block->vaddr) { - kfree(block); - return NULL; + err = -ENOMEM; + goto err_free_block; } + einfo.ops = &iio_dma_buffer_dmabuf_ops; + einfo.size = PAGE_ALIGN(size); + einfo.priv = block; + einfo.flags = O_RDWR; + + dmabuf = dma_buf_export(&einfo); + if (IS_ERR(dmabuf)) { + err = PTR_ERR(dmabuf); + goto err_free_dma; + } + + block->dmabuf = dmabuf; block->size = size; block->bytes_used = size; block->state = IIO_BLOCK_STATE_DONE; block->queue = queue; INIT_LIST_HEAD(&block->head); - kref_init(&block->kref); iio_buffer_get(&queue->buffer); return block; + +err_free_dma: + dma_free_coherent(queue->dev, PAGE_ALIGN(size), + block->vaddr, block->phys_addr); +err_free_block: + kfree(block); + return ERR_PTR(err); } static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block) @@ -224,7 +271,7 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block) _iio_dma_buffer_block_done(block); spin_unlock_irqrestore(&queue->list_lock, flags); - iio_buffer_block_put_atomic(block); + iio_buffer_block_put(block); iio_dma_buffer_queue_wake(queue); } EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done); @@ -250,7 +297,8 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue, list_del(&block->head); block->bytes_used = 0; _iio_dma_buffer_block_done(block); - iio_buffer_block_put_atomic(block); + + iio_buffer_block_put(block); } spin_unlock_irqrestore(&queue->list_lock, flags); @@ -340,11 +388,13 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer) if (!block) { block = iio_dma_buffer_alloc_block(queue, size); - if (!block) { - ret = -ENOMEM; + if (IS_ERR(block)) { + ret = PTR_ERR(block); goto out_unlock; } queue->fileio.blocks[i] = block; + + iio_buffer_block_get(block); } if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h index 09c07d5563c0..22effd6cfbb6 100644 --- a/include/linux/iio/buffer-dma.h +++ b/include/linux/iio/buffer-dma.h @@ -8,7 +8,6 @@ #define __INDUSTRIALIO_DMA_BUFFER_H__ #include -#include #include #include #include @@ -16,6 +15,7 @@ struct iio_dma_buffer_queue; struct iio_dma_buffer_ops; struct device; +struct dma_buf; /** * enum iio_block_state - State of a struct iio_dma_buffer_block @@ -41,8 +41,8 @@ enum iio_block_state { * @vaddr: Virutal address of the blocks memory * @phys_addr: Physical address of the blocks memory * @queue: Parent DMA buffer queue - * @kref: kref used to manage the lifetime of block * @state: Current state of the block + * @dmabuf: Underlying DMABUF object */ struct iio_dma_buffer_block { /* May only be accessed by the owner of the block */ @@ -58,13 +58,13 @@ struct iio_dma_buffer_block { size_t size; struct iio_dma_buffer_queue *queue; - /* Must not be accessed outside the core. */ - struct kref kref; /* * Must not be accessed outside the core. Access needs to hold * queue->list_lock if the block is not owned by the core. 
*/ enum iio_block_state state; + + struct dma_buf *dmabuf; }; /** From patchwork Mon Nov 15 14:19:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619613 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C3106C433F5 for ; Mon, 15 Nov 2021 14:20:41 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 90D1A63225 for ; Mon, 15 Nov 2021 14:20:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 90D1A63225 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DCBD76EDF6; Mon, 15 Nov 2021 14:20:40 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id 5F4226EDF6 for ; Mon, 15 Nov 2021 14:20:39 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 10/15] iio: buffer-dma: Implement new DMABUF based userspace API Date: Mon, 15 Nov 2021 14:19:20 +0000 Message-Id: <20211115141925.60164-11-paul@crapouillou.net> In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Implement the two functions iio_dma_buffer_alloc_dmabuf() and iio_dma_buffer_enqueue_dmabuf(), as well as all the necessary bits to enable userspace access to the DMABUF objects. These two functions are exported as GPL symbols so that IIO buffer implementations can support the new DMABUF based userspace API.
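As a rough illustration of the userspace side, allocating and mapping one DMABUF could look as follows. This sketch is not part of the patch; the ioctl name and the layout of struct iio_dmabuf_alloc_req are assumed from the uapi header introduced earlier in this series, and error handling is kept minimal:

        #include <stdint.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/iio/buffer.h>

        /* Allocate one DMABUF of 'size' bytes on the IIO buffer 'buf_fd'
         * and map it read/write; returns the new DMABUF fd, or -1 on error. */
        static int iio_dmabuf_alloc_and_map(int buf_fd, uint64_t size, void **map)
        {
                struct iio_dmabuf_alloc_req req = { .size = size };
                int fd;

                /* On success, the ioctl returns the fd of the new DMABUF */
                fd = ioctl(buf_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);
                if (fd < 0)
                        return -1;

                *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (*map == MAP_FAILED)
                        return -1;

                return fd;
        }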
Signed-off-by: Paul Cercueil --- drivers/iio/buffer/industrialio-buffer-dma.c | 273 ++++++++++++++++++- include/linux/iio/buffer-dma.h | 13 + 2 files changed, 279 insertions(+), 7 deletions(-) diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c index adb20434f2d2..92356ee02f30 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -15,7 +15,9 @@ #include #include #include +#include #include +#include #include /* @@ -97,6 +99,18 @@ struct iio_buffer_dma_buf_attachment { struct iio_dma_buffer_block *block; }; +struct iio_buffer_dma_fence { + struct dma_fence base; + struct iio_dma_buffer_block *block; + spinlock_t lock; +}; + +static struct iio_buffer_dma_fence * +to_iio_buffer_dma_fence(struct dma_fence *fence) +{ + return container_of(fence, struct iio_buffer_dma_fence, base); +} + static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf) { return container_of(buf, struct iio_dma_buffer_queue, buffer); @@ -118,6 +132,48 @@ static void iio_buffer_block_put(struct iio_dma_buffer_block *block) dma_buf_put(block->dmabuf); } +static const char * +iio_buffer_dma_fence_get_driver_name(struct dma_fence *fence) +{ + struct iio_buffer_dma_fence *iio_fence = to_iio_buffer_dma_fence(fence); + + return dev_name(iio_fence->block->queue->dev); +} + +static void iio_buffer_dma_fence_release(struct dma_fence *fence) +{ + struct iio_buffer_dma_fence *iio_fence = to_iio_buffer_dma_fence(fence); + + kfree(iio_fence); +} + +static const struct dma_fence_ops iio_buffer_dma_fence_ops = { + .get_driver_name = iio_buffer_dma_fence_get_driver_name, + .get_timeline_name = iio_buffer_dma_fence_get_driver_name, + .release = iio_buffer_dma_fence_release, +}; + +static struct dma_fence * +iio_dma_buffer_create_dma_fence(struct iio_dma_buffer_block *block) +{ + struct iio_buffer_dma_fence *fence; + u64 ctx; + + fence = kzalloc(sizeof(*fence), GFP_KERNEL); + if (!fence) + return ERR_PTR(-ENOMEM); + + fence->block = block; + spin_lock_init(&fence->lock); + + ctx = dma_fence_context_alloc(1); + + dma_fence_init(&fence->base, &iio_buffer_dma_fence_ops, + &fence->lock, ctx, 0); + + return &fence->base; +} + static int iio_buffer_dma_buf_attach(struct dma_buf *dbuf, struct dma_buf_attachment *at) { @@ -162,10 +218,26 @@ static void iio_buffer_dma_buf_unmap(struct dma_buf_attachment *at, kfree(dba); } +static int iio_buffer_dma_buf_mmap(struct dma_buf *dbuf, + struct vm_area_struct *vma) +{ + struct iio_dma_buffer_block *block = dbuf->priv; + struct device *dev = block->queue->dev; + + vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; + + if (vma->vm_ops->open) + vma->vm_ops->open(vma); + + return dma_mmap_pages(dev, vma, vma->vm_end - vma->vm_start, + virt_to_page(block->vaddr)); +} + static void iio_buffer_dma_buf_release(struct dma_buf *dbuf) { struct iio_dma_buffer_block *block = dbuf->priv; struct iio_dma_buffer_queue *queue = block->queue; + bool is_fileio = block->fileio; WARN_ON(block->state != IIO_BLOCK_STATE_DEAD); @@ -176,20 +248,51 @@ static void iio_buffer_dma_buf_release(struct dma_buf *dbuf) kfree(block); + queue->num_blocks--; + if (is_fileio) + queue->num_fileio_blocks--; mutex_unlock(&queue->lock); iio_buffer_put(&queue->buffer); } +static int iio_buffer_dma_buf_begin_cpu_access(struct dma_buf *dbuf, + enum dma_data_direction dma_dir) +{ + struct iio_dma_buffer_block *block = dbuf->priv; + struct device *dev = block->queue->dev; + + /* We only need to invalidate the cache for input 
buffers */ + if (block->queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) + dma_sync_single_for_cpu(dev, block->phys_addr, block->size, dma_dir); + + return 0; +} + +static int iio_buffer_dma_buf_end_cpu_access(struct dma_buf *dbuf, + enum dma_data_direction dma_dir) +{ + struct iio_dma_buffer_block *block = dbuf->priv; + struct device *dev = block->queue->dev; + + /* We only need to sync the cache for output buffers */ + if (block->queue->buffer.direction == IIO_BUFFER_DIRECTION_OUT) + dma_sync_single_for_device(dev, block->phys_addr, block->size, dma_dir); + + return 0; +} static const struct dma_buf_ops iio_dma_buffer_dmabuf_ops = { .attach = iio_buffer_dma_buf_attach, .map_dma_buf = iio_buffer_dma_buf_map, .unmap_dma_buf = iio_buffer_dma_buf_unmap, + .mmap = iio_buffer_dma_buf_mmap, .release = iio_buffer_dma_buf_release, + .begin_cpu_access = iio_buffer_dma_buf_begin_cpu_access, + .end_cpu_access = iio_buffer_dma_buf_end_cpu_access, }; static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( - struct iio_dma_buffer_queue *queue, size_t size) + struct iio_dma_buffer_queue *queue, size_t size, bool fileio) { struct iio_dma_buffer_block *block; DEFINE_DMA_BUF_EXPORT_INFO(einfo); @@ -223,10 +326,15 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( block->bytes_used = size; block->state = IIO_BLOCK_STATE_DONE; block->queue = queue; + block->fileio = fileio; INIT_LIST_HEAD(&block->head); iio_buffer_get(&queue->buffer); + queue->num_blocks++; + if (fileio) + queue->num_fileio_blocks++; + return block; err_free_dma: @@ -265,14 +373,22 @@ static void iio_dma_buffer_queue_wake(struct iio_dma_buffer_queue *queue) void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block) { struct iio_dma_buffer_queue *queue = block->queue; + struct dma_resv *resv = block->dmabuf->resv; + struct dma_fence *fence; unsigned long flags; spin_lock_irqsave(&queue->list_lock, flags); _iio_dma_buffer_block_done(block); spin_unlock_irqrestore(&queue->list_lock, flags); + fence = dma_resv_excl_fence(resv); + if (fence) + dma_fence_signal(fence); + dma_resv_unlock(resv); + iio_buffer_block_put(block); - iio_dma_buffer_queue_wake(queue); + if (queue->fileio.enabled) + iio_dma_buffer_queue_wake(queue); } EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done); @@ -298,6 +414,8 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue, block->bytes_used = 0; _iio_dma_buffer_block_done(block); + if (dma_resv_is_locked(block->dmabuf->resv)) + dma_resv_unlock(block->dmabuf->resv); iio_buffer_block_put(block); } spin_unlock_irqrestore(&queue->list_lock, flags); @@ -323,6 +441,12 @@ static bool iio_dma_block_reusable(struct iio_dma_buffer_block *block) } } +static bool iio_dma_buffer_fileio_mode(struct iio_dma_buffer_queue *queue) +{ + return queue->fileio.enabled || + queue->num_blocks == queue->num_fileio_blocks; +} + /** * iio_dma_buffer_request_update() - DMA buffer request_update callback * @buffer: The buffer which to request an update @@ -349,6 +473,12 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer) mutex_lock(&queue->lock); + queue->fileio.enabled = iio_dma_buffer_fileio_mode(queue); + + /* If DMABUFs were created, disable fileio interface */ + if (!queue->fileio.enabled) + goto out_unlock; + /* Allocations are page aligned */ if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size)) try_reuse = true; @@ -387,7 +517,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer) } if (!block) { - block = iio_dma_buffer_alloc_block(queue, size); + block = 
iio_dma_buffer_alloc_block(queue, size, true); if (IS_ERR(block)) { ret = PTR_ERR(block); goto out_unlock; @@ -444,6 +574,8 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue, block->state = IIO_BLOCK_STATE_ACTIVE; iio_buffer_block_get(block); + dma_resv_lock(block->dmabuf->resv, NULL); + ret = queue->ops->submit(queue, block); if (ret) { /* @@ -480,12 +612,18 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer, mutex_lock(&queue->lock); queue->active = true; queue->fileio.next_dequeue = 0; + queue->fileio.enabled = iio_dma_buffer_fileio_mode(queue); - for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { - block = queue->fileio.blocks[i]; + dev_dbg(queue->dev, "Buffer enabled in %s mode\n", + queue->fileio.enabled ? "fileio" : "dmabuf"); - if (block->state == IIO_BLOCK_STATE_QUEUED) - iio_dma_buffer_submit_block(queue, block); + if (queue->fileio.enabled) { + for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { + block = queue->fileio.blocks[i]; + + if (block->state == IIO_BLOCK_STATE_QUEUED) + iio_dma_buffer_submit_block(queue, block); + } } mutex_unlock(&queue->lock); @@ -507,6 +645,7 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer, struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer); mutex_lock(&queue->lock); + queue->fileio.enabled = false; queue->active = false; if (queue->ops && queue->ops->abort) @@ -565,6 +704,11 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, mutex_lock(&queue->lock); + if (!queue->fileio.enabled) { + ret = -EBUSY; + goto out_unlock; + } + if (!queue->fileio.active_block) { block = iio_dma_buffer_dequeue(queue); if (block == NULL) { @@ -681,6 +825,121 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf) } EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available); +int iio_dma_buffer_alloc_dmabuf(struct iio_buffer *buffer, + struct iio_dmabuf_alloc_req *req) +{ + struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer); + struct iio_dma_buffer_block *block; + int ret = 0; + + mutex_lock(&queue->lock); + + /* + * If the buffer is enabled and in fileio mode new blocks can't be + * allocated. + */ + if (queue->fileio.enabled) { + ret = -EBUSY; + goto out_unlock; + } + + if (!req->size || req->size > SIZE_MAX) { + ret = -EINVAL; + goto out_unlock; + } + + /* Free memory that might be in use for fileio mode */ + iio_dma_buffer_fileio_free(queue); + + block = iio_dma_buffer_alloc_block(queue, req->size, false); + if (IS_ERR(block)) { + ret = PTR_ERR(block); + goto out_unlock; + } + + ret = dma_buf_fd(block->dmabuf, O_CLOEXEC); + if (ret < 0) { + dma_buf_put(block->dmabuf); + goto out_unlock; + } + +out_unlock: + mutex_unlock(&queue->lock); + + return ret; +} +EXPORT_SYMBOL_GPL(iio_dma_buffer_alloc_dmabuf); + +int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer, + struct iio_dmabuf *iio_dmabuf) +{ + struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer); + struct iio_dma_buffer_block *dma_block; + struct dma_fence *fence; + struct dma_buf *dmabuf; + int ret = 0; + + mutex_lock(&queue->lock); + + /* If in fileio mode buffers can't be enqueued. 
*/ + if (queue->fileio.enabled) { + ret = -EBUSY; + goto out_unlock; + } + + dmabuf = dma_buf_get(iio_dmabuf->fd); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto out_unlock; + } + + if (dmabuf->ops != &iio_dma_buffer_dmabuf_ops) { + dev_err(queue->dev, "importing DMABUFs from other drivers is not yet supported.\n"); + ret = -EINVAL; + goto out_dma_buf_put; + } + + dma_block = dmabuf->priv; + + if (iio_dmabuf->bytes_used > dma_block->size) { + ret = -EINVAL; + goto out_dma_buf_put; + } + + dma_block->bytes_used = iio_dmabuf->bytes_used ?: dma_block->size; + + switch (dma_block->state) { + case IIO_BLOCK_STATE_QUEUED: + /* Nothing to do */ + goto out_unlock; + case IIO_BLOCK_STATE_DONE: + break; + default: + ret = -EBUSY; + goto out_dma_buf_put; + } + + fence = iio_dma_buffer_create_dma_fence(dma_block); + if (IS_ERR(fence)) { + ret = PTR_ERR(fence); + goto out_dma_buf_put; + } + + dma_resv_lock(dmabuf->resv, NULL); + dma_resv_add_excl_fence(dmabuf->resv, fence); + dma_resv_unlock(dmabuf->resv); + + iio_dma_buffer_enqueue(queue, dma_block); + +out_dma_buf_put: + dma_buf_put(dmabuf); +out_unlock: + mutex_unlock(&queue->lock); + + return ret; +} +EXPORT_SYMBOL_GPL(iio_dma_buffer_enqueue_dmabuf); + /** * iio_dma_buffer_set_bytes_per_datum() - DMA buffer set_bytes_per_datum callback * @buffer: Buffer to set the bytes-per-datum for diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h index 22effd6cfbb6..85e55fe35282 100644 --- a/include/linux/iio/buffer-dma.h +++ b/include/linux/iio/buffer-dma.h @@ -42,6 +42,7 @@ enum iio_block_state { * @phys_addr: Physical address of the blocks memory * @queue: Parent DMA buffer queue * @state: Current state of the block + * @fileio: True if this buffer is used for fileio mode * @dmabuf: Underlying DMABUF object */ struct iio_dma_buffer_block { @@ -64,6 +65,7 @@ struct iio_dma_buffer_block { */ enum iio_block_state state; + bool fileio; struct dma_buf *dmabuf; }; @@ -74,6 +76,7 @@ struct iio_dma_buffer_block { * @pos: Read offset in the active block * @block_size: Size of each block * @next_dequeue: index of next block that will be dequeued + * @enabled: Whether the buffer is operating in fileio mode */ struct iio_dma_buffer_queue_fileio { struct iio_dma_buffer_block *blocks[2]; @@ -82,6 +85,7 @@ struct iio_dma_buffer_queue_fileio { size_t block_size; unsigned int next_dequeue; + bool enabled; }; /** @@ -96,6 +100,8 @@ struct iio_dma_buffer_queue_fileio { * list and typically also a list of active blocks in the part that handles * the DMA controller * @active: Whether the buffer is currently active + * @num_blocks: Total number of blocks in the queue + * @num_fileio_blocks: Number of blocks used for fileio interface * @fileio: FileIO state */ struct iio_dma_buffer_queue { @@ -107,6 +113,8 @@ struct iio_dma_buffer_queue { spinlock_t list_lock; bool active; + unsigned int num_blocks; + unsigned int num_fileio_blocks; struct iio_dma_buffer_queue_fileio fileio; }; @@ -149,4 +157,9 @@ static inline size_t iio_dma_buffer_space_available(struct iio_buffer *buffer) return iio_dma_buffer_data_available(buffer); } +int iio_dma_buffer_alloc_dmabuf(struct iio_buffer *buffer, + struct iio_dmabuf_alloc_req *req); +int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer, + struct iio_dmabuf *dmabuf); + #endif From patchwork Mon Nov 15 14:19:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619615 Return-Path: X-Spam-Checker-Version: 
SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6ABA6C433EF for ; Mon, 15 Nov 2021 14:20:48 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3A4C563225 for ; Mon, 15 Nov 2021 14:20:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3A4C563225 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id AE8966EDFA; Mon, 15 Nov 2021 14:20:47 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id C62226EDFA for ; Mon, 15 Nov 2021 14:20:45 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 11/15] iio: buffer-dma: Boost performance using write-combine cache setting Date: Mon, 15 Nov 2021 14:19:21 +0000 Message-Id: <20211115141925.60164-12-paul@crapouillou.net> In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" We can be certain that the input buffers will only be accessed by userspace for reading, and output buffers will mostly be accessed by userspace for writing. Therefore, it makes more sense to use only fully cached input buffers, and to use the write-combine cache coherency setting for output buffers. This boosts performance, as the data written to the output buffers does not have to be sync'd for coherency. It will halve performance if the userspace application tries to read from the output buffer, but this should never happen. Since we don't need to sync the cache when disabling CPU access either for input buffers or output buffers, the .end_cpu_access() callback can be dropped completely. Signed-off-by: Paul Cercueil --- drivers/iio/buffer/industrialio-buffer-dma.c | 82 +++++++++++++------- 1 file changed, 54 insertions(+), 28 deletions(-) diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c index 92356ee02f30..fb39054d8c15 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -229,8 +229,33 @@ static int iio_buffer_dma_buf_mmap(struct dma_buf *dbuf, if (vma->vm_ops->open) vma->vm_ops->open(vma); - return dma_mmap_pages(dev, vma, vma->vm_end - vma->vm_start, - virt_to_page(block->vaddr)); + if (block->queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) { + /* + * With an input buffer, userspace will only read the data and + * never write. We can mmap the buffer fully cached. 
+ */ + return dma_mmap_pages(dev, vma, vma->vm_end - vma->vm_start, + virt_to_page(block->vaddr)); + } else { + /* + * With an output buffer, userspace will only write the data + * and should rarely (if never) read from it. It is better to + * use write-combine in this case. + */ + return dma_mmap_wc(dev, vma, block->vaddr, block->phys_addr, + vma->vm_end - vma->vm_start); + } +} + +static void iio_dma_buffer_free_dmamem(struct iio_dma_buffer_block *block) +{ + struct device *dev = block->queue->dev; + size_t size = PAGE_ALIGN(block->size); + + if (block->queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) + dma_free_coherent(dev, size, block->vaddr, block->phys_addr); + else + dma_free_wc(dev, size, block->vaddr, block->phys_addr); } static void iio_buffer_dma_buf_release(struct dma_buf *dbuf) @@ -243,9 +268,7 @@ static void iio_buffer_dma_buf_release(struct dma_buf *dbuf) mutex_lock(&queue->lock); - dma_free_coherent(queue->dev, PAGE_ALIGN(block->size), - block->vaddr, block->phys_addr); - + iio_dma_buffer_free_dmamem(block); kfree(block); queue->num_blocks--; @@ -268,19 +291,6 @@ static int iio_buffer_dma_buf_begin_cpu_access(struct dma_buf *dbuf, return 0; } -static int iio_buffer_dma_buf_end_cpu_access(struct dma_buf *dbuf, - enum dma_data_direction dma_dir) -{ - struct iio_dma_buffer_block *block = dbuf->priv; - struct device *dev = block->queue->dev; - - /* We only need to sync the cache for output buffers */ - if (block->queue->buffer.direction == IIO_BUFFER_DIRECTION_OUT) - dma_sync_single_for_device(dev, block->phys_addr, block->size, dma_dir); - - return 0; -} - static const struct dma_buf_ops iio_dma_buffer_dmabuf_ops = { .attach = iio_buffer_dma_buf_attach, .map_dma_buf = iio_buffer_dma_buf_map, @@ -288,9 +298,28 @@ static const struct dma_buf_ops iio_dma_buffer_dmabuf_ops = { .mmap = iio_buffer_dma_buf_mmap, .release = iio_buffer_dma_buf_release, .begin_cpu_access = iio_buffer_dma_buf_begin_cpu_access, - .end_cpu_access = iio_buffer_dma_buf_end_cpu_access, }; +static int iio_dma_buffer_alloc_dmamem(struct iio_dma_buffer_block *block) +{ + struct device *dev = block->queue->dev; + size_t size = PAGE_ALIGN(block->size); + + if (block->queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) { + block->vaddr = dma_alloc_coherent(dev, size, + &block->phys_addr, + GFP_KERNEL); + } else { + block->vaddr = dma_alloc_wc(dev, size, + &block->phys_addr, + GFP_KERNEL); + } + if (!block->vaddr) + return -ENOMEM; + + return 0; +} + static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( struct iio_dma_buffer_queue *queue, size_t size, bool fileio) { @@ -303,12 +332,12 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( if (!block) return ERR_PTR(-ENOMEM); - block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size), - &block->phys_addr, GFP_KERNEL); - if (!block->vaddr) { - err = -ENOMEM; + block->size = size; + block->queue = queue; + + err = iio_dma_buffer_alloc_dmamem(block); + if (err) goto err_free_block; - } einfo.ops = &iio_dma_buffer_dmabuf_ops; einfo.size = PAGE_ALIGN(size); @@ -322,10 +351,8 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( } block->dmabuf = dmabuf; - block->size = size; block->bytes_used = size; block->state = IIO_BLOCK_STATE_DONE; - block->queue = queue; block->fileio = fileio; INIT_LIST_HEAD(&block->head); @@ -338,8 +365,7 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block( return block; err_free_dma: - dma_free_coherent(queue->dev, PAGE_ALIGN(size), - block->vaddr, block->phys_addr); + 
iio_dma_buffer_free_dmamem(block); err_free_block: kfree(block); return ERR_PTR(err); From patchwork Mon Nov 15 14:22:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619617 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D0C3C433F5 for ; Mon, 15 Nov 2021 14:22:53 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5740261B97 for ; Mon, 15 Nov 2021 14:22:53 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 5740261B97 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 153716EE05; Mon, 15 Nov 2021 14:22:52 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id D3F7F6EE05 for ; Mon, 15 Nov 2021 14:22:50 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 12/15] iio: buffer-dmaengine: Support new DMABUF based userspace API Date: Mon, 15 Nov 2021 14:22:40 +0000 Message-Id: <20211115142243.60605-1-paul@crapouillou.net> In-Reply-To: <20211115141925.60164-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Use the functions provided by the buffer-dma core to implement the DMABUF userspace API in the buffer-dmaengine IIO buffer implementation. 
Signed-off-by: Paul Cercueil --- drivers/iio/buffer/industrialio-buffer-dmaengine.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c index 5cde8fd81c7f..57a8b2e4ba3c 100644 --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c @@ -133,6 +133,9 @@ static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = { .space_available = iio_dma_buffer_space_available, .release = iio_dmaengine_buffer_release, + .alloc_dmabuf = iio_dma_buffer_alloc_dmabuf, + .enqueue_dmabuf = iio_dma_buffer_enqueue_dmabuf, + .modes = INDIO_BUFFER_HARDWARE, .flags = INDIO_BUFFER_FLAG_FIXED_WATERMARK, }; From patchwork Mon Nov 15 14:22:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619619 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C537EC433F5 for ; Mon, 15 Nov 2021 14:23:00 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8686161B97 for ; Mon, 15 Nov 2021 14:23:00 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 8686161B97 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 58AB66E40A; Mon, 15 Nov 2021 14:22:59 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id 800BA6E40A for ; Mon, 15 Nov 2021 14:22:57 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 13/15] iio: core: Add support for cyclic buffers Date: Mon, 15 Nov 2021 14:22:41 +0000 Message-Id: <20211115142243.60605-2-paul@crapouillou.net> In-Reply-To: <20211115142243.60605-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> <20211115142243.60605-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Introduce a new flag IIO_BUFFER_DMABUF_CYCLIC in the "flags" field of the iio_dmabuf uapi structure. When set, the DMABUF enqueued with the enqueue ioctl will be endlessly repeated on the TX output, until the buffer is disabled. 
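To make the intended use concrete, a hedged userspace sketch of enqueueing a DMABUF in cyclic mode follows. It assumes the IIO_BUFFER_DMABUF_ENQUEUE_IOCTL ioctl and the struct iio_dmabuf layout (fd, flags, bytes_used) from earlier in this series; nothing here is part of the patch itself:

        #include <stdint.h>
        #include <sys/ioctl.h>
        #include <linux/iio/buffer.h>

        /* Queue 'dmabuf_fd' in cyclic mode: the DMA engine repeats this
         * block on the TX path until the IIO buffer is disabled. */
        static int iio_dmabuf_enqueue_cyclic(int buf_fd, int dmabuf_fd,
                                             uint64_t bytes_used)
        {
                struct iio_dmabuf dbuf = {
                        .fd = dmabuf_fd,
                        .flags = IIO_BUFFER_DMABUF_CYCLIC,
                        .bytes_used = bytes_used,
                };

                return ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &dbuf);
        }

A cyclic block only needs to be enqueued once; the repetition is then handled entirely by the DMA engine.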
Signed-off-by: Paul Cercueil Reviewed-by: Alexandru Ardelean --- drivers/iio/industrialio-buffer.c | 5 +++++ include/uapi/linux/iio/buffer.h | 3 ++- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index 30910e6c2346..41bc51c88002 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -1600,6 +1600,11 @@ static int iio_buffer_enqueue_dmabuf(struct iio_buffer *buffer, if (dmabuf.flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS) return -EINVAL; + /* Cyclic flag is only supported on output buffers */ + if ((dmabuf.flags & IIO_BUFFER_DMABUF_CYCLIC) && + buffer->direction != IIO_BUFFER_DIRECTION_OUT) + return -EINVAL; + return buffer->access->enqueue_dmabuf(buffer, &dmabuf); } diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h index e4621b926262..2d541d038c02 100644 --- a/include/uapi/linux/iio/buffer.h +++ b/include/uapi/linux/iio/buffer.h @@ -7,7 +7,8 @@ #include -#define IIO_BUFFER_DMABUF_SUPPORTED_FLAGS 0x00000000 +#define IIO_BUFFER_DMABUF_CYCLIC (1 << 0) +#define IIO_BUFFER_DMABUF_SUPPORTED_FLAGS 0x00000001 /** * struct iio_dmabuf_alloc_req - Descriptor for allocating IIO DMABUFs From patchwork Mon Nov 15 14:22:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619621 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99115C433F5 for ; Mon, 15 Nov 2021 14:23:05 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 610CE61882 for ; Mon, 15 Nov 2021 14:23:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 610CE61882 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 431F26E059; Mon, 15 Nov 2021 14:23:04 +0000 (UTC) Received: from aposti.net (aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4EAA76E059 for ; Mon, 15 Nov 2021 14:23:03 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 14/15] iio: buffer-dmaengine: Add support for cyclic buffers Date: Mon, 15 Nov 2021 14:22:42 +0000 Message-Id: <20211115142243.60605-3-paul@crapouillou.net> In-Reply-To: <20211115142243.60605-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> <20211115142243.60605-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Handle the IIO_BUFFER_DMABUF_CYCLIC flag to 
support cyclic buffers. Signed-off-by: Paul Cercueil Reviewed-by: Alexandru Ardelean --- drivers/iio/buffer/industrialio-buffer-dma.c | 1 + .../iio/buffer/industrialio-buffer-dmaengine.c | 15 ++++++++++++--- include/linux/iio/buffer-dma.h | 3 +++ 3 files changed, 16 insertions(+), 3 deletions(-) diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c index fb39054d8c15..6658f103ee17 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -933,6 +933,7 @@ int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer, } dma_block->bytes_used = iio_dmabuf->bytes_used ?: dma_block->size; + dma_block->cyclic = iio_dmabuf->flags & IIO_BUFFER_DMABUF_CYCLIC; switch (dma_block->state) { case IIO_BLOCK_STATE_QUEUED: diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c index 57a8b2e4ba3c..952e2160a11e 100644 --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c @@ -81,9 +81,18 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue, if (!block->bytes_used || block->bytes_used > max_size) return -EINVAL; - desc = dmaengine_prep_slave_single(dmaengine_buffer->chan, - block->phys_addr, block->bytes_used, dma_dir, - DMA_PREP_INTERRUPT); + if (block->cyclic) { + desc = dmaengine_prep_dma_cyclic(dmaengine_buffer->chan, + block->phys_addr, + block->size, + block->bytes_used, + dma_dir, 0); + } else { + desc = dmaengine_prep_slave_single(dmaengine_buffer->chan, + block->phys_addr, + block->bytes_used, dma_dir, + DMA_PREP_INTERRUPT); + } if (!desc) return -ENOMEM; diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h index 85e55fe35282..27639fdf7b54 100644 --- a/include/linux/iio/buffer-dma.h +++ b/include/linux/iio/buffer-dma.h @@ -42,6 +42,7 @@ enum iio_block_state { * @phys_addr: Physical address of the blocks memory * @queue: Parent DMA buffer queue * @state: Current state of the block + * @cyclic: True if this is a cyclic buffer * @fileio: True if this buffer is used for fileio mode * @dmabuf: Underlying DMABUF object */ @@ -65,6 +66,8 @@ struct iio_dma_buffer_block { */ enum iio_block_state state; + bool cyclic; + bool fileio; struct dma_buf *dmabuf; }; From patchwork Mon Nov 15 14:22:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Cercueil X-Patchwork-Id: 12619623 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 039FBC433F5 for ; Mon, 15 Nov 2021 14:23:12 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C07D161882 for ; Mon, 15 Nov 2021 14:23:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org C07D161882 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=crapouillou.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 066AF6E8CA; Mon, 15 Nov 2021 14:23:11 +0000 (UTC) Received: from aposti.net 
(aposti.net [89.234.176.197]) by gabe.freedesktop.org (Postfix) with ESMTPS id BA4326E8CA for ; Mon, 15 Nov 2021 14:23:09 +0000 (UTC) From: Paul Cercueil To: Jonathan Cameron Subject: [PATCH 15/15] Documentation: iio: Document high-speed DMABUF based API Date: Mon, 15 Nov 2021 14:22:43 +0000 Message-Id: <20211115142243.60605-4-paul@crapouillou.net> In-Reply-To: <20211115142243.60605-1-paul@crapouillou.net> References: <20211115141925.60164-1-paul@crapouillou.net> <20211115142243.60605-1-paul@crapouillou.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Cercueil , Michael Hennerich , linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, =?utf-8?q?Ch?= =?utf-8?q?ristian_K=C3=B6nig?= , linaro-mm-sig@lists.linaro.org, Alexandru Ardelean , linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Document the new DMABUF based API. Signed-off-by: Paul Cercueil --- Documentation/driver-api/dma-buf.rst | 2 + Documentation/iio/dmabuf_api.rst | 94 ++++++++++++++++++++++++++++ Documentation/iio/index.rst | 2 + 3 files changed, 98 insertions(+) create mode 100644 Documentation/iio/dmabuf_api.rst diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst index 2cd7db82d9fe..d3c9b58d2706 100644 --- a/Documentation/driver-api/dma-buf.rst +++ b/Documentation/driver-api/dma-buf.rst @@ -1,3 +1,5 @@ +.. _dma-buf: + Buffer Sharing and Synchronization ================================== diff --git a/Documentation/iio/dmabuf_api.rst b/Documentation/iio/dmabuf_api.rst new file mode 100644 index 000000000000..b4e120a4ef0c --- /dev/null +++ b/Documentation/iio/dmabuf_api.rst @@ -0,0 +1,94 @@ +=================================== +High-speed DMABUF interface for IIO +=================================== + +1. Overview +=========== + +The Industrial I/O subsystem supports access to buffers through a file-based +interface, with read() and write() access calls through the IIO device's dev +node. + +It additionally supports a DMABUF based interface, where the userspace +application can allocate and append DMABUF objects to the buffer's queue. + +The advantage of this DMABUF based interface vs. the fileio +interface is that it avoids an extra copy of the data between the +kernel and userspace. This is particularly useful for high-speed +devices which produce several megabytes or even gigabytes of data per +second. + +The data in this DMABUF interface is managed at the granularity of +DMABUF objects. Reducing the granularity from byte level to block level +is done to reduce the userspace-kernelspace synchronization overhead +since performing syscalls for each byte at a few Mbps is just not +feasible. + +This of course leads to a slightly increased latency. For this reason an +application can choose the size of the DMABUFs as well as how many it +allocates. E.g. two DMABUFs would be a traditional double buffering +scheme. But using a higher number might be necessary to avoid +underflow/overflow situations in the presence of scheduling latencies. + +2. User API +=========== + +``IIO_BUFFER_DMABUF_ALLOC_IOCTL(struct iio_dmabuf_alloc_req *)`` +---------------------------------------------------------------- + +Each call will allocate a new DMABUF object. 
The return value (if not +a negative errno value as error) will be the file descriptor of the new +DMABUF. + +``IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *)`` +-------------------------------------------------------- + +Place the DMABUF object into the queue, pending hardware processing. + +These two IOCTLs have to be performed on the IIO buffer's file +descriptor (either opened from the corresponding /dev/iio:deviceX, or +obtained using the `IIO_BUFFER_GET_FD_IOCTL` ioctl). + +3. Usage +======== + +For userspace to access the data stored in a block, the block must be +mapped into the process's memory. This is done by calling mmap() on the +DMABUF's file descriptor. + +Before accessing the data through the map, you must use the +DMA_BUF_IOCTL_SYNC(struct dma_buf_sync *) ioctl, with the +DMA_BUF_SYNC_START flag, to make sure that the data is available. +This call may block until the hardware is done with this block. Once +you are done reading or writing the data, you must use this ioctl again +with the DMA_BUF_SYNC_END flag, before enqueueing the DMABUF to the +kernel's queue. + +If you need to know when the hardware is done with a DMABUF, you can +poll its file descriptor for the EPOLLOUT event. + +Finally, to destroy a DMABUF object, simply call close() on its file +descriptor. + +For more information about manipulating DMABUF objects, see: :ref:`dma-buf`. + +A typical workflow for the new interface is: + + for block in blocks: + DMABUF_ALLOC block + mmap block + + enable buffer + + while !done + for block in blocks: + DMABUF_ENQUEUE block + + DMABUF_SYNC_START block + process data + DMABUF_SYNC_END block + + disable buffer + + for block in blocks: + close block diff --git a/Documentation/iio/index.rst b/Documentation/iio/index.rst index 58b7a4ebac51..9ce799fbf262 100644 --- a/Documentation/iio/index.rst +++ b/Documentation/iio/index.rst @@ -10,3 +10,5 @@ Industrial I/O iio_configfs ep93xx_adc + + dmabuf_api
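For readers who prefer C to pseudocode, the documented workflow above could translate roughly into the sketch below. This is illustrative only, not part of the series: it assumes the ioctl names and uapi struct layouts introduced in the preceding patches, a hypothetical process() callback supplied by the application, and minimal error handling; struct dma_buf_sync and DMA_BUF_IOCTL_SYNC come from the standard <linux/dma-buf.h> uapi:

        #include <stddef.h>
        #include <stdint.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/dma-buf.h>
        #include <linux/iio/buffer.h>

        #define NUM_BLOCKS 4
        #define BLOCK_SIZE (1024 * 1024)

        /* Capture loop for an input buffer 'buf_fd'; 'process' returns
         * nonzero when the application wants to stop. The IIO buffer
         * itself is assumed to be enabled/disabled elsewhere. */
        static int rx_loop(int buf_fd, int (*process)(const void *, size_t))
        {
                int fds[NUM_BLOCKS];
                void *maps[NUM_BLOCKS];
                struct dma_buf_sync sync;
                unsigned int i;
                int done = 0;

                /* DMABUF_ALLOC + mmap each block */
                for (i = 0; i < NUM_BLOCKS; i++) {
                        struct iio_dmabuf_alloc_req req = { .size = BLOCK_SIZE };

                        fds[i] = ioctl(buf_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);
                        if (fds[i] < 0)
                                return -1;

                        maps[i] = mmap(NULL, BLOCK_SIZE, PROT_READ,
                                       MAP_SHARED, fds[i], 0);
                        if (maps[i] == MAP_FAILED)
                                return -1;
                }

                while (!done) {
                        for (i = 0; i < NUM_BLOCKS; i++) {
                                struct iio_dmabuf dbuf = {
                                        .fd = fds[i],
                                        .bytes_used = BLOCK_SIZE,
                                };

                                if (ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL,
                                          &dbuf) < 0)
                                        return -1;

                                /* May block until the hardware is done */
                                sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ;
                                ioctl(fds[i], DMA_BUF_IOCTL_SYNC, &sync);

                                done = process(maps[i], BLOCK_SIZE);

                                sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
                                ioctl(fds[i], DMA_BUF_IOCTL_SYNC, &sync);
                        }
                }

                for (i = 0; i < NUM_BLOCKS; i++) {
                        munmap(maps[i], BLOCK_SIZE);
                        close(fds[i]);
                }

                return 0;
        }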