From: Paul Cercueil <paul@crapouillou.net>
To: Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal,
	Christian König, Vinod Koul, Jonathan Corbet
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	dmaengine@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linaro-mm-sig@lists.linaro.org, Nuno Sá,
	Michael Hennerich, Paul Cercueil
Subject: [PATCH v5 7/8] iio: buffer-dmaengine: Support new DMABUF based userspace API
Date: Tue, 19 Dec 2023 18:50:08 +0100
Message-ID: <20231219175009.65482-8-paul@crapouillou.net>
In-Reply-To: <20231219175009.65482-1-paul@crapouillou.net>
References: <20231219175009.65482-1-paul@crapouillou.net>

Use the functions provided by the buffer-dma core to implement the
DMABUF userspace API in the buffer-dmaengine IIO buffer implementation.

Since we want to be able to transfer an arbitrary number of bytes and
not necessarily the full DMABUF, the associated scatterlist is
converted to an array of DMA addresses + lengths, which is then passed
to dmaengine_prep_slave_dma_vec().

Signed-off-by: Paul Cercueil <paul@crapouillou.net>

---
v3: Use the new dmaengine_prep_slave_dma_array(), and adapt the code to
    work with the new functions introduced in industrialio-buffer-dma.c.

v5: - Use the new dmaengine_prep_slave_dma_vec().
    - Restrict to input buffers, since output buffers are not yet
      supported by IIO buffers.
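For context, the dma_vec API used below is added by the dmaengine patch
earlier in this series; the authoritative definitions live there, and
the following is only a rough restatement of their shape for readers of
this patch:

    /* One contiguous DMA segment: a bus address plus a length. */
    struct dma_vec {
            dma_addr_t addr;
            size_t len;
    };

    static inline struct dma_async_tx_descriptor *
    dmaengine_prep_slave_dma_vec(struct dma_chan *chan,
                                 const struct dma_vec *vecs, size_t nents,
                                 enum dma_transfer_direction dir,
                                 unsigned long flags)
    {
            if (!chan || !chan->device ||
                !chan->device->device_prep_slave_dma_vec)
                    return NULL;

            return chan->device->device_prep_slave_dma_vec(chan, vecs,
                                                           nents, dir,
                                                           flags);
    }

This is what lets the driver below hand over a partial view of the
DMABUF's scatterlist (bytes_used bytes, possibly less than the full
buffer) as a flat array of address/length pairs.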
---
 .../buffer/industrialio-buffer-dmaengine.c | 52 ++++++++++++++++---
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 5f85ba38e6f6..825d76a24a67 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -64,15 +64,51 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
 	struct dmaengine_buffer *dmaengine_buffer =
 		iio_buffer_to_dmaengine_buffer(&queue->buffer);
 	struct dma_async_tx_descriptor *desc;
+	unsigned int i, nents;
+	struct scatterlist *sgl;
+	struct dma_vec *vecs;
+	size_t max_size;
 	dma_cookie_t cookie;
+	size_t len_total;
 
-	block->bytes_used = min(block->size, dmaengine_buffer->max_size);
-	block->bytes_used = round_down(block->bytes_used,
-				       dmaengine_buffer->align);
+	if (queue->buffer.direction != IIO_BUFFER_DIRECTION_IN) {
+		/* We do not yet support output buffers. */
+		return -EINVAL;
+	}
 
-	desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
-		block->phys_addr, block->bytes_used, DMA_DEV_TO_MEM,
-		DMA_PREP_INTERRUPT);
+	if (block->sg_table) {
+		sgl = block->sg_table->sgl;
+		nents = sg_nents_for_len(sgl, block->bytes_used);
+
+		vecs = kmalloc_array(nents, sizeof(*vecs), GFP_KERNEL);
+		if (!vecs)
+			return -ENOMEM;
+
+		len_total = block->bytes_used;
+
+		for (i = 0; i < nents; i++) {
+			vecs[i].addr = sg_dma_address(sgl);
+			vecs[i].len = min(sg_dma_len(sgl), len_total);
+			len_total -= vecs[i].len;
+
+			sgl = sg_next(sgl);
+		}
+
+		desc = dmaengine_prep_slave_dma_vec(dmaengine_buffer->chan,
+						    vecs, nents, DMA_DEV_TO_MEM,
+						    DMA_PREP_INTERRUPT);
+		kfree(vecs);
+	} else {
+		max_size = min(block->size, dmaengine_buffer->max_size);
+		max_size = round_down(max_size, dmaengine_buffer->align);
+		block->bytes_used = max_size;
+
+		desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
+						   block->phys_addr,
+						   block->bytes_used,
+						   DMA_DEV_TO_MEM,
+						   DMA_PREP_INTERRUPT);
+	}
 	if (!desc)
 		return -ENOMEM;
 
@@ -120,6 +156,10 @@ static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
 	.data_available = iio_dma_buffer_data_available,
 	.release = iio_dmaengine_buffer_release,
 
+	.enqueue_dmabuf = iio_dma_buffer_enqueue_dmabuf,
+	.attach_dmabuf = iio_dma_buffer_attach_dmabuf,
+	.detach_dmabuf = iio_dma_buffer_detach_dmabuf,
+
 	.modes = INDIO_BUFFER_HARDWARE,
 	.flags = INDIO_BUFFER_FLAG_FIXED_WATERMARK,
 };
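For reviewers who want the end-to-end picture, here is a rough
userspace sketch of how a DMABUF reaches this submit path. The struct
iio_dmabuf layout and ioctl numbers are restated from the core DMABUF
uAPI patch earlier in this series, and the helper name
(iio_enqueue_dmabuf_once) is invented for the example; treat this as an
illustration of the intended flow, not a tested program:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    /* Restated from the uAPI introduced earlier in this series. */
    struct iio_dmabuf {
            uint32_t fd;
            uint32_t flags;
            uint64_t bytes_used;
    };

    #define IIO_BUFFER_DMABUF_ATTACH_IOCTL   _IOW('i', 0x92, int)
    #define IIO_BUFFER_DMABUF_DETACH_IOCTL   _IOW('i', 0x93, int)
    #define IIO_BUFFER_DMABUF_ENQUEUE_IOCTL  _IOW('i', 0x94, struct iio_dmabuf)

    /*
     * buf_fd is the IIO buffer character device; dmabuf_fd is a DMABUF
     * exported by some other driver (e.g. udmabuf). Attach it once,
     * then enqueue 'len' bytes for capture. 'len' may be smaller than
     * the DMABUF itself, which is exactly the case the
     * scatterlist-to-dma_vec conversion above handles.
     */
    static int iio_enqueue_dmabuf_once(int buf_fd, int dmabuf_fd,
                                       uint64_t len)
    {
            struct iio_dmabuf dmabuf = {
                    .fd = dmabuf_fd,
                    .bytes_used = len,
            };
            int ret;

            ret = ioctl(buf_fd, IIO_BUFFER_DMABUF_ATTACH_IOCTL, &dmabuf_fd);
            if (ret)
                    return ret;

            /*
             * The enqueue is asynchronous: the transfer lands in
             * iio_dmaengine_buffer_submit_block() above with
             * block->sg_table set, and completion is signalled through
             * the DMABUF's fence.
             */
            return ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &dmabuf);
    }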