From patchwork Mon Jun 24 19:49:07 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 11014053
From: John Stultz
To: lkml
Subject: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
Date: Mon, 24 Jun 2019 19:49:07 +0000
Message-Id: <20190624194908.121273-5-john.stultz@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190624194908.121273-1-john.stultz@linaro.org>
References: <20190624194908.121273-1-john.stultz@linaro.org>
Cc: Yudongbin, Sudipto Paul, Xu YiPing, Vincent Donnefort,
 "Chenfeng (puck)", dri-devel@lists.freedesktop.org, Chenbo Feng,
 Alistair Strachan, Liam Mark, Christoph Hellwig, "Xiaqing (A)",
 "Andrew F. Davis", Pratik Patel, butao

This adds a CMA heap, which allows userspace to allocate a dma-buf of
contiguous memory out of a CMA region.

This code is an evolution of the Android ION implementation, so thanks
to its original author and maintainers: Benjamin Gaignard, Laura Abbott,
and others!

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Pratik Patel
Cc: Brian Starkey
Cc: Vincent Donnefort
Cc: Sudipto Paul
Cc: Andrew F. Davis
Cc: Xu YiPing
Cc: "Chenfeng (puck)"
Cc: butao
Cc: "Xiaqing (A)"
Cc: Yudongbin
Cc: Christoph Hellwig
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard
Signed-off-by: John Stultz
Change-Id: Ic2b0c5dfc0dbaff5245bd1c50170c64b06c73051
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
v3:
* Switch to inline function for to_cma_heap()
* Minor cleanups suggested by Brian
* Fold in new registration style from Andrew
* Folded in changes from Andrew to use simplified page list from the
  heap helpers
v4:
* Use the fd_flags when creating dmabuf fd (Suggested by Benjamin)
* Use precalculated pagecount (Suggested by Andrew)
v6:
* Changed variable names to improve clarity, as suggested by Brian
---
 drivers/dma-buf/heaps/Kconfig    |   8 ++
 drivers/dma-buf/heaps/Makefile   |   1 +
 drivers/dma-buf/heaps/cma_heap.c | 169 +++++++++++++++++++++++++++++++
 3 files changed, 178 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 205052744169..a5eef06c4226 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_HEAPS_CMA
+	bool "DMA-BUF CMA Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable the dma-buf CMA heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA). If your system has these
+	  regions, you should say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index d1808eca2581..6e54cdec3da0 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y += heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
new file mode 100644
index 000000000000..d2b10878b60b
--- /dev/null
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF CMA heap exporter
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: for ST-Ericsson.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "heap-helpers.h"
+
+struct cma_heap {
+	struct dma_heap *heap;
+	struct cma *cma;
+};
+
+static void cma_heap_free(struct heap_helper_buffer *buffer)
+{
+	struct cma_heap *cma_heap = dma_heap_get_data(buffer->heap_buffer.heap);
+	unsigned long nr_pages = buffer->pagecount;
+	struct page *cma_pages = buffer->priv_virt;
+
+	/* free page list */
+	kfree(buffer->pages);
+	/* release memory */
+	cma_release(cma_heap->cma, cma_pages, nr_pages);
+	kfree(buffer);
+}
+
+/* dmabuf heap CMA operations functions */
+static int cma_heap_allocate(struct dma_heap *heap,
+			     unsigned long len,
+			     unsigned long fd_flags,
+			     unsigned long heap_flags)
+{
+	struct cma_heap *cma_heap = dma_heap_get_data(heap);
+	struct heap_helper_buffer *helper_buffer;
+	struct page *cma_pages;
+	size_t size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
+	helper_buffer->heap_buffer.flags = heap_flags;
+	helper_buffer->heap_buffer.heap = heap;
+	helper_buffer->heap_buffer.size = len;
+
+	cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+	if (!cma_pages)
+		goto free_buf;
+
+	if (PageHighMem(cma_pages)) {
+		unsigned long nr_clear_pages = nr_pages;
+		struct page *page = cma_pages;
+
+		while (nr_clear_pages > 0) {
+			void *vaddr = kmap_atomic(page);
+
+			memset(vaddr, 0, PAGE_SIZE);
+			kunmap_atomic(vaddr);
+			page++;
+			nr_clear_pages--;
+		}
+	} else {
+		memset(page_address(cma_pages), 0, size);
+	}
+
+	helper_buffer->pagecount = nr_pages;
+	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+					     sizeof(*helper_buffer->pages),
+					     GFP_KERNEL);
+	if (!helper_buffer->pages) {
+		ret = -ENOMEM;
+		goto free_cma;
+	}
+
+	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+		helper_buffer->pages[pg] = &cma_pages[pg];
+		if (!helper_buffer->pages[pg])
+			goto free_pages;
+	}
+
+	/* create the dmabuf */
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = &helper_buffer->heap_buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto free_pages;
+	}
+
+	helper_buffer->heap_buffer.dmabuf = dmabuf;
+	helper_buffer->priv_virt = cma_pages;
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+free_pages:
+	kfree(helper_buffer->pages);
+free_cma:
+	cma_release(cma_heap->cma, cma_pages, nr_pages);
+free_buf:
+	kfree(helper_buffer);
+	return ret;
+}
+
+static struct dma_heap_ops cma_heap_ops = {
+	.allocate = cma_heap_allocate,
+};
+
+static int __add_cma_heap(struct cma *cma, void *data)
+{
+	struct cma_heap *cma_heap;
+	struct dma_heap_export_info exp_info;
+
+	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+	if (!cma_heap)
+		return -ENOMEM;
+	cma_heap->cma = cma;
+
+	exp_info.name = cma_get_name(cma);
+	exp_info.ops = &cma_heap_ops;
+	exp_info.priv = cma_heap;
+
+	cma_heap->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(cma_heap->heap)) {
+		int ret = PTR_ERR(cma_heap->heap);
+
+		kfree(cma_heap);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int add_cma_heaps(void)
+{
+	cma_for_each_area(__add_cma_heap, NULL);
+	return 0;
+}
+device_initcall(add_cma_heaps);
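
For anyone wanting to poke at the series, here is a minimal userspace sketch
of allocating a buffer from a registered CMA heap. It is written against the
mainline dma-heap UAPI names (<linux/dma-heap.h>, struct
dma_heap_allocation_data, DMA_HEAP_IOCTL_ALLOC), and the node under
/dev/dma_heap/ follows the CMA region name, so both the ioctl spelling in this
particular series and the "linux,cma" path below are assumptions rather than
part of this patch.

/*
 * Hedged example, not part of this patch: allocate a contiguous
 * dma-buf from a CMA heap from userspace. Assumes the mainline
 * dma-heap UAPI; the heap node name depends on the CMA region
 * name on the running system.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 4 * 4096,			/* four pages */
		.fd_flags = O_RDWR | O_CLOEXEC,		/* flags for the new dma-buf fd */
	};
	int heap_fd, ret;
	void *mem;

	/* each registered CMA region shows up as /dev/dma_heap/<region name> */
	heap_fd = open("/dev/dma_heap/linux,cma", O_RDWR | O_CLOEXEC);
	if (heap_fd < 0) {
		perror("open heap");
		return 1;
	}

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
	if (ret < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		close(heap_fd);
		return 1;
	}

	/* alloc.fd is now a dma-buf backed by physically contiguous pages */
	mem = mmap(NULL, alloc.len, PROT_READ | PROT_WRITE, MAP_SHARED,
		   alloc.fd, 0);
	if (mem != MAP_FAILED) {
		memset(mem, 0, alloc.len);
		munmap(mem, alloc.len);
	}

	close(alloc.fd);
	close(heap_fd);
	return 0;
}

The returned fd behaves like any other dma-buf: it can be mmap()ed as above
(the DMA_BUF_IOCTL_SYNC bracket around CPU access is omitted for brevity) or
handed to a driver for device mapping.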