From patchwork Tue Dec 15 22:05:21 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 11975849
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
    Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
    Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia,
    Simon Ser, James Jones, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH 3/3] dma-buf: heaps: Rework heap allocation hooks to return struct dma_buf instead of fd
Date: Tue, 15 Dec 2020 22:05:21 +0000
Message-Id: <20201215220521.118318-3-john.stultz@linaro.org>
In-Reply-To: <20201215220521.118318-1-john.stultz@linaro.org>
References: <20201215220521.118318-1-john.stultz@linaro.org>
X-Mailing-List: linux-media@vger.kernel.org

Every heap needs to create a dmabuf and then export it to an fd via
dma_buf_fd(), so to consolidate things a bit, have the heaps just
return a struct dma_buf * and let the top-level dma_heap_buffer_alloc()
call handle creating the fd via dma_buf_fd().

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/dma-heap.c          | 14 +++++++++++++-
 drivers/dma-buf/heaps/cma_heap.c    | 22 +++++++---------------
 drivers/dma-buf/heaps/system_heap.c | 21 +++++++--------------
 include/linux/dma-heap.h            | 12 ++++++------
 4 files changed, 33 insertions(+), 36 deletions(-)

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index afd22c9dbdcf..6b5db954569f 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -52,6 +52,9 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
 				 unsigned int fd_flags,
 				 unsigned int heap_flags)
 {
+	struct dma_buf *dmabuf;
+	int fd;
+
 	/*
 	 * Allocations from all heaps have to begin
 	 * and end on page boundaries.
@@ -60,7 +63,16 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
 	if (!len)
 		return -EINVAL;
 
-	return heap->ops->allocate(heap, len, fd_flags, heap_flags);
+	dmabuf = heap->ops->allocate(heap, len, fd_flags, heap_flags);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	fd = dma_buf_fd(dmabuf, fd_flags);
+	if (fd < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+	}
+	return fd;
 }
 
 static int dma_heap_open(struct inode *inode, struct file *file)
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 877353e8014f..0c7d6430605f 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -268,10 +268,10 @@ static const struct dma_buf_ops cma_heap_buf_ops = {
 	.release = cma_heap_dma_buf_release,
 };
 
-static int cma_heap_allocate(struct dma_heap *heap,
-			     unsigned long len,
-			     unsigned long fd_flags,
-			     unsigned long heap_flags)
+static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
+					 unsigned long len,
+					 unsigned long fd_flags,
+					 unsigned long heap_flags)
 {
 	struct cma_heap *cma_heap = dma_heap_get_drvdata(heap);
 	struct cma_heap_buffer *buffer;
@@ -286,7 +286,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
 	if (!buffer)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	INIT_LIST_HEAD(&buffer->attachments);
 	mutex_init(&buffer->lock);
@@ -345,15 +345,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 		ret = PTR_ERR(dmabuf);
 		goto free_pages;
 	}
-
-	ret = dma_buf_fd(dmabuf, fd_flags);
-	if (ret < 0) {
-		dma_buf_put(dmabuf);
-		/* just return, as put will call release and that will free */
-		return ret;
-	}
-
-	return ret;
+	return dmabuf;
 
 free_pages:
 	kfree(buffer->pages);
@@ -362,7 +354,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 free_buffer:
 	kfree(buffer);
 
-	return ret;
+	return ERR_PTR(ret);
 }
 
 static const struct dma_heap_ops cma_heap_ops = {
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 2321c91891f6..7b154424aeb3 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -332,10 +332,10 @@ static struct page *alloc_largest_available(unsigned long size,
 	return NULL;
 }
 
-static int system_heap_allocate(struct dma_heap *heap,
-				unsigned long len,
-				unsigned long fd_flags,
-				unsigned long heap_flags)
+static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
+					    unsigned long len,
+					    unsigned long fd_flags,
+					    unsigned long heap_flags)
 {
 	struct system_heap_buffer *buffer;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
@@ -350,7 +350,7 @@ static int system_heap_allocate(struct dma_heap *heap,
 
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
 	if (!buffer)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	INIT_LIST_HEAD(&buffer->attachments);
 	mutex_init(&buffer->lock);
@@ -400,14 +400,7 @@ static int system_heap_allocate(struct dma_heap *heap,
 		ret = PTR_ERR(dmabuf);
 		goto free_pages;
 	}
-
-	ret = dma_buf_fd(dmabuf, fd_flags);
-	if (ret < 0) {
-		dma_buf_put(dmabuf);
-		/* just return, as put will call release and that will free */
-		return ret;
-	}
-	return ret;
+	return dmabuf;
 
 free_pages:
 	for_each_sgtable_sg(table, sg, i) {
@@ -421,7 +414,7 @@ static int system_heap_allocate(struct dma_heap *heap,
 		__free_pages(page, compound_order(page));
 	kfree(buffer);
 
-	return ret;
+	return ERR_PTR(ret);
 }
 
 static const struct dma_heap_ops system_heap_ops = {
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 454e354d1ffb..5bc5c946af58 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -16,15 +16,15 @@ struct dma_heap;
 
 /**
  * struct dma_heap_ops - ops to operate on a given heap
- * @allocate:		allocate dmabuf and return fd
+ * @allocate:		allocate dmabuf and return struct dma_buf ptr
  *
- * allocate returns dmabuf fd on success, -errno on error.
+ * allocate returns dmabuf on success, ERR_PTR(-errno) on error.
  */
 struct dma_heap_ops {
-	int (*allocate)(struct dma_heap *heap,
-			unsigned long len,
-			unsigned long fd_flags,
-			unsigned long heap_flags);
+	struct dma_buf *(*allocate)(struct dma_heap *heap,
+				    unsigned long len,
+				    unsigned long fd_flags,
+				    unsigned long heap_flags);
 };
 
 /**
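
For context, here is a rough sketch (not part of this patch) of what a heap's
allocate hook looks like against the reworked interface: it only builds and
exports the dma_buf, returning it or an ERR_PTR, and never calls dma_buf_fd()
itself. All the "example_heap" names below are hypothetical, and the
dma_buf_ops plus real backing-memory management are elided.

#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/slab.h>

/*
 * Hypothetical sketch only -- "example_heap" is not a real heap. A real
 * implementation must also fill in dma_buf_ops (attach, map_dma_buf,
 * unmap_dma_buf, release, ...) and wire up real backing memory; that is
 * all elided here to show just the reworked allocate contract.
 */
struct example_heap_buffer {
	struct dma_heap *heap;
	size_t len;
	/* sg_table / page list etc. would live here */
};

static const struct dma_buf_ops example_heap_buf_ops;

static struct dma_buf *example_heap_allocate(struct dma_heap *heap,
					     unsigned long len,
					     unsigned long fd_flags,
					     unsigned long heap_flags)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
	struct example_heap_buffer *buffer;
	struct dma_buf *dmabuf;

	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
	if (!buffer)
		return ERR_PTR(-ENOMEM);	/* errors go back as ERR_PTRs */

	buffer->heap = heap;
	buffer->len = len;

	exp_info.ops = &example_heap_buf_ops;
	exp_info.size = len;
	exp_info.flags = fd_flags;
	exp_info.priv = buffer;

	dmabuf = dma_buf_export(&exp_info);
	if (IS_ERR(dmabuf)) {
		kfree(buffer);
		return dmabuf;			/* already an ERR_PTR */
	}

	/* No dma_buf_fd() here: dma_heap_buffer_alloc() now does that. */
	return dmabuf;
}

/* Registered via dma_heap_add() at init time (not shown). */
static const struct dma_heap_ops example_heap_ops = {
	.allocate = example_heap_allocate,
};

Userspace is unaffected either way: DMA_HEAP_IOCTL_ALLOC on a
/dev/dma_heap/<name> device still hands back an fd, it is just produced in
one place in the core now.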