From patchwork Fri May 14 20:11:36 2021
From: Paul Cercueil <paul@crapouillou.net>
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Daniel Vetter
Cc: Paul Cercueil, linux-mips@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    Christoph Hellwig, od@opendingux.net
Subject: [PATCH v3 1/3] drm: Add support for GEM buffers backed by non-coherent memory
Date: Fri, 14 May 2021 21:11:36 +0100
Message-Id: <20210514201138.162230-2-paul@crapouillou.net>
In-Reply-To: <20210514201138.162230-1-paul@crapouillou.net>
References: <20210514201138.162230-1-paul@crapouillou.net>

Having GEM buffers backed by non-coherent memory is interesting in the
particular case where it is faster to render to a non-coherent buffer
and then sync the data cache, than to render to a write-combine buffer,
and (by extension) much faster than using a shadow buffer. This is true
for instance on some Ingenic SoCs, where even simple blits (e.g. memcpy)
are about three times faster using this method.

Add a 'map_noncoherent' flag to the drm_gem_cma_object structure, which
can be set by drivers when they create the dumb buffer. Since this
really only applies to software rendering, disable this flag as soon as
the CMA objects are exported via PRIME.

v3: New patch. Now uses a simple 'map_noncoherent' flag to control how
the objects are mapped, and uses the new dma_mmap_pages() function.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 41 +++++++++++++++++++++++++---
 include/drm/drm_gem_cma_helper.h     |  7 ++++-
 2 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 7942cf05cd93..81a31bcf7d68 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -115,8 +115,15 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
 	if (IS_ERR(cma_obj))
 		return cma_obj;
 
-	cma_obj->vaddr = dma_alloc_wc(drm->dev, size, &cma_obj->paddr,
-				      GFP_KERNEL | __GFP_NOWARN);
+	if (cma_obj->map_noncoherent) {
+		cma_obj->vaddr = dma_alloc_noncoherent(drm->dev, size,
+						       &cma_obj->paddr,
+						       DMA_TO_DEVICE,
+						       GFP_KERNEL | __GFP_NOWARN);
+	} else {
+		cma_obj->vaddr = dma_alloc_wc(drm->dev, size, &cma_obj->paddr,
+					      GFP_KERNEL | __GFP_NOWARN);
+	}
 	if (!cma_obj->vaddr) {
 		drm_dbg(drm, "failed to allocate buffer with size %zu\n",
 			size);
@@ -499,8 +506,13 @@ int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 
 	cma_obj = to_drm_gem_cma_obj(obj);
 
-	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
-			  cma_obj->paddr, vma->vm_end - vma->vm_start);
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+	if (!cma_obj->map_noncoherent)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+	ret = dma_mmap_pages(cma_obj->base.dev->dev,
+			     vma, vma->vm_end - vma->vm_start,
+			     virt_to_page(cma_obj->vaddr));
 	if (ret)
 		drm_gem_vm_close(vma);
@@ -556,3 +568,24 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
 	return obj;
 }
 EXPORT_SYMBOL(drm_gem_cma_prime_import_sg_table_vmap);
+
+/**
+ * drm_gem_cma_prime_mmap - PRIME mmap function for CMA GEM drivers
+ * @obj: GEM object
+ * @vma: Virtual address range
+ *
+ * Carbon copy of drm_gem_prime_mmap, but the 'map_noncoherent' flag is
+ * disabled to ensure that the exported buffers have the expected cache
+ * coherency.
+ */
+int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
+			   struct vm_area_struct *vma)
+{
+	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
+
+	/* Use standard cache settings for PRIME-exported GEM buffers */
+	cma_obj->map_noncoherent = false;
+
+	return drm_gem_prime_mmap(obj, vma);
+}
+EXPORT_SYMBOL(drm_gem_cma_prime_mmap);
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 0a9711caa3e8..b597e00fd5f6 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -16,6 +16,7 @@ struct drm_mode_create_dumb;
  *       more than one entry but they are guaranteed to have contiguous
  *       DMA addresses.
  * @vaddr: kernel virtual address of the backing memory
+ * @map_noncoherent: if true, the GEM object is backed by non-coherent memory
  */
 struct drm_gem_cma_object {
 	struct drm_gem_object base;
@@ -24,6 +25,8 @@ struct drm_gem_cma_object {
 
 	/* For objects with DMA memory allocated by GEM CMA */
 	void *vaddr;
+
+	bool map_noncoherent;
 };
 
 #define to_drm_gem_cma_obj(gem_obj) \
@@ -119,7 +122,7 @@ int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd, \
 	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle, \
 	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
-	.gem_prime_mmap		= drm_gem_prime_mmap
+	.gem_prime_mmap		= drm_gem_cma_prime_mmap
 
 /**
  * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
@@ -181,5 +184,7 @@ struct drm_gem_object *
 drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *drm,
 				       struct dma_buf_attachment *attach,
 				       struct sg_table *sgt);
+int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
+			   struct vm_area_struct *vma);
 
 #endif /* __DRM_GEM_CMA_HELPER_H__ */
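
For illustration, here is a minimal driver-side sketch of how the new
flag could be consumed (this is not part of the patch; the "foo" names
are hypothetical). It assumes the driver implements the standard
struct drm_driver .gem_create_object hook, which the CMA helpers invoke
before allocating the backing memory, so 'map_noncoherent' can be set
ahead of the dma_alloc_noncoherent()/dma_alloc_wc() decision in
drm_gem_cma_create():

#include <linux/slab.h>

#include <drm/drm_gem_cma_helper.h>

/*
 * Hypothetical example: allocate the CMA object in the driver and
 * request non-coherent (cached) backing memory. The CMA helpers
 * treat a NULL return from this hook as an allocation failure.
 */
static struct drm_gem_object *
foo_gem_create_object(struct drm_device *drm, size_t size)
{
	struct drm_gem_cma_object *obj;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return NULL;

	/* Render to cached memory; the data cache is synced afterwards. */
	obj->map_noncoherent = true;

	return &obj->base;
}

The hook would then be wired up as .gem_create_object in the driver's
struct drm_driver, next to DRM_GEM_CMA_DRIVER_OPS. Note that memory
from dma_alloc_noncoherent() needs explicit cache maintenance (e.g.
dma_sync_single_for_device()) between CPU rendering and scanout; as the
commit message notes, rendering plus cache sync is still about three
times faster than write-combine rendering on some Ingenic SoCs.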