
[v2,3/5] drm: Add and export function drm_gem_cma_mmap_noncoherent

Message ID 20210307202835.253907-4-paul@crapouillou.net (mailing list archive)
State Superseded
Series: Add option to mmap GEM buffers cached

Commit Message

Paul Cercueil March 7, 2021, 8:28 p.m. UTC
This function can be used by drivers that need to mmap dumb buffers
created with non-coherent backing memory.

v2: Use dma_to_phys() since cma_obj->paddr isn't a phys_addr_t but a
dma_addr_t.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 67 +++++++++++++++++++++++++---
 include/drm/drm_gem_cma_helper.h     |  1 +
 2 files changed, 63 insertions(+), 5 deletions(-)
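
As a rough sketch of how a driver might consume the new export (names other than the drm_gem_cma_* helpers are illustrative; the series also wires the helper up internally through drm_gem_cma_noncoherent_funcs), a driver-side GEM object funcs table would only differ from the coherent default in its .mmap hook:

static const struct drm_gem_object_funcs foo_gem_noncoherent_funcs = {
	/* Reuse the standard CMA helpers for everything but mmap. */
	.free		= drm_gem_cma_free_object,
	.print_info	= drm_gem_cma_print_info,
	.get_sg_table	= drm_gem_cma_get_sg_table,
	.vmap		= drm_gem_cma_vmap,
	/* Map the buffer into userspace with non-coherent (cached) attributes. */
	.mmap		= drm_gem_cma_mmap_noncoherent,
	.vm_ops		= &drm_gem_cma_vm_ops,
};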

Comments

Christoph Hellwig March 11, 2021, 12:26 p.m. UTC | #1
> +int drm_gem_cma_mmap_noncoherent(struct drm_gem_object *obj,
> +				 struct vm_area_struct *vma)
> +{
> +	struct drm_gem_cma_object *cma_obj;
> +	unsigned long pfn;
> +	int ret;
> +
> +	/*
> +	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> +	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> +	 * the whole buffer.
> +	 */
> +	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> +	vma->vm_flags &= ~VM_PFNMAP;
> +	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> +
> +	cma_obj = to_drm_gem_cma_obj(obj);
> +
> +	pfn = PHYS_PFN(dma_to_phys(cma_obj->base.dev->dev, cma_obj->paddr));
> +
> +	ret = remap_pfn_range(vma, vma->vm_start, pfn,
> +			      vma->vm_end - vma->vm_start,
> +			      vma->vm_page_prot);

dma_to_phys must not be used by drivers.

I have a proper helper for this waiting for users:

http://git.infradead.org/users/hch/misc.git/commitdiff/96a546e7229ec53aadbdb7936d1e5e6cb5958952

If you can confirm the helper works for you I can try to still sneak
it to Linus for 5.12 to ease the merge pain.
Paul Cercueil March 11, 2021, 12:32 p.m. UTC | #2
Hi Christoph,

On Thu, Mar 11, 2021 at 12:26 PM, Christoph Hellwig <hch@infradead.org> wrote:
>>  +int drm_gem_cma_mmap_noncoherent(struct drm_gem_object *obj,
>>  +				 struct vm_area_struct *vma)
>>  +{
>>  +	struct drm_gem_cma_object *cma_obj;
>>  +	unsigned long pfn;
>>  +	int ret;
>>  +
>>  +	/*
>>  +	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>>  +	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>>  +	 * the whole buffer.
>>  +	 */
>>  +	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
>>  +	vma->vm_flags &= ~VM_PFNMAP;
>>  +	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>>  +
>>  +	cma_obj = to_drm_gem_cma_obj(obj);
>>  +
>>  +	pfn = PHYS_PFN(dma_to_phys(cma_obj->base.dev->dev, cma_obj->paddr));
>>  +
>>  +	ret = remap_pfn_range(vma, vma->vm_start, pfn,
>>  +			      vma->vm_end - vma->vm_start,
>>  +			      vma->vm_page_prot);
> 
> dma_to_phys must not be used by drivers.
> 
> I have a proper helper for this waiting for users:
> 
> http://git.infradead.org/users/hch/misc.git/commitdiff/96a546e7229ec53aadbdb7936d1e5e6cb5958952
> 
> If you can confirm the helper works for you I can try to still sneak
> it to Linus for 5.12 to ease the merge pain.

I can try. How do I get a page pointer from a dma_addr_t?

-Paul
Christoph Hellwig March 11, 2021, 12:36 p.m. UTC | #3
On Thu, Mar 11, 2021 at 12:32:27PM +0000, Paul Cercueil wrote:
> > dma_to_phys must not be used by drivers.
> > 
> > I have a proper helper for this waiting for users:
> > 
> > http://git.infradead.org/users/hch/misc.git/commitdiff/96a546e7229ec53aadbdb7936d1e5e6cb5958952
> > 
> > If you can confirm the helper works for you I can try to still sneak
> > it to Linus for 5.12 to ease the merge pain.
> 
> I can try. How do I get a page pointer from a dma_addr_t?

You don't - you get it from using virt_to_page on the pointer returned
from dma_alloc_noncoherent.  That being said, to keep the API sane I
should probably add a wrapper that does that for you.
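
Put together, a minimal sketch of that flow, assuming the buffer comes from dma_alloc_noncoherent() and that dma_mmap_pages() is the proposed helper linked above (dev, size and vma are placeholders for whatever the caller has at hand):

/* Requires <linux/dma-mapping.h>. */
static int sketch_mmap_noncoherent(struct device *dev, size_t size,
				   struct vm_area_struct *vma)
{
	dma_addr_t dma_handle;
	void *vaddr;

	/* dma_alloc_noncoherent() returns a kernel linear-map address. */
	vaddr = dma_alloc_noncoherent(dev, size, &dma_handle,
				      DMA_BIDIRECTIONAL, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/*
	 * A linear-map address converts to its first struct page with
	 * virt_to_page(); dma_mmap_pages() then maps the buffer into the VMA.
	 */
	return dma_mmap_pages(dev, vma, vma->vm_end - vma->vm_start,
			      virt_to_page(vaddr));
}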
Paul Cercueil March 11, 2021, 4:12 p.m. UTC | #4
On Thu, Mar 11, 2021 at 12:36 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Thu, Mar 11, 2021 at 12:32:27PM +0000, Paul Cercueil wrote:
>>  > dma_to_phys must not be used by drivers.
>>  >
>>  > I have a proper helper for this waiting for users:
>>  >
>>  > http://git.infradead.org/users/hch/misc.git/commitdiff/96a546e7229ec53aadbdb7936d1e5e6cb5958952
>>  >
>>  > If you can confirm the helper works for you I can try to still sneak
>>  > it to Linus for 5.12 to ease the merge pain.
>> 
>>  I can try. How do I get a page pointer from a dma_addr_t?
> 
> You don't - you get it from using virt_to_page on the pointer returned
> from dma_alloc_noncoherent.  That being said, to keep the API sane I
> should probably add a wrapper that does that for you.

I tested using:

ret = dma_mmap_pages(cma_obj->base.dev->dev,
                     vma, vma->vm_end - vma->vm_start,
                     virt_to_page(cma_obj->vaddr));

It works fine.

I think I can use remap_pfn_range() for now, and switch to your new API 
once it's available in drm-misc-next.

Cheers,
-Paul
Christoph Hellwig March 12, 2021, 4:36 p.m. UTC | #5
On Thu, Mar 11, 2021 at 04:12:55PM +0000, Paul Cercueil wrote:
> ret = dma_mmap_pages(cma_obj->base.dev->dev,
>                     vma, vma->vm_end - vma->vm_start,
>                     virt_to_page(cma_obj->vaddr));
> 
> It works fine.
> 
> I think I can use remap_pfn_range() for now, and switch to your new API once
> it's available in drm-misc-next.

No, drivers must not use dma_to_phys, and they also must not include
dma-direct.h.
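
For reference, one way the pfn derivation in the patch below could avoid dma_to_phys() entirely, assuming cma_obj->vaddr is the kernel linear-map address returned by dma_alloc_noncoherent() (as in the dma_mmap_pages() test above):

	/* Derive the pfn from the kernel virtual address, not the dma_addr_t. */
	pfn = page_to_pfn(virt_to_page(cma_obj->vaddr));

	ret = remap_pfn_range(vma, vma->vm_start, pfn,
			      vma->vm_end - vma->vm_start,
			      vma->vm_page_prot);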

Patch

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d100c5f9c140..e39b0464e19d 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -10,6 +10,7 @@ 
  */
 
 #include <linux/dma-buf.h>
+#include <linux/dma-direct.h>
 #include <linux/dma-mapping.h>
 #include <linux/export.h>
 #include <linux/mm.h>
@@ -42,10 +43,20 @@  static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.vm_ops = &drm_gem_cma_vm_ops,
 };
 
+static const struct drm_gem_object_funcs drm_gem_cma_noncoherent_funcs = {
+	.free = drm_gem_cma_free_object,
+	.print_info = drm_gem_cma_print_info,
+	.get_sg_table = drm_gem_cma_get_sg_table,
+	.vmap = drm_gem_cma_vmap,
+	.mmap = drm_gem_cma_mmap_noncoherent,
+	.vm_ops = &drm_gem_cma_vm_ops,
+};
+
 /**
  * __drm_gem_cma_create - Create a GEM CMA object without allocating memory
  * @drm: DRM device
  * @size: size of the object to allocate
+ * @noncoherent: if true, the buffer will use non-coherent backing memory
  *
  * This function creates and initializes a GEM CMA object of the given size,
  * but doesn't allocate any memory to back the object.
@@ -55,7 +66,7 @@  static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
  * error code on failure.
  */
 static struct drm_gem_cma_object *
-__drm_gem_cma_create(struct drm_device *drm, size_t size)
+__drm_gem_cma_create(struct drm_device *drm, size_t size, bool noncoherent)
 {
 	struct drm_gem_cma_object *cma_obj;
 	struct drm_gem_object *gem_obj;
@@ -68,8 +79,12 @@  __drm_gem_cma_create(struct drm_device *drm, size_t size)
 	if (!gem_obj)
 		return ERR_PTR(-ENOMEM);
 
-	if (!gem_obj->funcs)
-		gem_obj->funcs = &drm_gem_cma_default_funcs;
+	if (!gem_obj->funcs) {
+		if (noncoherent)
+			gem_obj->funcs = &drm_gem_cma_noncoherent_funcs;
+		else
+			gem_obj->funcs = &drm_gem_cma_default_funcs;
+	}
 
 	cma_obj = container_of(gem_obj, struct drm_gem_cma_object, base);
 
@@ -100,7 +115,7 @@  drm_gem_cma_create_with_cache_param(struct drm_device *drm,
 
 	size = round_up(size, PAGE_SIZE);
 
-	cma_obj = __drm_gem_cma_create(drm, size);
+	cma_obj = __drm_gem_cma_create(drm, size, noncoherent);
 	if (IS_ERR(cma_obj))
 		return cma_obj;
 
@@ -503,7 +518,7 @@  drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 		return ERR_PTR(-EINVAL);
 
 	/* Create a CMA GEM buffer. */
-	cma_obj = __drm_gem_cma_create(dev, attach->dmabuf->size);
+	cma_obj = __drm_gem_cma_create(dev, attach->dmabuf->size, false);
 	if (IS_ERR(cma_obj))
 		return ERR_CAST(cma_obj);
 
@@ -579,6 +594,48 @@  int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
 
+/**
+ * drm_gem_cma_mmap_noncoherent - memory-map a CMA GEM object with
+ *     non-coherent cache attribute
+ * @obj: GEM object
+ * @vma: VMA for the area to be mapped
+ *
+ * Just like drm_gem_cma_mmap, but for a GEM object backed by non-coherent
+ * memory.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_cma_mmap_noncoherent(struct drm_gem_object *obj,
+				 struct vm_area_struct *vma)
+{
+	struct drm_gem_cma_object *cma_obj;
+	unsigned long pfn;
+	int ret;
+
+	/*
+	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+	 * the whole buffer.
+	 */
+	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+	cma_obj = to_drm_gem_cma_obj(obj);
+
+	pfn = PHYS_PFN(dma_to_phys(cma_obj->base.dev->dev, cma_obj->paddr));
+
+	ret = remap_pfn_range(vma, vma->vm_start, pfn,
+			      vma->vm_end - vma->vm_start,
+			      vma->vm_page_prot);
+	if (ret)
+		drm_gem_vm_close(vma);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_cma_mmap_noncoherent);
+
 /**
  * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
  *	scatter/gather table and get the virtual address of the buffer
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 6b44e7492a63..6a3f7e1312cc 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -107,6 +107,7 @@  drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+int drm_gem_cma_mmap_noncoherent(struct drm_gem_object *obj, struct vm_area_struct *vma);
 
 /**
  * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations