
[v2,4/4] drm/udl: Remove struct udl_gem_object and functions

Message ID 20191106104722.19700-5-tzimmermann@suse.de (mailing list archive)
State New, archived
Series drm/udl: Convert to SHMEM

Commit Message

Thomas Zimmermann Nov. 6, 2019, 10:47 a.m. UTC
Simply removes all the obsolete GEM code from udl. No functional
changes.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/udl/Makefile     |   2 +-
 drivers/gpu/drm/udl/udl_dmabuf.c | 254 -------------------------------
 drivers/gpu/drm/udl/udl_drv.h    |  29 ----
 drivers/gpu/drm/udl/udl_gem.c    | 206 -------------------------
 4 files changed, 1 insertion(+), 490 deletions(-)
 delete mode 100644 drivers/gpu/drm/udl/udl_dmabuf.c

Comments

Noralf Trønnes Nov. 6, 2019, 11:48 a.m. UTC | #1
On 06.11.2019 11.47, Thomas Zimmermann wrote:
> Simply removes all the obsolete GEM code from udl. No functional
> changes.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/udl/Makefile     |   2 +-
>  drivers/gpu/drm/udl/udl_dmabuf.c | 254 -------------------------------
>  drivers/gpu/drm/udl/udl_drv.h    |  29 ----
>  drivers/gpu/drm/udl/udl_gem.c    | 206 -------------------------
>  4 files changed, 1 insertion(+), 490 deletions(-)
>  delete mode 100644 drivers/gpu/drm/udl/udl_dmabuf.c
> 

<snip>

> -int udl_gem_vmap(struct udl_gem_object *obj)
> -{
> -	int page_count = obj->base.size / PAGE_SIZE;
> -	int ret;
> -
> -	if (obj->base.import_attach) {
> -		obj->vmapping = dma_buf_vmap(obj->base.import_attach->dmabuf);
> -		if (!obj->vmapping)
> -			return -ENOMEM;
> -		return 0;
> -	}
> -
> -	ret = udl_gem_get_pages(obj);
> -	if (ret)
> -		return ret;
> -
> -	obj->vmapping = vmap(obj->pages, page_count, 0, PAGE_KERNEL);

This differs from the shmem helper vmap:

	shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
			    VM_MAP, pgprot_writecombine(PAGE_KERNEL));

Boris added the WC in be7d9f05c53e:

 drm/gem_shmem: Use a writecombine mapping for ->vaddr

 Right now, the BO is mapped as a cached region when ->vmap() is called
 and the underlying object is not a dmabuf.
 Doing that makes cache management a bit more complicated (you'd need
 to call dma_map/unmap_sg() on the ->sgt field everytime the BO is about
 to be passed to the GPU/CPU), so let's map the BO with writecombine
 attributes instead (as done in most drivers).

I don't know what the implications of this are, just pointing out a
difference.

Noralf.

> -	if (!obj->vmapping)
> -		return -ENOMEM;
> -	return 0;
> -}
Thomas Zimmermann Nov. 6, 2019, 12:30 p.m. UTC | #2
Hi

On 06.11.19 at 12:48, Noralf Trønnes wrote:
> 
> 
> On 06.11.2019 11.47, Thomas Zimmermann wrote:
>> Simply removes all the obsolete GEM code from udl. No functional
>> changes.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>  drivers/gpu/drm/udl/Makefile     |   2 +-
>>  drivers/gpu/drm/udl/udl_dmabuf.c | 254 -------------------------------
>>  drivers/gpu/drm/udl/udl_drv.h    |  29 ----
>>  drivers/gpu/drm/udl/udl_gem.c    | 206 -------------------------
>>  4 files changed, 1 insertion(+), 490 deletions(-)
>>  delete mode 100644 drivers/gpu/drm/udl/udl_dmabuf.c
>>
> 
> <snip>
> 
>> -int udl_gem_vmap(struct udl_gem_object *obj)
>> -{
>> -	int page_count = obj->base.size / PAGE_SIZE;
>> -	int ret;
>> -
>> -	if (obj->base.import_attach) {
>> -		obj->vmapping = dma_buf_vmap(obj->base.import_attach->dmabuf);
>> -		if (!obj->vmapping)
>> -			return -ENOMEM;
>> -		return 0;
>> -	}
>> -
>> -	ret = udl_gem_get_pages(obj);
>> -	if (ret)
>> -		return ret;
>> -
>> -	obj->vmapping = vmap(obj->pages, page_count, 0, PAGE_KERNEL);
> 
> This differs from the shmem helper vmap:
> 
> 	shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> 			    VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> 
> Boris added the WC in be7d9f05c53e:
> 
>  drm/gem_shmem: Use a writecombine mapping for ->vaddr
> 
>  Right now, the BO is mapped as a cached region when ->vmap() is called
>  and the underlying object is not a dmabuf.
>  Doing that makes cache management a bit more complicated (you'd need
>  to call dma_map/unmap_sg() on the ->sgt field everytime the BO is about
>  to be passed to the GPU/CPU), so let's map the BO with writecombine
>  attributes instead (as done in most drivers).
> 
> I don't know what the implications of this are, just pointing out a
> difference.

Thanks! Switching to SHMEM disables caching on these pages. The logic
around dma_map/unmap_sg() is the same in udl and generic prime functions
from what I can tell.

Actually, I've never seen any difference in display performance when
modifying these settings. (DisplayLink is always slow.) I'd like to
throw away the caching optimization and instead tell userspace to
allocate a shadow buffer if necessary.

Best regards
Thomas

> Noralf.
> 
>> -	if (!obj->vmapping)
>> -		return -ENOMEM;
>> -	return 0;
>> -}
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
Gerd Hoffmann Nov. 6, 2019, 12:56 p.m. UTC | #3
On Wed, Nov 06, 2019 at 11:47:22AM +0100, Thomas Zimmermann wrote:
> Simply removes all the obsolete GEM code from udl. No functional
> changes.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Acked-by: Gerd Hoffmann <kraxel@redhat.com>

> ---
>  drivers/gpu/drm/udl/Makefile     |   2 +-
>  drivers/gpu/drm/udl/udl_dmabuf.c | 254 -------------------------------
>  drivers/gpu/drm/udl/udl_drv.h    |  29 ----
>  drivers/gpu/drm/udl/udl_gem.c    | 206 -------------------------
>  4 files changed, 1 insertion(+), 490 deletions(-)
>  delete mode 100644 drivers/gpu/drm/udl/udl_dmabuf.c
> 
> <snip>
Thomas Zimmermann Nov. 7, 2019, 9:37 a.m. UTC | #4
Hi Noralf

On 06.11.19 at 12:48, Noralf Trønnes wrote:
> 
> 
> On 06.11.2019 11.47, Thomas Zimmermann wrote:
>> Simply removes all the obsolete GEM code from udl. No functional
>> changes.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>  drivers/gpu/drm/udl/Makefile     |   2 +-
>>  drivers/gpu/drm/udl/udl_dmabuf.c | 254 -------------------------------
>>  drivers/gpu/drm/udl/udl_drv.h    |  29 ----
>>  drivers/gpu/drm/udl/udl_gem.c    | 206 -------------------------
>>  4 files changed, 1 insertion(+), 490 deletions(-)
>>  delete mode 100644 drivers/gpu/drm/udl/udl_dmabuf.c
>>
> 
> <snip>
> 
>> -int udl_gem_vmap(struct udl_gem_object *obj)
>> -{
>> -	int page_count = obj->base.size / PAGE_SIZE;
>> -	int ret;
>> -
>> -	if (obj->base.import_attach) {
>> -		obj->vmapping = dma_buf_vmap(obj->base.import_attach->dmabuf);
>> -		if (!obj->vmapping)
>> -			return -ENOMEM;
>> -		return 0;
>> -	}
>> -
>> -	ret = udl_gem_get_pages(obj);
>> -	if (ret)
>> -		return ret;
>> -
>> -	obj->vmapping = vmap(obj->pages, page_count, 0, PAGE_KERNEL);
> 
> This differs from the shmem helper vmap:
> 
> 	shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> 			    VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> 
> Boris added the WC in be7d9f05c53e:
> 
>  drm/gem_shmem: Use a writecombine mapping for ->vaddr
> 
>  Right now, the BO is mapped as a cached region when ->vmap() is called
>  and the underlying object is not a dmabuf.
>  Doing that makes cache management a bit more complicated (you'd need
>  to call dma_map/unmap_sg() on the ->sgt field everytime the BO is about
>  to be passed to the GPU/CPU), so let's map the BO with writecombine
>  attributes instead (as done in most drivers).
> 
> I don't know what the implications of this are, just pointing out a
> difference.

After some testing, I added an udl vmap function that enables caching on
the pages. I think I see a performance improvement for full-screen updates.
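A driver-level vmap override of that kind could look roughly like the following. This is a hedged sketch only, pieced together from the removed udl_gem_vmap() and the shmem helper fields quoted above; the actual follow-up patch may differ, and the function name is assumed:

```c
/* Hypothetical: map the shmem pages cached (PAGE_KERNEL) instead of the
 * helper's write-combined default, to speed up CPU reads/writes. */
static void *udl_gem_object_vmap(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	int ret;

	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		return ERR_PTR(ret);

	/* Same call as drm_gem_shmem_vmap(), minus pgprot_writecombine(). */
	return vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
}
```

The trade-off is the one Boris's commit message describes: a cached mapping makes CPU access fast but shifts cache-management responsibility onto the driver.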

Best regards
Thomas

> 
> Noralf.
> 
>> -	if (!obj->vmapping)
>> -		return -ENOMEM;
>> -	return 0;
>> -}
> 

Patch

diff --git a/drivers/gpu/drm/udl/Makefile b/drivers/gpu/drm/udl/Makefile
index e5bb6f757e11..9c42820ae33d 100644
--- a/drivers/gpu/drm/udl/Makefile
+++ b/drivers/gpu/drm/udl/Makefile
@@ -1,4 +1,4 @@ 
 # SPDX-License-Identifier: GPL-2.0-only
-udl-y := udl_drv.o udl_modeset.o udl_connector.o udl_encoder.o udl_main.o udl_fb.o udl_transfer.o udl_gem.o udl_dmabuf.o
+udl-y := udl_drv.o udl_modeset.o udl_connector.o udl_encoder.o udl_main.o udl_fb.o udl_transfer.o udl_gem.o
 
 obj-$(CONFIG_DRM_UDL) := udl.o
diff --git a/drivers/gpu/drm/udl/udl_dmabuf.c b/drivers/gpu/drm/udl/udl_dmabuf.c
deleted file mode 100644
index b1c1ee64de59..000000000000
--- a/drivers/gpu/drm/udl/udl_dmabuf.c
+++ /dev/null
@@ -1,254 +0,0 @@ 
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * udl_dmabuf.c
- *
- * Copyright (c) 2014 The Chromium OS Authors
- */
-
-#include <linux/shmem_fs.h>
-#include <linux/dma-buf.h>
-
-#include <drm/drm_prime.h>
-
-#include "udl_drv.h"
-
-struct udl_drm_dmabuf_attachment {
-	struct sg_table sgt;
-	enum dma_data_direction dir;
-	bool is_mapped;
-};
-
-static int udl_attach_dma_buf(struct dma_buf *dmabuf,
-			      struct dma_buf_attachment *attach)
-{
-	struct udl_drm_dmabuf_attachment *udl_attach;
-
-	DRM_DEBUG_PRIME("[DEV:%s] size:%zd\n", dev_name(attach->dev),
-			attach->dmabuf->size);
-
-	udl_attach = kzalloc(sizeof(*udl_attach), GFP_KERNEL);
-	if (!udl_attach)
-		return -ENOMEM;
-
-	udl_attach->dir = DMA_NONE;
-	attach->priv = udl_attach;
-
-	return 0;
-}
-
-static void udl_detach_dma_buf(struct dma_buf *dmabuf,
-			       struct dma_buf_attachment *attach)
-{
-	struct udl_drm_dmabuf_attachment *udl_attach = attach->priv;
-	struct sg_table *sgt;
-
-	if (!udl_attach)
-		return;
-
-	DRM_DEBUG_PRIME("[DEV:%s] size:%zd\n", dev_name(attach->dev),
-			attach->dmabuf->size);
-
-	sgt = &udl_attach->sgt;
-
-	if (udl_attach->dir != DMA_NONE)
-		dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
-				udl_attach->dir);
-
-	sg_free_table(sgt);
-	kfree(udl_attach);
-	attach->priv = NULL;
-}
-
-static struct sg_table *udl_map_dma_buf(struct dma_buf_attachment *attach,
-					enum dma_data_direction dir)
-{
-	struct udl_drm_dmabuf_attachment *udl_attach = attach->priv;
-	struct udl_gem_object *obj = to_udl_bo(attach->dmabuf->priv);
-	struct drm_device *dev = obj->base.dev;
-	struct udl_device *udl = dev->dev_private;
-	struct scatterlist *rd, *wr;
-	struct sg_table *sgt = NULL;
-	unsigned int i;
-	int page_count;
-	int nents, ret;
-
-	DRM_DEBUG_PRIME("[DEV:%s] size:%zd dir=%d\n", dev_name(attach->dev),
-			attach->dmabuf->size, dir);
-
-	/* just return current sgt if already requested. */
-	if (udl_attach->dir == dir && udl_attach->is_mapped)
-		return &udl_attach->sgt;
-
-	if (!obj->pages) {
-		ret = udl_gem_get_pages(obj);
-		if (ret) {
-			DRM_ERROR("failed to map pages.\n");
-			return ERR_PTR(ret);
-		}
-	}
-
-	page_count = obj->base.size / PAGE_SIZE;
-	obj->sg = drm_prime_pages_to_sg(obj->pages, page_count);
-	if (IS_ERR(obj->sg)) {
-		DRM_ERROR("failed to allocate sgt.\n");
-		return ERR_CAST(obj->sg);
-	}
-
-	sgt = &udl_attach->sgt;
-
-	ret = sg_alloc_table(sgt, obj->sg->orig_nents, GFP_KERNEL);
-	if (ret) {
-		DRM_ERROR("failed to alloc sgt.\n");
-		return ERR_PTR(-ENOMEM);
-	}
-
-	mutex_lock(&udl->gem_lock);
-
-	rd = obj->sg->sgl;
-	wr = sgt->sgl;
-	for (i = 0; i < sgt->orig_nents; ++i) {
-		sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
-		rd = sg_next(rd);
-		wr = sg_next(wr);
-	}
-
-	if (dir != DMA_NONE) {
-		nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
-		if (!nents) {
-			DRM_ERROR("failed to map sgl with iommu.\n");
-			sg_free_table(sgt);
-			sgt = ERR_PTR(-EIO);
-			goto err_unlock;
-		}
-	}
-
-	udl_attach->is_mapped = true;
-	udl_attach->dir = dir;
-	attach->priv = udl_attach;
-
-err_unlock:
-	mutex_unlock(&udl->gem_lock);
-	return sgt;
-}
-
-static void udl_unmap_dma_buf(struct dma_buf_attachment *attach,
-			      struct sg_table *sgt,
-			      enum dma_data_direction dir)
-{
-	/* Nothing to do. */
-	DRM_DEBUG_PRIME("[DEV:%s] size:%zd dir:%d\n", dev_name(attach->dev),
-			attach->dmabuf->size, dir);
-}
-
-static void *udl_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num)
-{
-	/* TODO */
-
-	return NULL;
-}
-
-static void udl_dmabuf_kunmap(struct dma_buf *dma_buf,
-			      unsigned long page_num, void *addr)
-{
-	/* TODO */
-}
-
-static int udl_dmabuf_mmap(struct dma_buf *dma_buf,
-			   struct vm_area_struct *vma)
-{
-	/* TODO */
-
-	return -EINVAL;
-}
-
-static const struct dma_buf_ops udl_dmabuf_ops = {
-	.attach			= udl_attach_dma_buf,
-	.detach			= udl_detach_dma_buf,
-	.map_dma_buf		= udl_map_dma_buf,
-	.unmap_dma_buf		= udl_unmap_dma_buf,
-	.map			= udl_dmabuf_kmap,
-	.unmap			= udl_dmabuf_kunmap,
-	.mmap			= udl_dmabuf_mmap,
-	.release		= drm_gem_dmabuf_release,
-};
-
-struct dma_buf *udl_gem_prime_export(struct drm_gem_object *obj, int flags)
-{
-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-	exp_info.ops = &udl_dmabuf_ops;
-	exp_info.size = obj->size;
-	exp_info.flags = flags;
-	exp_info.priv = obj;
-
-	return drm_gem_dmabuf_export(obj->dev, &exp_info);
-}
-
-static int udl_prime_create(struct drm_device *dev,
-			    size_t size,
-			    struct sg_table *sg,
-			    struct udl_gem_object **obj_p)
-{
-	struct udl_gem_object *obj;
-	int npages;
-
-	npages = size / PAGE_SIZE;
-
-	*obj_p = NULL;
-	obj = udl_gem_alloc_object(dev, npages * PAGE_SIZE);
-	if (!obj)
-		return -ENOMEM;
-
-	obj->sg = sg;
-	obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (obj->pages == NULL) {
-		DRM_ERROR("obj pages is NULL %d\n", npages);
-		return -ENOMEM;
-	}
-
-	drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
-
-	*obj_p = obj;
-	return 0;
-}
-
-struct drm_gem_object *udl_gem_prime_import(struct drm_device *dev,
-				struct dma_buf *dma_buf)
-{
-	struct dma_buf_attachment *attach;
-	struct sg_table *sg;
-	struct udl_gem_object *uobj;
-	int ret;
-
-	/* need to attach */
-	get_device(dev->dev);
-	attach = dma_buf_attach(dma_buf, dev->dev);
-	if (IS_ERR(attach)) {
-		put_device(dev->dev);
-		return ERR_CAST(attach);
-	}
-
-	get_dma_buf(dma_buf);
-
-	sg = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
-	if (IS_ERR(sg)) {
-		ret = PTR_ERR(sg);
-		goto fail_detach;
-	}
-
-	ret = udl_prime_create(dev, dma_buf->size, sg, &uobj);
-	if (ret)
-		goto fail_unmap;
-
-	uobj->base.import_attach = attach;
-
-	return &uobj->base;
-
-fail_unmap:
-	dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
-fail_detach:
-	dma_buf_detach(dma_buf, attach);
-	dma_buf_put(dma_buf);
-	put_device(dev->dev);
-	return ERR_PTR(ret);
-}
diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
index 630e64abc986..987d99ae2dfa 100644
--- a/drivers/gpu/drm/udl/udl_drv.h
+++ b/drivers/gpu/drm/udl/udl_drv.h
@@ -73,18 +73,8 @@ struct udl_device {
 
 #define to_udl(x) container_of(x, struct udl_device, drm)
 
-struct udl_gem_object {
-	struct drm_gem_object base;
-	struct page **pages;
-	void *vmapping;
-	struct sg_table *sg;
-};
-
-#define to_udl_bo(x) container_of(x, struct udl_gem_object, base)
-
 struct udl_framebuffer {
 	struct drm_framebuffer base;
-	struct udl_gem_object *obj;
 	struct drm_gem_shmem_object *shmem;
 	bool active_16; /* active on the 16-bit channel */
 };
@@ -120,27 +110,8 @@ int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
 		     u32 byte_offset, u32 device_byte_offset, u32 byte_width,
 		     int *ident_ptr, int *sent_ptr);
 
-int udl_dumb_create(struct drm_file *file_priv,
-		    struct drm_device *dev,
-		    struct drm_mode_create_dumb *args);
-int udl_gem_mmap(struct drm_file *file_priv, struct drm_device *dev,
-		 uint32_t handle, uint64_t *offset);
-
 struct drm_gem_object *udl_driver_gem_create_object(struct drm_device *dev,
 						    size_t size);
-void udl_gem_free_object(struct drm_gem_object *gem_obj);
-struct udl_gem_object *udl_gem_alloc_object(struct drm_device *dev,
-					    size_t size);
-struct dma_buf *udl_gem_prime_export(struct drm_gem_object *obj, int flags);
-struct drm_gem_object *udl_gem_prime_import(struct drm_device *dev,
-				struct dma_buf *dma_buf);
-
-int udl_gem_get_pages(struct udl_gem_object *obj);
-void udl_gem_put_pages(struct udl_gem_object *obj);
-int udl_gem_vmap(struct udl_gem_object *obj);
-void udl_gem_vunmap(struct udl_gem_object *obj);
-int udl_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-vm_fault_t udl_gem_fault(struct vm_fault *vmf);
 
 int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
 		      int width, int height);
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index 3554fffbf1db..ac82e06463bd 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -80,209 +80,3 @@ struct drm_gem_object *udl_driver_gem_create_object(struct drm_device *dev,
 
 	return obj;
 }
-
-struct udl_gem_object *udl_gem_alloc_object(struct drm_device *dev,
-					    size_t size)
-{
-	struct drm_gem_object *obj;
-
-	obj = dev->driver->gem_create_object(dev, size);
-	if (obj == NULL)
-		return NULL;
-
-	if (drm_gem_object_init(dev, obj, size) != 0) {
-		kfree(obj);
-		return NULL;
-	}
-
-	return to_udl_bo(obj);
-}
-
-static int
-udl_gem_create(struct drm_file *file,
-	       struct drm_device *dev,
-	       uint64_t size,
-	       uint32_t *handle_p)
-{
-	struct udl_gem_object *obj;
-	int ret;
-	u32 handle;
-
-	size = roundup(size, PAGE_SIZE);
-
-	obj = udl_gem_alloc_object(dev, size);
-	if (obj == NULL)
-		return -ENOMEM;
-
-	ret = drm_gem_handle_create(file, &obj->base, &handle);
-	if (ret) {
-		drm_gem_object_release(&obj->base);
-		kfree(obj);
-		return ret;
-	}
-
-	drm_gem_object_put_unlocked(&obj->base);
-	*handle_p = handle;
-	return 0;
-}
-
-int udl_dumb_create(struct drm_file *file,
-		    struct drm_device *dev,
-		    struct drm_mode_create_dumb *args)
-{
-	args->pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
-	args->size = args->pitch * args->height;
-	return udl_gem_create(file, dev,
-			      args->size, &args->handle);
-}
-
-int udl_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_gem_object *obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	obj = vma->vm_private_data;
-
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
-
-	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-	if (obj->import_attach)
-		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-
-	return ret;
-}
-
-vm_fault_t udl_gem_fault(struct vm_fault *vmf)
-{
-	struct vm_area_struct *vma = vmf->vma;
-	struct udl_gem_object *obj = to_udl_bo(vma->vm_private_data);
-	struct page *page;
-	unsigned int page_offset;
-
-	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
-
-	if (!obj->pages)
-		return VM_FAULT_SIGBUS;
-
-	page = obj->pages[page_offset];
-	return vmf_insert_page(vma, vmf->address, page);
-}
-
-int udl_gem_get_pages(struct udl_gem_object *obj)
-{
-	struct page **pages;
-
-	if (obj->pages)
-		return 0;
-
-	pages = drm_gem_get_pages(&obj->base);
-	if (IS_ERR(pages))
-		return PTR_ERR(pages);
-
-	obj->pages = pages;
-
-	return 0;
-}
-
-void udl_gem_put_pages(struct udl_gem_object *obj)
-{
-	if (obj->base.import_attach) {
-		kvfree(obj->pages);
-		obj->pages = NULL;
-		return;
-	}
-
-	drm_gem_put_pages(&obj->base, obj->pages, false, false);
-	obj->pages = NULL;
-}
-
-int udl_gem_vmap(struct udl_gem_object *obj)
-{
-	int page_count = obj->base.size / PAGE_SIZE;
-	int ret;
-
-	if (obj->base.import_attach) {
-		obj->vmapping = dma_buf_vmap(obj->base.import_attach->dmabuf);
-		if (!obj->vmapping)
-			return -ENOMEM;
-		return 0;
-	}
-
-	ret = udl_gem_get_pages(obj);
-	if (ret)
-		return ret;
-
-	obj->vmapping = vmap(obj->pages, page_count, 0, PAGE_KERNEL);
-	if (!obj->vmapping)
-		return -ENOMEM;
-	return 0;
-}
-
-void udl_gem_vunmap(struct udl_gem_object *obj)
-{
-	if (obj->base.import_attach) {
-		dma_buf_vunmap(obj->base.import_attach->dmabuf, obj->vmapping);
-		return;
-	}
-
-	vunmap(obj->vmapping);
-
-	udl_gem_put_pages(obj);
-}
-
-void udl_gem_free_object(struct drm_gem_object *gem_obj)
-{
-	struct udl_gem_object *obj = to_udl_bo(gem_obj);
-
-	if (obj->vmapping)
-		udl_gem_vunmap(obj);
-
-	if (gem_obj->import_attach) {
-		drm_prime_gem_destroy(gem_obj, obj->sg);
-		put_device(gem_obj->dev->dev);
-	}
-
-	if (obj->pages)
-		udl_gem_put_pages(obj);
-
-	drm_gem_free_mmap_offset(gem_obj);
-}
-
-/* the dumb interface doesn't work with the GEM straight MMAP
-   interface, it expects to do MMAP on the drm fd, like normal */
-int udl_gem_mmap(struct drm_file *file, struct drm_device *dev,
-		 uint32_t handle, uint64_t *offset)
-{
-	struct udl_gem_object *gobj;
-	struct drm_gem_object *obj;
-	struct udl_device *udl = to_udl(dev);
-	int ret = 0;
-
-	mutex_lock(&udl->gem_lock);
-	obj = drm_gem_object_lookup(file, handle);
-	if (obj == NULL) {
-		ret = -ENOENT;
-		goto unlock;
-	}
-	gobj = to_udl_bo(obj);
-
-	ret = udl_gem_get_pages(gobj);
-	if (ret)
-		goto out;
-	ret = drm_gem_create_mmap_offset(obj);
-	if (ret)
-		goto out;
-
-	*offset = drm_vma_node_offset_addr(&gobj->base.vma_node);
-
-out:
-	drm_gem_object_put_unlocked(&gobj->base);
-unlock:
-	mutex_unlock(&udl->gem_lock);
-	return ret;
-}