From patchwork Wed Nov 6 09:31:16 2019
X-Patchwork-Submitter: Thomas Zimmermann
X-Patchwork-Id: 11229755
From: Thomas Zimmermann
To: daniel@ffwll.ch, christian.koenig@amd.com, noralf@tronnes.org
Subject: [PATCH 3/8] drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap
Date: Wed, 6 Nov 2019 10:31:16 +0100
Message-Id: <20191106093121.21762-4-tzimmermann@suse.de>
In-Reply-To: <20191106093121.21762-1-tzimmermann@suse.de>
References: <20191106093121.21762-1-tzimmermann@suse.de>
Cc: Thomas Zimmermann, dri-devel@lists.freedesktop.org

The vmap operation can return system or I/O memory, which the caller
may have to treat differently. The parameter is_iomem returns 'true'
if the returned pointer refers to I/O memory, or 'false' otherwise.
In many cases, such as CMA and SHMEM, the returned value is 'false'.
For TTM-based drivers, the correct value is provided by TTM itself.
For DMA buffers that are shared among devices, we assume system
memory as well.

Signed-off-by: Thomas Zimmermann
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  6 +++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |  2 +-
 drivers/gpu/drm/cirrus/cirrus.c             |  2 +-
 drivers/gpu/drm/drm_gem.c                   |  4 ++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  7 ++++++-
 drivers/gpu/drm/drm_gem_shmem_helper.c      | 12 +++++++++---
 drivers/gpu/drm/drm_gem_vram_helper.c       |  7 +++++--
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  4 +++-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |  2 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  4 +++-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  2 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  4 ++--
 drivers/gpu/drm/radeon/radeon_drv.c         |  2 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  4 +++-
 drivers/gpu/drm/tiny/gm12u320.c             |  2 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |  4 ++--
 drivers/gpu/drm/vc4/vc4_drv.h               |  2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  5 ++++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  6 +++++-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |  3 ++-
 include/drm/drm_drv.h                       |  2 +-
 include/drm/drm_gem.h                       |  2 +-
 include/drm/drm_gem_cma_helper.h            |  2 +-
 include/drm/drm_gem_shmem_helper.h          |  2 +-
 26 files changed, 64 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 4917b548b7f2..97b77e7e15dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -57,13 +57,15 @@ struct sg_table *amdgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
 /**
  * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
  * @obj: GEM BO
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * Sets up an in-kernel virtual mapping of the BO's memory.
  *
  * Returns:
  * The virtual address of the mapping or an error pointer.
  */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
+void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 	int ret;
@@ -73,6 +75,8 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&bo->dma_buf_vmap, is_iomem);
 	return bo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 5012e6ab58f1..910cf2ef345f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -34,7 +34,7 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_gem_object *gobj,
 					int flags);
 struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					       struct dma_buf *dma_buf);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
+void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/cirrus/cirrus.c b/drivers/gpu/drm/cirrus/cirrus.c
index 248c9f765c45..6518e5c31eb4 100644
--- a/drivers/gpu/drm/cirrus/cirrus.c
+++ b/drivers/gpu/drm/cirrus/cirrus.c
@@ -302,7 +302,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	struct cirrus_device *cirrus = fb->dev->dev_private;
 	void *vmap;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
+	vmap = drm_gem_shmem_vmap(fb->obj[0], NULL);
 	if (!vmap)
 		return -ENOMEM;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 56f42e0f2584..0acfbd134e04 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1251,9 +1251,9 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
 	void *vaddr;
 
 	if (obj->funcs && obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
+		vaddr = obj->funcs->vmap(obj, NULL);
 	else if (obj->dev->driver->gem_prime_vmap)
-		vaddr = obj->dev->driver->gem_prime_vmap(obj);
+		vaddr = obj->dev->driver->gem_prime_vmap(obj, NULL);
 	else
 		vaddr = ERR_PTR(-EOPNOTSUPP);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 12e98fb28229..b14e88337529 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,6 +537,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -547,10 +549,13 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * Returns:
  * The kernel virtual address of the CMA GEM object's backing store.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return cma_obj->vaddr;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3bc69b1ffa7d..a8a8e1b13a30 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -242,7 +242,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+				       bool *is_iomem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret;
@@ -266,6 +267,9 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return shmem->vaddr;
 
 err_put_pages:
@@ -279,6 +283,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function makes sure that a virtual address exists for the buffer backing
  * the shmem GEM object.
@@ -286,7 +292,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+void *drm_gem_shmem_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	void *vaddr;
@@ -295,7 +301,7 @@ void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
 		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+	vaddr = drm_gem_shmem_vmap_locked(shmem, is_iomem);
 	mutex_unlock(&shmem->vmap_lock);
 
 	return vaddr;
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 05f63f28814d..77658f835774 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -818,17 +818,20 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
  * drm_gem_vram_object_vmap() - \
	Implements &struct drm_gem_object_funcs.vmap
  * @gem:	The GEM object to map
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * Returns:
  * The buffers virtual address on success, or
  * NULL otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem,
+				      bool *is_iomem)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 	void *base;
 
-	base = drm_gem_vram_vmap(gbo, NULL);
+	base = drm_gem_vram_vmap(gbo, is_iomem);
 	if (IS_ERR(base))
 		return NULL;
 	return base;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 32cfa5a48d42..558b79366bf4 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index f24dd21c2363..c8b09ed7f936 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,8 +22,10 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
+	if (is_iomem)
+		*is_iomem = false;
 	return etnaviv_gem_vmap(obj);
 }
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index 978e07591990..46ff11a39f23 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -35,7 +35,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
+extern void *nouveau_gem_prime_vmap(struct drm_gem_object *, bool *);
 extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index bae6a3eccee0..b61376c91d31 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
+void *nouveau_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret;
@@ -45,6 +45,8 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&nvbo->dma_buf_vmap, is_iomem);
 	return nvbo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index 83c57d325ca8..f833d8376d44 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -94,7 +94,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	if (ret)
 		goto err_put_bo;
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
+	perfcnt->buf = drm_gem_shmem_vmap(&bo->base, NULL);
 	if (IS_ERR(perfcnt->buf)) {
 		ret = PTR_ERR(perfcnt->buf);
 		goto err_put_bo;
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index e749c0d0e819..3f80b2215f25 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -452,7 +452,7 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 		       struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index e67ebbdeb7f2..9b2d4015e0d6 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,13 +54,13 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr, NULL);
+	ret = qxl_bo_kmap(bo, &ptr, is_iomem);
 	if (ret < 0)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 888e0f384c61..7f9cff9cb572 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -153,7 +153,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct sg_table *sg);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
+void *radeon_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 /* atpx handler */
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b906e8fbd5f3..2019b54277e4 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
+void *radeon_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int ret;
@@ -49,6 +49,8 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&bo->dma_buf_vmap, is_iomem);
 	return bo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index 94fb1f593564..4c4b1904e046 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -278,7 +278,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
+	vaddr = drm_gem_shmem_vmap(fb->obj[0], NULL);
 	if (IS_ERR(vaddr)) {
 		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
 		goto put_fb;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 72d30d90b856..c03462cef01c 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -767,7 +767,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+void *vc4_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
@@ -776,7 +776,7 @@ void *vc4_prime_vmap(struct drm_gem_object *obj)
 		return ERR_PTR(-EINVAL);
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, is_iomem);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 6627b20c99e9..c84a7eaf1f3e 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -733,7 +733,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+void *vc4_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 5bd60ded3d81..b991cfce3d91 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -379,7 +379,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static void *vgem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
@@ -389,6 +389,9 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
 	if (IS_ERR(pages))
 		return NULL;
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..b3c3ba661f38 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -272,13 +272,17 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				   bool *is_iomem)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
 
 	if (!xen_obj->pages)
 		return NULL;
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
 	return vmap(xen_obj->pages, xen_obj->num_pages,
 		    VM_MAP, PAGE_KERNEL);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..adcf3d809c75 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -34,7 +34,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				   bool *is_iomem);
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 				    void *vaddr);
 
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index cf13470810a5..662c5d5dfd05 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -631,7 +631,7 @@ struct drm_driver {
	 * Deprecated vmap hook for GEM drivers. Please use
	 * &drm_gem_object_funcs.vmap instead.
	 */
-	void *(*gem_prime_vmap)(struct drm_gem_object *obj);
+	void *(*gem_prime_vmap)(struct drm_gem_object *obj, bool *is_iomem);
 
	/**
	 * @gem_prime_vunmap:
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index e71f75a2ab57..edc73b686c60 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -138,7 +138,7 @@ struct drm_gem_object_funcs {
	 *
	 * This callback is optional.
	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	void *(*vmap)(struct drm_gem_object *obj, bool *is_iomem);
 
	/**
	 * @vunmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 947ac95eb24a..69fdd18dc7b2 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6748379a0b44..ddb54aa1ac1a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -95,7 +95,7 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
+void *drm_gem_shmem_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);