From patchwork Fri Jan 8 09:43:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5965AC433E9 for ; Fri, 8 Jan 2021 09:44:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1E57E233EA for ; Fri, 8 Jan 2021 09:44:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727762AbhAHJo3 (ORCPT ); Fri, 8 Jan 2021 04:44:29 -0500 Received: from mx2.suse.de ([195.135.220.15]:35792 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727502AbhAHJoZ (ORCPT ); Fri, 8 Jan 2021 04:44:25 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 0EF9EAD2B; Fri, 8 Jan 2021 09:43:44 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann , Daniel Vetter Subject: [PATCH v4 01/13] dma-buf: Add vmap_local and vunmap_local operations Date: Fri, 8 Jan 2021 10:43:28 +0100 Message-Id: <20210108094340.15290-2-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org The existing dma-buf calls dma_buf_vmap() and dma_buf_vunmap() are allowed to pin the buffer or acquire the buffer's reservation object lock. This is a problem for callers that only require a short-term mapping of the buffer without the pinning, or callers that have special locking requirements. These may suffer from unnecessary overhead or interfere with regular pin operations. The new interfaces dma_buf_vmap_local(), dma_buf_vunmap_local(), and their respective callbacks in struct dma_buf_ops provide an alternative without pinning or reservation locking. Callers are responsible for pinning and reservation locking where needed.
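For reference, the calling convention for the new interfaces looks roughly like the sketch below. It is not part of the patch; importer_read_pixels() and its parameters are made up for illustration, and only the dma-buf/dma_resv calls introduced or referenced by this series are used. The key point is that the caller, not dma-buf, holds the reservation lock for the lifetime of the mapping.

static int importer_read_pixels(struct dma_buf *dmabuf, void *dst, size_t len)
{
	struct dma_buf_map map;
	int ret;

	/* The caller serializes against other users; dma-buf does not pin. */
	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		return ret;

	ret = dma_buf_vmap_local(dmabuf, &map);
	if (ret)
		goto out_unlock;

	/* The mapping is only valid while the reservation lock is held. */
	if (map.is_iomem)
		memcpy_fromio(dst, map.vaddr_iomem, len);
	else
		memcpy(dst, map.vaddr, len);

	dma_buf_vunmap_local(dmabuf, &map);

out_unlock:
	dma_resv_unlock(dmabuf->resv);
	return ret;
}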
v4: * update documentation (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Suggested-by: Daniel Vetter --- drivers/dma-buf/dma-buf.c | 81 +++++++++++++++++++++++++++++++++++++++ include/linux/dma-buf.h | 34 ++++++++++++++++ 2 files changed, 115 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index b8465243eca2..01f9c74d97fa 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1295,6 +1295,87 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) } EXPORT_SYMBOL_GPL(dma_buf_vunmap); +/** + * dma_buf_vmap_local - Create virtual mapping for the buffer object into kernel + * address space. + * @dmabuf: [in] buffer to vmap + * @map: [out] returns the vmap pointer + * + * Unlike dma_buf_vmap() this is a short term mapping and will not pin + * the buffer. The struct dma_resv for the @dmabuf must be locked until + * dma_buf_vunmap_local() is called. + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct dma_buf_map ptr; + int ret = 0; + + dma_buf_map_clear(map); + + if (WARN_ON(!dmabuf)) + return -EINVAL; + + dma_resv_assert_held(dmabuf->resv); + + if (!dmabuf->ops->vmap_local) + return -EINVAL; + + mutex_lock(&dmabuf->lock); + if (dmabuf->vmapping_counter) { + dmabuf->vmapping_counter++; + BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr)); + *map = dmabuf->vmap_ptr; + goto out_unlock; + } + + BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr)); + + ret = dmabuf->ops->vmap_local(dmabuf, &ptr); + if (WARN_ON_ONCE(ret)) + goto out_unlock; + + dmabuf->vmap_ptr = ptr; + dmabuf->vmapping_counter = 1; + + *map = dmabuf->vmap_ptr; + +out_unlock: + mutex_unlock(&dmabuf->lock); + return ret; +} +EXPORT_SYMBOL_GPL(dma_buf_vmap_local); + +/** + * dma_buf_vunmap_local - Unmap a vmap obtained by dma_buf_vmap_local. + * @dmabuf: [in] buffer to vunmap + * @map: [in] vmap pointer to vunmap + * + * Release a mapping established with dma_buf_vmap_local(). + */ +void dma_buf_vunmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + if (WARN_ON(!dmabuf)) + return; + + dma_resv_assert_held(dmabuf->resv); + + BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr)); + BUG_ON(dmabuf->vmapping_counter == 0); + BUG_ON(!dma_buf_map_is_equal(&dmabuf->vmap_ptr, map)); + + mutex_lock(&dmabuf->lock); + if (--dmabuf->vmapping_counter == 0) { + if (dmabuf->ops->vunmap_local) + dmabuf->ops->vunmap_local(dmabuf, map); + dma_buf_map_clear(&dmabuf->vmap_ptr); + } + mutex_unlock(&dmabuf->lock); +} +EXPORT_SYMBOL_GPL(dma_buf_vunmap_local); + #ifdef CONFIG_DEBUG_FS static int dma_buf_debug_show(struct seq_file *s, void *unused) { diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 628681bf6c99..aeed754b5467 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -264,6 +264,38 @@ struct dma_buf_ops { int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map); void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map); + + /** + * @vmap_local: + * + * Creates a virtual mapping for the buffer into kernel address space. + * + * This callback establishes short-term mappings for situations where + * callers only use the buffer for a bounded amount of time; such as + * updates to the framebuffer or reading back contained information. + * In contrast to the regular @vmap callback, vmap_local does never pin + * the buffer to a specific domain or acquire the buffer's reservation + * lock. 
+ * + * This is called with the &dma_buf.resv object locked. Callers must hold + * the lock until after removing the mapping with @vunmap_local. + * + * This callback is optional. + * + * Returns: + * + * 0 on success or a negative error code on failure. + */ + int (*vmap_local)(struct dma_buf *dmabuf, struct dma_buf_map *map); + + /** + * @vunmap_local: + * + * Removes a virtual mapping that was established by @vmap_local. + * + * This callback is optional. + */ + void (*vunmap_local)(struct dma_buf *dmabuf, struct dma_buf_map *map); }; /** @@ -501,4 +533,6 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long); int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map); void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map); +int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map); +void dma_buf_vunmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map); #endif /* __DMA_BUF_H__ */ From patchwork Fri Jan 8 09:43:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006219 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0D34C433E0 for ; Fri, 8 Jan 2021 09:44:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B0FF3233EA for ; Fri, 8 Jan 2021 09:44:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727773AbhAHJo3 (ORCPT ); Fri, 8 Jan 2021 04:44:29 -0500 Received: from mx2.suse.de ([195.135.220.15]:35826 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727629AbhAHJo0 (ORCPT ); Fri, 8 Jan 2021 04:44:26 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id CA505AD87; Fri, 8 Jan 2021 09:43:44 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann , Daniel Vetter Subject: [PATCH v4 02/13] drm/gem: Create infrastructure for GEM vmap_local Date: Fri, 8 Jan 2021 10:43:29 +0100 Message-Id: <20210108094340.15290-3-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org This patch adds vmap_local and vunmap_local to struct drm_gem_object_funcs; including the PRIME helpers to connect with dma-buf's related interfaces. Besides the generic DRM core, this will become relevant for fbdev emulation with virtio, so we update it as well. 
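To illustrate how a driver is expected to hook into this infrastructure: the PRIME helpers added here forward dma_buf_{vmap,vunmap}_local() to the new &drm_gem_object_funcs callbacks. The sketch below uses a hypothetical "foo" driver whose buffers are permanently mapped at allocation time, so vmap_local only has to hand back the cached kernel address; the struct and helper names are placeholders, not part of the patch.

struct foo_gem_object {
	struct drm_gem_object base;
	void *vaddr;	/* kernel mapping created at allocation time */
};

static int foo_gem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
{
	struct foo_gem_object *fobj = container_of(obj, struct foo_gem_object, base);

	dma_resv_assert_held(obj->resv);	/* caller keeps the lock until vunmap_local */
	dma_buf_map_set_vaddr(map, fobj->vaddr);

	return 0;
}

static const struct drm_gem_object_funcs foo_gem_object_funcs = {
	/* ... .vmap, .vunmap, .mmap, ... */
	.vmap_local = foo_gem_vmap_local,
	/* @vunmap_local is optional; drm_gem_vunmap_local() clears the mapping anyway. */
};

With this in place, a dma_buf_vmap_local() call on the exported buffer reaches foo_gem_vmap_local() via drm_gem_dmabuf_vmap_local() and drm_gem_vmap_local().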
v4: * update documentation (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_gem.c | 28 ++++++++++++++++++ drivers/gpu/drm/drm_internal.h | 2 ++ drivers/gpu/drm/drm_prime.c | 39 ++++++++++++++++++++++++++ drivers/gpu/drm/virtio/virtgpu_prime.c | 2 ++ include/drm/drm_gem.h | 21 ++++++++++++++ include/drm/drm_prime.h | 2 ++ 6 files changed, 94 insertions(+) diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 92f89cee213e..6e131d9bb7bd 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1234,6 +1234,34 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) dma_buf_map_clear(map); } +int drm_gem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map) +{ + int ret; + + if (!obj->funcs->vmap_local) + return -EOPNOTSUPP; + + ret = obj->funcs->vmap_local(obj, map); + if (ret) + return ret; + else if (dma_buf_map_is_null(map)) + return -ENOMEM; + + return 0; +} + +void drm_gem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map) +{ + if (dma_buf_map_is_null(map)) + return; + + if (obj->funcs->vunmap_local) + obj->funcs->vunmap_local(obj, map); + + /* Always set the mapping to NULL. Callers may rely on this. */ + dma_buf_map_clear(map); +} + /** * drm_gem_lock_reservations - Sets up the ww context and acquires * the lock on an array of GEM objects. diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h index 81d386b5b92a..b0bf6aba763a 100644 --- a/drivers/gpu/drm/drm_internal.h +++ b/drivers/gpu/drm/drm_internal.h @@ -190,6 +190,8 @@ int drm_gem_pin(struct drm_gem_object *obj); void drm_gem_unpin(struct drm_gem_object *obj); int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); +int drm_gem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm_debugfs.c drm_debugfs_crc.c */ #if defined(CONFIG_DEBUG_FS) diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 683aa29ecd3b..633edea76985 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -695,6 +695,43 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) } EXPORT_SYMBOL(drm_gem_dmabuf_vunmap); +/** + * drm_gem_dmabuf_vmap_local - dma_buf vmap_local implementation for GEM + * @dma_buf: buffer to be mapped + * @map: the virtual address of the buffer + * + * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap_local + * callback. Calls into &drm_gem_object_funcs.vmap_local for device specific handling. + * The kernel virtual address is returned in map. + * + * Returns: + * 0 on success or a negative errno code otherwise. + */ +int drm_gem_dmabuf_vmap_local(struct dma_buf *dma_buf, struct dma_buf_map *map) +{ + struct drm_gem_object *obj = dma_buf->priv; + + return drm_gem_vmap_local(obj, map); +} +EXPORT_SYMBOL(drm_gem_dmabuf_vmap_local); + +/** + * drm_gem_dmabuf_vunmap_local - dma_buf vunmap_local implementation for GEM + * @dma_buf: buffer to be unmapped + * @map: the virtual address of the buffer + * + * Releases a kernel virtual mapping. This can be used as the + * &dma_buf_ops.vunmap_local callback. Calls into &drm_gem_object_funcs.vunmap_local + * for device specific handling. 
+ */ +void drm_gem_dmabuf_vunmap_local(struct dma_buf *dma_buf, struct dma_buf_map *map) +{ + struct drm_gem_object *obj = dma_buf->priv; + + drm_gem_vunmap_local(obj, map); +} +EXPORT_SYMBOL(drm_gem_dmabuf_vunmap_local); + /** * drm_gem_prime_mmap - PRIME mmap function for GEM drivers * @obj: GEM object @@ -787,6 +824,8 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = { .mmap = drm_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + .vmap_local = drm_gem_dmabuf_vmap_local, + .vunmap_local = drm_gem_dmabuf_vunmap_local, }; /** diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c index 807a27a16365..fea11a53d8fc 100644 --- a/drivers/gpu/drm/virtio/virtgpu_prime.c +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c @@ -54,6 +54,8 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = { .mmap = drm_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + .vmap_local = drm_gem_dmabuf_vmap_local, + .vunmap_local = drm_gem_dmabuf_vunmap_local, }, .device_attach = drm_gem_map_attach, .get_uuid = virtgpu_virtio_get_uuid, diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 5e6daa1c982f..1b4425dd1596 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -151,6 +151,27 @@ struct drm_gem_object_funcs { */ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); + + /** + * @vmap_local: + * + * Returns a virtual address for the buffer. Used by the + * drm_gem_dmabuf_vmap_local() helper. Callers will hold &drm_gem_object.resv + * already and only release it after @vunmap_local has been called. + * + * This callback is optional. + */ + int (*vmap_local)(struct drm_gem_object *obj, struct dma_buf_map *map); + + /** + * @vunmap_local: + * + * Releases the address previously returned by @vmap_local. Used by the + * drm_gem_dmabuf_vunmap_local() helper. + * + * This callback is optional.
+ */ + void (*vunmap_local)(struct drm_gem_object *obj, struct dma_buf_map *map); + /** * @mmap: * diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h index 54f2c58305d2..fd2aef6966ef 100644 --- a/include/drm/drm_prime.h +++ b/include/drm/drm_prime.h @@ -85,6 +85,8 @@ void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach, enum dma_data_direction dir); int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map); void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map); +int drm_gem_dmabuf_vmap_local(struct dma_buf *dma_buf, struct dma_buf_map *map); +void drm_gem_dmabuf_vunmap_local(struct dma_buf *dma_buf, struct dma_buf_map *map); int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma); From patchwork Fri Jan 8 09:43:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006223 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50882C43381 for ; Fri, 8 Jan 2021 09:44:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0C954233EA for ; Fri, 8 Jan 2021 09:44:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727864AbhAHJoa (ORCPT ); Fri, 8 Jan 2021 04:44:30 -0500 Received: from mx2.suse.de ([195.135.220.15]:35860 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727649AbhAHJo1 (ORCPT ); Fri, 8 Jan 2021 04:44:27 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 914E5AE1C; Fri, 8 Jan 2021 09:43:45 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann , Daniel Vetter Subject: [PATCH v4 03/13] drm/cma-helper: Provide a vmap function for short-term mappings Date: Fri, 8 Jan 2021 10:43:30 +0100 Message-Id: <20210108094340.15290-4-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Implementations of the vmap/vunmap GEM callbacks may perform pinning of the BO and may acquire the associated reservation object's lock. Callers that only require a mapping of the contained memory can thus interfere with other tasks that require exact pinning, such as scanout. This is less of an issue with private CMA buffers, but may happen with imported ones. 
Therefore provide the new interface drm_gem_cma_vmap_local(), which only performs the vmap operations. Callers have to hold the reservation lock while the mapping persists. This patch also connects GEM CMA helpers to the GEM object function with equivalent functionality. v4: * vc4: don't wrap drm_gem_cma_vmap_local() in BO funcs (Daniel) * remove the TODO comment (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_gem_cma_helper.c | 27 +++++++++++++++++++++++++++ drivers/gpu/drm/vc4/vc4_bo.c | 1 + include/drm/drm_gem_cma_helper.h | 1 + 3 files changed, 29 insertions(+) diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index 7942cf05cd93..f854027fa01f 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = { .print_info = drm_gem_cma_print_info, .get_sg_table = drm_gem_cma_get_sg_table, .vmap = drm_gem_cma_vmap, + .vmap_local = drm_gem_cma_vmap_local, .mmap = drm_gem_cma_mmap, .vm_ops = &drm_gem_cma_vm_ops, }; @@ -471,6 +472,32 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) } EXPORT_SYMBOL_GPL(drm_gem_cma_vmap); +/** + * drm_gem_cma_vmap_local - map a CMA GEM object into the kernel's virtual + * address space + * @obj: GEM object + * @map: Returns the kernel virtual address of the CMA GEM object's backing + * store. + * + * This function maps a buffer into the kernel's + * virtual address space. Since the CMA buffers are already mapped into the + * kernel virtual address space this simply returns the cached virtual + * address. Drivers using the CMA helpers should set this as their DRM + * driver's &drm_gem_object_funcs.vmap_local callback. + * + * Returns: + * 0 on success, or a negative error code otherwise. 
+ */ +int drm_gem_cma_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map) +{ + struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); + + dma_buf_map_set_vaddr(map, cma_obj->vaddr); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_cma_vmap_local); + /** * drm_gem_cma_mmap - memory-map an exported CMA GEM object * @obj: GEM object diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index dc316cb79e00..7283c018dbaf 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -387,6 +387,7 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = { .export = vc4_prime_export, .get_sg_table = drm_gem_cma_get_sg_table, .vmap = vc4_prime_vmap, + .vmap_local = drm_gem_cma_vmap_local, .vm_ops = &vc4_vm_ops, }; diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h index 0a9711caa3e8..05122e71bc6d 100644 --- a/include/drm/drm_gem_cma_helper.h +++ b/include/drm/drm_gem_cma_helper.h @@ -99,6 +99,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +int drm_gem_cma_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map); int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); /** From patchwork Fri Jan 8 09:43:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006225 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEF07C433E6 for ; Fri, 8 Jan 2021 09:44:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7168F233EA for ; Fri, 8 Jan 2021 09:44:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727856AbhAHJoa (ORCPT ); Fri, 8 Jan 2021 04:44:30 -0500 Received: from mx2.suse.de ([195.135.220.15]:35878 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727729AbhAHJo2 (ORCPT ); Fri, 8 Jan 2021 04:44:28 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 65BF9AFA7; Fri, 8 Jan 2021 09:43:46 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 04/13] drm/shmem-helper: Provide a vmap function for short-term mappings Date: Fri, 8 Jan 2021 10:43:31 +0100 Message-Id: <20210108094340.15290-5-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: 
bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Implementations of the vmap/vunmap GEM callbacks may perform pinning of the BO and may acquire the associated reservation object's lock. Callers that only require a mapping of the contained memory can thus interfere with other tasks that require exact pinning, such as scanout. This is less of an issue with private SHMEM buffers, but may happen with imported ones. Therefore provide the new interfaces drm_gem_shmem_vmap_local() and drm_gem_shmem_vunmap_local(), which only perform the vmap/vunmap operations. Callers have to hold the reservation lock while the mapping persists. This patch also connects GEM SHMEM helpers to GEM object functions with equivalent functionality. v4: * call dma_buf_{vmap,vunmap}_local() where necessary (Daniel) * move driver changes into separate patches (Daniel) Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/drm_gem_shmem_helper.c | 90 +++++++++++++++++++++++--- include/drm/drm_gem_shmem_helper.h | 2 + 2 files changed, 84 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 9825c378dfa6..298832b2b43b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -32,6 +32,8 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .get_sg_table = drm_gem_shmem_get_sg_table, .vmap = drm_gem_shmem_vmap, .vunmap = drm_gem_shmem_vunmap, + .vmap_local = drm_gem_shmem_vmap_local, + .vunmap_local = drm_gem_shmem_vunmap_local, .mmap = drm_gem_shmem_mmap, }; @@ -261,7 +263,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj) } EXPORT_SYMBOL(drm_gem_shmem_unpin); -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map, + bool local) { struct drm_gem_object *obj = &shmem->base; int ret = 0; @@ -272,7 +275,10 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct } if (obj->import_attach) { - ret = dma_buf_vmap(obj->import_attach->dmabuf, map); + if (local) + ret = dma_buf_vmap_local(obj->import_attach->dmabuf, map); + else + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); if (!ret) { if (WARN_ON(map->is_iomem)) { ret = -EIO; @@ -313,7 +319,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct return ret; } -/* +/** * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object * @map: Returns the kernel virtual address of the SHMEM GEM object's backing @@ -339,15 +345,53 @@ int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) ret = mutex_lock_interruptible(&shmem->vmap_lock); if (ret) return ret; - ret = drm_gem_shmem_vmap_locked(shmem, map); + ret = drm_gem_shmem_vmap_locked(shmem, map, false); mutex_unlock(&shmem->vmap_lock); return ret; } EXPORT_SYMBOL(drm_gem_shmem_vmap); +/** + * drm_gem_shmem_vmap_local - Create a virtual mapping for a shmem GEM object + * @shmem: shmem GEM object + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing + * store. + * + * This function makes sure that a contiguous kernel virtual address mapping + * exists for the buffer backing the shmem GEM object. + * + * The function is called with the BO's reservation object locked. Callers must + * hold the lock until after unmapping the buffer. + * + * This function can be used to implement &drm_gem_object_funcs.vmap_local. 
But + * it can also be called by drivers directly, in which case it will hide the + * differences between dma-buf imported and natively allocated objects. + * + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_local(). + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + int ret; + + dma_resv_assert_held(obj->resv); + + ret = mutex_lock_interruptible(&shmem->vmap_lock); + if (ret) + return ret; + ret = drm_gem_shmem_vmap_locked(shmem, map, true); + mutex_unlock(&shmem->vmap_lock); + + return ret; +} +EXPORT_SYMBOL(drm_gem_shmem_vmap_local); + static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, - struct dma_buf_map *map) + struct dma_buf_map *map, bool local) { struct drm_gem_object *obj = &shmem->base; @@ -358,7 +402,10 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, return; if (obj->import_attach) - dma_buf_vunmap(obj->import_attach->dmabuf, map); + if (local) + dma_buf_vunmap_local(obj->import_attach->dmabuf, map); + else + dma_buf_vunmap(obj->import_attach->dmabuf, map); else vunmap(shmem->vaddr); @@ -366,7 +413,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, drm_gem_shmem_put_pages(shmem); } -/* +/** * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object * @shmem: shmem GEM object * @map: Kernel virtual address where the SHMEM GEM object was mapped @@ -384,11 +431,38 @@ void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); mutex_lock(&shmem->vmap_lock); - drm_gem_shmem_vunmap_locked(shmem, map); + drm_gem_shmem_vunmap_locked(shmem, map, false); mutex_unlock(&shmem->vmap_lock); } EXPORT_SYMBOL(drm_gem_shmem_vunmap); +/** + * drm_gem_shmem_vunmap_local - Unmap a virtual mapping fo a shmem GEM object + * @shmem: shmem GEM object + * @map: Kernel virtual address where the SHMEM GEM object was mapped + * + * This function cleans up a kernel virtual address mapping acquired by + * drm_gem_shmem_vmap_local(). The mapping is only removed when the use count + * drops to zero. + * + * The function is called with the BO's reservation object locked. + * + * This function can be used to implement &drm_gem_object_funcs.vmap_local. + * But it can also be called by drivers directly, in which case it will hide + * the differences between dma-buf imported and natively allocated objects. 
+ */ +void drm_gem_shmem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + dma_resv_assert_held(obj->resv); + + mutex_lock(&shmem->vmap_lock); + drm_gem_shmem_vunmap_locked(shmem, map, true); + mutex_unlock(&shmem->vmap_lock); +} +EXPORT_SYMBOL(drm_gem_shmem_vunmap_local); + struct drm_gem_shmem_object * drm_gem_shmem_create_with_handle(struct drm_file *file_priv, struct drm_device *dev, size_t size, diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 434328d8a0d9..3f59bdf749aa 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -114,7 +114,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_object *obj); void drm_gem_shmem_unpin(struct drm_gem_object *obj); int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map); void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_shmem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map); int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv); From patchwork Fri Jan 8 09:43:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006229 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50E14C433E0 for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BD3DD233EA for ; Fri, 8 Jan 2021 09:45:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728009AbhAHJpJ (ORCPT ); Fri, 8 Jan 2021 04:45:09 -0500 Received: from mx2.suse.de ([195.135.220.15]:36772 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727649AbhAHJpJ (ORCPT ); Fri, 8 Jan 2021 04:45:09 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 0A672AFA8; Fri, 8 Jan 2021 09:43:47 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 05/13] drm/mgag200: Use drm_gem_shmem_vmap_local() in damage handling Date: Fri, 8 Jan 2021 10:43:32 +0100 Message-Id: <20210108094340.15290-6-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Damage handling 
in mgag200 requires a short-term mapping of the source BO. Use drm_gem_shmem_vmap_local(). Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/mgag200/mgag200_mode.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c index 1dfc42170059..a33e28d4c5e9 100644 --- a/drivers/gpu/drm/mgag200/mgag200_mode.c +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c @@ -1552,22 +1552,32 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, struct drm_rect *clip) { struct drm_device *dev = &mdev->base; + struct drm_gem_object *obj = fb->obj[0]; struct dma_buf_map map; void *vmap; int ret; - ret = drm_gem_shmem_vmap(fb->obj[0], &map); + ret = dma_resv_lock(obj->resv, NULL); if (drm_WARN_ON(dev, ret)) - return; /* BUG: SHMEM BO should always be vmapped */ + return; + ret = drm_gem_shmem_vmap_local(obj, &map); + if (drm_WARN_ON(dev, ret)) + goto err_dma_resv_unlock; /* BUG: SHMEM BO should always be vmapped */ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); - drm_gem_shmem_vunmap(fb->obj[0], &map); + drm_gem_shmem_vunmap_local(obj, &map); + dma_resv_unlock(obj->resv); /* Always scanout image at VRAM offset 0 */ mgag200_set_startadd(mdev, (u32)0); mgag200_set_offset(mdev, fb); + + return; + +err_dma_resv_unlock: + dma_resv_unlock(obj->resv); } static void From patchwork Fri Jan 8 09:43:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006227 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44A31C433DB for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E3F1F23435 for ; Fri, 8 Jan 2021 09:45:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728018AbhAHJpK (ORCPT ); Fri, 8 Jan 2021 04:45:10 -0500 Received: from mx2.suse.de ([195.135.220.15]:36774 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727956AbhAHJpJ (ORCPT ); Fri, 8 Jan 2021 04:45:09 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id BB9D9AFAC; Fri, 8 Jan 2021 09:43:47 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 06/13] drm/cirrus: Use drm_gem_shmem_vmap_local() in damage handling Date: Fri, 8 Jan 2021 10:43:33 +0100 Message-Id: <20210108094340.15290-7-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: 
<20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Damage handling in cirrus requires a short-term mapping of the source BO. Use drm_gem_shmem_vmap_local(). Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/tiny/cirrus.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c index a043e602199e..21cd7056d45f 100644 --- a/drivers/gpu/drm/tiny/cirrus.c +++ b/drivers/gpu/drm/tiny/cirrus.c @@ -315,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, struct drm_rect *rect) { struct cirrus_device *cirrus = to_cirrus(fb->dev); + struct drm_gem_object *obj = fb->obj[0]; struct dma_buf_map map; void *vmap; int idx, ret; @@ -323,9 +324,12 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, if (!drm_dev_enter(&cirrus->dev, &idx)) goto out; - ret = drm_gem_shmem_vmap(fb->obj[0], &map); + ret = dma_resv_lock(obj->resv, NULL); if (ret) goto out_dev_exit; + ret = drm_gem_shmem_vmap_local(fb->obj[0], &map); + if (ret) + goto out_dma_resv_unlock; vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ if (cirrus->cpp == fb->format->cpp[0]) @@ -345,9 +349,11 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, else WARN_ON_ONCE("cpp mismatch"); - drm_gem_shmem_vunmap(fb->obj[0], &map); ret = 0; + drm_gem_shmem_vunmap_local(obj, &map); +out_dma_resv_unlock: + dma_resv_unlock(obj->resv); out_dev_exit: drm_dev_exit(idx); out: From patchwork Fri Jan 8 09:43:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B839C433E0 for ; Fri, 8 Jan 2021 09:45:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DC403233EA for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728092AbhAHJpN (ORCPT ); Fri, 8 Jan 2021 04:45:13 -0500 Received: from mx2.suse.de ([195.135.220.15]:36776 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727966AbhAHJpJ (ORCPT ); Fri, 8 Jan 2021 04:45:09 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 79F5BAFAE; Fri, 8 Jan 2021 09:43:48 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 07/13] drm/gm12u320: Use drm_gem_shmem_vmap_local() in damage handling Date: 
Fri, 8 Jan 2021 10:43:34 +0100 Message-Id: <20210108094340.15290-8-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Damage handling in gm12u320 requires a short-term mapping of the source BO. Use drm_gem_shmem_vmap_local(). Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/tiny/gm12u320.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c index 33f65f4626e5..b0c6e350f2b3 100644 --- a/drivers/gpu/drm/tiny/gm12u320.c +++ b/drivers/gpu/drm/tiny/gm12u320.c @@ -265,11 +265,16 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) y1 = gm12u320->fb_update.rect.y1; y2 = gm12u320->fb_update.rect.y2; - ret = drm_gem_shmem_vmap(fb->obj[0], &map); + ret = dma_resv_lock(fb->obj[0]->resv, NULL); if (ret) { - GM12U320_ERR("failed to vmap fb: %d\n", ret); + GM12U320_ERR("failed to reserve fb: %d\n", ret); goto put_fb; } + ret = drm_gem_shmem_vmap_local(fb->obj[0], &map); + if (ret) { + GM12U320_ERR("failed to vmap fb: %d\n", ret); + goto unlock_resv; + } vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ if (fb->obj[0]->import_attach) { @@ -321,8 +326,11 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) if (ret) GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); } + +unlock_resv: + dma_resv_unlock(fb->obj[0]->resv); vunmap: - drm_gem_shmem_vunmap(fb->obj[0], &map); + drm_gem_shmem_vunmap_local(fb->obj[0], &map); put_fb: drm_framebuffer_put(fb); gm12u320->fb_update.fb = NULL; From patchwork Fri Jan 8 09:43:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006231 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C1F5C433E9 for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 19BA1235DD for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728034AbhAHJpL (ORCPT ); Fri, 8 Jan 2021 04:45:11 -0500 Received: from mx2.suse.de ([195.135.220.15]:36778 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727975AbhAHJpK (ORCPT ); Fri, 8 Jan 2021 04:45:10 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 38000AFD7; Fri, 8 Jan 2021 09:43:49 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, 
virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 08/13] drm/udl: Use drm_gem_shmem_vmap_local() in damage handling Date: Fri, 8 Jan 2021 10:43:35 +0100 Message-Id: <20210108094340.15290-9-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Damage handling in udl requires a short-term mapping of the source BO. Use drm_gem_shmem_vmap_local(). Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/udl/udl_modeset.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c index 9d34ec9d03f6..46b55b4d03c2 100644 --- a/drivers/gpu/drm/udl/udl_modeset.c +++ b/drivers/gpu/drm/udl/udl_modeset.c @@ -290,14 +290,18 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, else if ((clip.x2 > fb->width) || (clip.y2 > fb->height)) return -EINVAL; + ret = dma_resv_lock(fb->obj[0]->resv, NULL); + if (ret) + return ret; + if (import_attach) { ret = dma_buf_begin_cpu_access(import_attach->dmabuf, DMA_FROM_DEVICE); if (ret) - return ret; + goto out_dma_resv_unlock; } - ret = drm_gem_shmem_vmap(fb->obj[0], &map); + ret = drm_gem_shmem_vmap_local(fb->obj[0], &map); if (ret) { DRM_ERROR("failed to vmap fb\n"); goto out_dma_buf_end_cpu_access; @@ -307,7 +311,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, urb = udl_get_urb(dev); if (!urb) { ret = -ENOMEM; - goto out_drm_gem_shmem_vunmap; + goto out_drm_gem_shmem_vunmap_local; } cmd = urb->transfer_buffer; @@ -320,7 +324,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, &cmd, byte_offset, dev_byte_offset, byte_width); if (ret) - goto out_drm_gem_shmem_vunmap; + goto out_drm_gem_shmem_vunmap_local; } if (cmd > (char *)urb->transfer_buffer) { @@ -336,8 +340,8 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, ret = 0; -out_drm_gem_shmem_vunmap: - drm_gem_shmem_vunmap(fb->obj[0], &map); +out_drm_gem_shmem_vunmap_local: + drm_gem_shmem_vunmap_local(fb->obj[0], &map); out_dma_buf_end_cpu_access: if (import_attach) { tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, @@ -345,6 +349,8 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, if (tmp_ret && !ret) ret = tmp_ret; /* only update ret if not set yet */ } +out_dma_resv_unlock: + dma_resv_unlock(fb->obj[0]->resv); return ret; } From patchwork Fri Jan 8 09:43:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8B0F4C43381 for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4070B235FC for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by 
vger.kernel.org via listexpand id S1728060AbhAHJpL (ORCPT ); Fri, 8 Jan 2021 04:45:11 -0500 Received: from mx2.suse.de ([195.135.220.15]:36780 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727987AbhAHJpK (ORCPT ); Fri, 8 Jan 2021 04:45:10 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 02ED9AFE2; Fri, 8 Jan 2021 09:43:50 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann , Daniel Vetter Subject: [PATCH v4 09/13] drm/vram-helper: Provide a vmap function for short-term mappings Date: Fri, 8 Jan 2021 10:43:36 +0100 Message-Id: <20210108094340.15290-10-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Implementations of the vmap/vunmap GEM callbacks may perform pinning of the BO and may acquire the associated reservation object's lock. It's somewhat inconvenient to callers that simply require a mapping of the contained memory; and also ipmplies a certain overhead. Therefore provide drm_gem_vram_vmap_local() drm_gem_vram_vunmap_local(), which only perform the vmap/vunmap operations. Callers have to hold the reservation lock while the mapping persists; or have to pin the BO by themselves. This patch connects GEM VRAM helpers to GEM object functions with equivalent functionality. v4: * move driver changes into separate patches (Daniel) * update documentation (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_gem_vram_helper.c | 141 +++++++++++++++++--------- include/drm/drm_gem_vram_helper.h | 2 + 2 files changed, 95 insertions(+), 48 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 02ca22e90290..c7fba3a0758e 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -379,47 +379,6 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, - struct dma_buf_map *map) -{ - int ret; - - if (gbo->vmap_use_count > 0) - goto out; - - ret = ttm_bo_vmap(&gbo->bo, &gbo->map); - if (ret) - return ret; - -out: - ++gbo->vmap_use_count; - *map = gbo->map; - - return 0; -} - -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, - struct dma_buf_map *map) -{ - struct drm_device *dev = gbo->bo.base.dev; - - if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) - return; - - if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) - return; /* BUG: map not mapped from this BO */ - - if (--gbo->vmap_use_count > 0) - return; - - /* - * Permanently mapping and unmapping buffers adds overhead from - * updating the page tables and creates debugging output. Therefore, - * we delay the actual unmap operation until the BO gets evicted - * from memory. See drm_gem_vram_bo_driver_move_notify(). 
- */ -} - /** * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address * space @@ -447,7 +406,7 @@ int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - ret = drm_gem_vram_kmap_locked(gbo, map); + ret = drm_gem_vram_vmap_local(gbo, map); if (ret) goto err_drm_gem_vram_unpin_locked; @@ -479,13 +438,83 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *ma if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) return; - drm_gem_vram_kunmap_locked(gbo, map); + drm_gem_vram_vunmap_local(gbo, map); drm_gem_vram_unpin_locked(gbo); ttm_bo_unreserve(&gbo->bo); } EXPORT_SYMBOL(drm_gem_vram_vunmap); +/** + * drm_gem_vram_vmap_local() - Maps a GEM VRAM object into kernel address space + * @gbo: The GEM VRAM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. + * + * The vmap_local function maps the buffer of a GEM VRAM object into kernel address + * space. Call drm_gem_vram_vunmap_local() with the returned address to unmap and + * unpin the GEM VRAM object. + * + * The function is called with the BO's reservation object locked. For short-term + * mappings, callers must hold the BO's reservation lock until after unmapping the + * buffer. + * + * Returns: + * 0 on success, or a negative error code otherwise. + */ +int drm_gem_vram_vmap_local(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) +{ + int ret; + + dma_resv_assert_held(gbo->bo.base.resv); + + if (gbo->vmap_use_count > 0) + goto out; + + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); + if (ret) + return ret; + +out: + ++gbo->vmap_use_count; + *map = gbo->map; + + return 0; +} +EXPORT_SYMBOL(drm_gem_vram_vmap_local); + +/** + * drm_gem_vram_vunmap_local() - Unmaps a GEM VRAM object + * @gbo: The GEM VRAM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped + * + * A call to drm_gem_vram_vunmap_local() unmaps a GEM VRAM object's buffer. See + * the documentation for drm_gem_vram_vmap_local() for more information. + */ +void drm_gem_vram_vunmap_local(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) +{ + struct drm_device *dev = gbo->bo.base.dev; + + dma_resv_assert_held(gbo->bo.base.resv); + + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) + return; + + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) + return; /* BUG: map not mapped from this BO */ + + if (--gbo->vmap_use_count > 0) + return; + + /* + * Permanently mapping and unmapping buffers adds overhead from + * updating the page tables and creates debugging output. Therefore, + * we delay the actual unmap operation until the BO gets evicted + * from memory. See drm_gem_vram_bo_driver_move_notify(). 
+ */ +} +EXPORT_SYMBOL(drm_gem_vram_vunmap_local); + /** * drm_gem_vram_fill_create_dumb() - \ Helper for implementing &struct drm_driver.dumb_create @@ -871,17 +900,33 @@ static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_bu drm_gem_vram_vunmap(gbo, map); } +static int drm_gem_vram_object_vmap_local(struct drm_gem_object *gem, struct dma_buf_map *map) +{ + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); + + return drm_gem_vram_vmap_local(gbo, map); +} + +static void drm_gem_vram_object_vunmap_local(struct drm_gem_object *gem, struct dma_buf_map *map) +{ + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); + + drm_gem_vram_vunmap_local(gbo, map); +} + /* * GEM object funcs */ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs = { - .free = drm_gem_vram_object_free, - .pin = drm_gem_vram_object_pin, - .unpin = drm_gem_vram_object_unpin, - .vmap = drm_gem_vram_object_vmap, + .free = drm_gem_vram_object_free, + .pin = drm_gem_vram_object_pin, + .unpin = drm_gem_vram_object_unpin, + .vmap = drm_gem_vram_object_vmap, .vunmap = drm_gem_vram_object_vunmap, - .mmap = drm_gem_ttm_mmap, + .vmap_local = drm_gem_vram_object_vmap_local, + .vunmap_local = drm_gem_vram_object_vunmap_local, + .mmap = drm_gem_ttm_mmap, .print_info = drm_gem_ttm_print_info, }; diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h index a4bac02249c2..bd6a60e7c22b 100644 --- a/include/drm/drm_gem_vram_helper.h +++ b/include/drm/drm_gem_vram_helper.h @@ -99,6 +99,8 @@ int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); +int drm_gem_vram_vmap_local(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); +void drm_gem_vram_vunmap_local(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); int drm_gem_vram_fill_create_dumb(struct drm_file *file, struct drm_device *dev, From patchwork Fri Jan 8 09:43:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006235 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EF267C4332B for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B5412235DD for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728040AbhAHJpL (ORCPT ); Fri, 8 Jan 2021 04:45:11 -0500 Received: from mx2.suse.de ([195.135.220.15]:36782 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727986AbhAHJpK (ORCPT ); Fri, 8 Jan 2021 04:45:10 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id AFFABAFEF; Fri, 8 Jan 2021 09:43:50 +0000 (UTC) From: Thomas Zimmermann 
To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 10/13] drm/ast: Use drm_gem_vram_vmap_local() in cursor update Date: Fri, 8 Jan 2021 10:43:37 +0100 Message-Id: <20210108094340.15290-11-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Cursor updates in ast require a short-term mapping of the source and destination BO. Use drm_gem_vram_vmap_local() and avoid the pinning operations. Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/ast/ast_cursor.c | 37 +++++++++++++++++++++++--------- 1 file changed, 27 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index fac1ee79c372..c38f435bcde2 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -159,6 +159,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) struct drm_device *dev = &ast->base; struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index]; struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]); + struct drm_gem_object *objs[] = {&src_gbo->bo.base, &dst_gbo->bo.base}; + struct ww_acquire_ctx ctx; struct dma_buf_map src_map, dst_map; void __iomem *dst; void *src; @@ -168,26 +170,34 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT)) return -EINVAL; - ret = drm_gem_vram_vmap(src_gbo, &src_map); + ret = drm_gem_lock_reservations(objs, ARRAY_SIZE(objs), &ctx); if (ret) return ret; + + ret = drm_gem_vram_vmap_local(src_gbo, &src_map); + if (ret) + goto err_drm_gem_unlock_reservations; src = src_map.vaddr; /* TODO: Use mapping abstraction properly */ - ret = drm_gem_vram_vmap(dst_gbo, &dst_map); + ret = drm_gem_vram_vmap_local(dst_gbo, &dst_map); if (ret) - goto err_drm_gem_vram_vunmap; + goto err_drm_gem_vram_vunmap_local; dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ /* do data transfer to cursor BO */ update_cursor_image(dst, src, fb->width, fb->height); - drm_gem_vram_vunmap(dst_gbo, &dst_map); - drm_gem_vram_vunmap(src_gbo, &src_map); + drm_gem_vram_vunmap_local(dst_gbo, &dst_map); + drm_gem_vram_vunmap_local(src_gbo, &src_map); + + drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &ctx); return 0; -err_drm_gem_vram_vunmap: - drm_gem_vram_vunmap(src_gbo, &src_map); +err_drm_gem_vram_vunmap_local: + drm_gem_vram_vunmap_local(src_gbo, &src_map); +err_drm_gem_unlock_reservations: + drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &ctx); return ret; } @@ -241,6 +251,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, { struct drm_device *dev = &ast->base; struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index]; + struct drm_gem_object *obj = &gbo->bo.base; struct dma_buf_map map; u8 x_offset, y_offset; u8 __iomem *dst; @@ -248,16 +259,22 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, u8 jreg; int ret; - ret = drm_gem_vram_vmap(gbo, &map); - if (drm_WARN_ONCE(dev, 
ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret)) + ret = dma_resv_lock(obj->resv, NULL); + if (ret) + return; + ret = drm_gem_vram_vmap_local(gbo, &map); + if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap_local() failed, ret=%d\n", ret)) { + dma_resv_unlock(obj->resv); return; + } dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ sig = dst + AST_HWC_SIZE; writel(x, sig + AST_HWC_SIGNATURE_X); writel(y, sig + AST_HWC_SIGNATURE_Y); - drm_gem_vram_vunmap(gbo, &map); + drm_gem_vram_vunmap_local(gbo, &map); + dma_resv_unlock(obj->resv); if (x < 0) { x_offset = (-x) + offset_x; From patchwork Fri Jan 8 09:43:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006243 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EA6FC4332E for ; Fri, 8 Jan 2021 09:45:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1553623435 for ; Fri, 8 Jan 2021 09:45:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728091AbhAHJpN (ORCPT ); Fri, 8 Jan 2021 04:45:13 -0500 Received: from mx2.suse.de ([195.135.220.15]:36788 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728002AbhAHJpK (ORCPT ); Fri, 8 Jan 2021 04:45:10 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 6C39FAFBF; Fri, 8 Jan 2021 09:43:51 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 11/13] drm/vboxvideo: Use drm_gem_vram_vmap_local() in cursor update Date: Fri, 8 Jan 2021 10:43:38 +0100 Message-Id: <20210108094340.15290-12-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Cursor updates in vboxvideo require a short-term mapping of the source BO. Use drm_gem_vram_vmap_local() and avoid the pinning operations. 
Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/vboxvideo/vbox_mode.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c index dbc0dd53c69e..215b37c78c10 100644 --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c @@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, container_of(plane->dev, struct vbox_private, ddev); struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc); struct drm_framebuffer *fb = plane->state->fb; - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]); + struct drm_gem_object *obj = fb->obj[0]; + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj); u32 width = plane->state->crtc_w; u32 height = plane->state->crtc_h; size_t data_size, mask_size; @@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, vbox_crtc->cursor_enabled = true; - ret = drm_gem_vram_vmap(gbo, &map); + ret = dma_resv_lock(obj->resv, NULL); + if (ret) + return; + ret = drm_gem_vram_vmap_local(gbo, &map); if (ret) { - /* - * BUG: we should have pinned the BO in prepare_fb(). - */ + dma_resv_unlock(obj->resv); mutex_unlock(&vbox->hw_mutex); DRM_WARN("Could not map cursor bo, skipping update\n"); return; @@ -421,7 +423,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, data_size = width * height * 4 + mask_size; copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); - drm_gem_vram_vunmap(gbo, &map); + drm_gem_vram_vunmap_local(gbo, &map); + dma_resv_unlock(obj->resv); flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | VBOX_MOUSE_POINTER_ALPHA; From patchwork Fri Jan 8 09:43:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7039C4332D for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8EDBB23435 for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728083AbhAHJpM (ORCPT ); Fri, 8 Jan 2021 04:45:12 -0500 Received: from mx2.suse.de ([195.135.220.15]:36790 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727993AbhAHJpL (ORCPT ); Fri, 8 Jan 2021 04:45:11 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 3CFF4AFF0; Fri, 8 Jan 2021 09:43:52 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, 
virtualization@lists.linux-foundation.org, Thomas Zimmermann , Daniel Vetter Subject: [PATCH v4 12/13] drm/fb-helper: Move BO locking from DRM client to fbdev damage worker Date: Fri, 8 Jan 2021 10:43:39 +0100 Message-Id: <20210108094340.15290-13-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Fbdev emulation has to lock the BO into place while flushing the shadow buffer into the BO's memory. Remove any interference with pinning by using vmap_local functionality (instead of full vmap). This requires BO reservation locking in fbdev's damage worker. The new DRM client functions for locking and vmap_local functionality are added for consistency with the existing style. v4: * update documentation (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_client.c | 94 +++++++++++++++++++++++++++++++++ drivers/gpu/drm/drm_fb_helper.c | 41 +++++++------- include/drm/drm_client.h | 4 ++ 3 files changed, 119 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index ce45e380f4a2..0220165810aa 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -288,6 +288,37 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u return ERR_PTR(ret); } +/** + * drm_client_buffer_lock - Locks the DRM client buffer + * @buffer: DRM client buffer + * + * This function locks the client buffer by acquiring the buffer + * object's reservation lock. + * + * Unlock the buffer with drm_client_buffer_unlock(). + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int +drm_client_buffer_lock(struct drm_client_buffer *buffer) +{ + return dma_resv_lock(buffer->gem->resv, NULL); +} +EXPORT_SYMBOL(drm_client_buffer_lock); + +/** + * drm_client_buffer_unlock - Unlock DRM client buffer + * @buffer: DRM client buffer + * + * Unlocks a client buffer. See drm_client_buffer_lock(). + */ +void drm_client_buffer_unlock(struct drm_client_buffer *buffer) +{ + dma_resv_unlock(buffer->gem->resv); +} +EXPORT_SYMBOL(drm_client_buffer_unlock); + /** * drm_client_buffer_vmap - Map DRM client buffer into address space * @buffer: DRM client buffer @@ -305,6 +336,9 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u * other vmap interfaces, you don't need it for the client's vunmap * function. So you can modify it at will during blit and draw operations. * + * drm_client_buffer_vmap() is for permanent or long-term mappings. See + * drm_client_buffer_vmap_local() for short-term mappings. + * * Returns: * 0 on success, or a negative errno code otherwise. */ @@ -348,6 +382,66 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) } EXPORT_SYMBOL(drm_client_buffer_vunmap); +/** + * drm_client_buffer_vmap_local - Map DRM client buffer into address space + * @buffer: DRM client buffer + * @map_copy: Returns the mapped memory's address + * + * This function maps a client buffer into kernel address space. If the + * buffer is already mapped, it returns the existing mapping's address. + * + * Client buffer mappings are not ref'counted. Each call to + * drm_client_buffer_vmap_local() should be followed by a call to + * drm_client_buffer_vunmap_local(); or the client buffer should be mapped + * throughout its lifetime. 
+ * + * The returned address is a copy of the internal value. In contrast to + * other vmap interfaces, you don't need it for the client's vunmap + * function. So you can modify it at will during blit and draw operations. + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int +drm_client_buffer_vmap_local(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) +{ + struct dma_buf_map *map = &buffer->map; + int ret; + + /* + * FIXME: The dependency on GEM here isn't required, we could + * convert the driver handle to a dma-buf instead and use the + * backend-agnostic dma-buf vmap_local support instead. This would + * require that the handle2fd prime ioctl is reworked to pull the + * fd_install step out of the driver backend hooks, to make that + * final step optional for internal users. + */ + ret = drm_gem_vmap_local(buffer->gem, map); + if (ret) + return ret; + + *map_copy = *map; + + return 0; +} +EXPORT_SYMBOL(drm_client_buffer_vmap_local); + +/** + * drm_client_buffer_vunmap_local - Unmap DRM client buffer + * @buffer: DRM client buffer + * + * This function removes a client buffer's memory mapping. Calling this + * function is only required by clients that manage their buffer mappings + * by themselves. + */ +void drm_client_buffer_vunmap_local(struct drm_client_buffer *buffer) +{ + struct dma_buf_map *map = &buffer->map; + + drm_gem_vunmap_local(buffer->gem, map); +} +EXPORT_SYMBOL(drm_client_buffer_vunmap_local); + static void drm_client_buffer_rmfb(struct drm_client_buffer *buffer) { int ret; diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index a44c3a438059..283d27f8fe67 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -399,28 +399,34 @@ static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper, int ret; /* - * We have to pin the client buffer to its current location while - * flushing the shadow buffer. In the general case, concurrent - * modesetting operations could try to move the buffer and would - * fail. The modeset has to be serialized by acquiring the reservation - * object of the underlying BO here. - * * For fbdev emulation, we only have to protect against fbdev modeset * operations. Nothing else will involve the client buffer's BO. So it * is sufficient to acquire struct drm_fb_helper.lock here. */ mutex_lock(&fb_helper->lock); - ret = drm_client_buffer_vmap(buffer, &map); + /* + * We have to keep the client buffer at its current location while + * flushing the shadow buffer. Concurrent operations could otherwise + * try to move the buffer. Therefore acquiring the reservation + * object of the underlying BO here. 
+ */ + ret = drm_client_buffer_lock(buffer); + if (ret) + goto out_mutex_unlock; + + ret = drm_client_buffer_vmap_local(buffer, &map); if (ret) - goto out; + goto out_drm_client_buffer_unlock; dst = map; drm_fb_helper_damage_blit_real(fb_helper, clip, &dst); - drm_client_buffer_vunmap(buffer); + drm_client_buffer_vunmap_local(buffer); -out: +out_drm_client_buffer_unlock: + drm_client_buffer_unlock(buffer); +out_mutex_unlock: mutex_unlock(&fb_helper->lock); return ret; @@ -946,15 +952,11 @@ static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info) drm_modeset_lock_all(fb_helper->dev); drm_client_for_each_modeset(modeset, &fb_helper->client) { crtc = modeset->crtc; - if (!crtc->funcs->gamma_set || !crtc->gamma_size) { - ret = -EINVAL; - goto out; - } + if (!crtc->funcs->gamma_set || !crtc->gamma_size) + return -EINVAL; - if (cmap->start + cmap->len > crtc->gamma_size) { - ret = -EINVAL; - goto out; - } + if (cmap->start + cmap->len > crtc->gamma_size) + return -EINVAL; r = crtc->gamma_store; g = r + crtc->gamma_size; @@ -967,9 +969,8 @@ static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info) ret = crtc->funcs->gamma_set(crtc, r, g, b, crtc->gamma_size, NULL); if (ret) - goto out; + return ret; } -out: drm_modeset_unlock_all(fb_helper->dev); return ret; diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h index f07f2fb02e75..df61e339a11c 100644 --- a/include/drm/drm_client.h +++ b/include/drm/drm_client.h @@ -156,8 +156,12 @@ struct drm_client_buffer * drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); +int drm_client_buffer_lock(struct drm_client_buffer *buffer); +void drm_client_buffer_unlock(struct drm_client_buffer *buffer); int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); +int drm_client_buffer_vmap_local(struct drm_client_buffer *buffer, struct dma_buf_map *map); +void drm_client_buffer_vunmap_local(struct drm_client_buffer *buffer); int drm_client_modeset_create(struct drm_client_dev *client); void drm_client_modeset_free(struct drm_client_dev *client); From patchwork Fri Jan 8 09:43:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 12006233 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B37DFC433E6 for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 65840233EA for ; Fri, 8 Jan 2021 09:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728069AbhAHJpL (ORCPT ); Fri, 8 Jan 2021 04:45:11 -0500 Received: from mx2.suse.de ([195.135.220.15]:36792 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728004AbhAHJpK (ORCPT ); Fri, 8 Jan 2021 
04:45:10 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 27ECFAFF1; Fri, 8 Jan 2021 09:43:53 +0000 (UTC) From: Thomas Zimmermann To: sumit.semwal@linaro.org, christian.koenig@amd.com, airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com, sean@poorly.run, eric@anholt.net, sam@ravnborg.org Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, virtualization@lists.linux-foundation.org, Thomas Zimmermann Subject: [PATCH v4 13/13] drm/vram-helper: Remove unused drm_gem_vram_{vmap,vunmap}() Date: Fri, 8 Jan 2021 10:43:40 +0100 Message-Id: <20210108094340.15290-14-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108094340.15290-1-tzimmermann@suse.de> References: <20210108094340.15290-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org VRAM-helper BOs cannot be exported, so calls for vmap and vunmap can only come from the BOs' drivers or a kernel client. These are supposed to use vmap_local functionality. The vmap and vunmap operations in VRAM helpers are therefore unused and can be removed. Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_gem_vram_helper.c | 98 --------------------------- include/drm/drm_gem_vram_helper.h | 2 - 2 files changed, 100 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index c7fba3a0758e..c7d166cd16df 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -379,72 +379,6 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -/** - * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address - * space - * @gbo: The GEM VRAM object to map - * @map: Returns the kernel virtual address of the VRAM GEM object's backing - * store. - * - * The vmap function pins a GEM VRAM object to its current location, either - * system or video memory, and maps its buffer into kernel address space. - * As pinned object cannot be relocated, you should avoid pinning objects - * permanently. Call drm_gem_vram_vunmap() with the returned address to - * unmap and unpin the GEM VRAM object. - * - * Returns: - * 0 on success, or a negative error code otherwise. - */ -int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) -{ - int ret; - - ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); - if (ret) - return ret; - - ret = drm_gem_vram_pin_locked(gbo, 0); - if (ret) - goto err_ttm_bo_unreserve; - ret = drm_gem_vram_vmap_local(gbo, map); - if (ret) - goto err_drm_gem_vram_unpin_locked; - - ttm_bo_unreserve(&gbo->bo); - - return 0; - -err_drm_gem_vram_unpin_locked: - drm_gem_vram_unpin_locked(gbo); -err_ttm_bo_unreserve: - ttm_bo_unreserve(&gbo->bo); - return ret; -} -EXPORT_SYMBOL(drm_gem_vram_vmap); - -/** - * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object - * @gbo: The GEM VRAM object to unmap - * @map: Kernel virtual address where the VRAM GEM object was mapped - * - * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See - * the documentation for drm_gem_vram_vmap() for more information.
- */ -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) -{ - int ret; - - ret = ttm_bo_reserve(&gbo->bo, false, false, NULL); - if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) - return; - - drm_gem_vram_vunmap_local(gbo, map); - drm_gem_vram_unpin_locked(gbo); - - ttm_bo_unreserve(&gbo->bo); -} -EXPORT_SYMBOL(drm_gem_vram_vunmap); - /** * drm_gem_vram_vmap_local() - Maps a GEM VRAM object into kernel address space * @gbo: The GEM VRAM object to map @@ -870,36 +804,6 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem) drm_gem_vram_unpin(gbo); } -/** - * drm_gem_vram_object_vmap() - - * Implements &struct drm_gem_object_funcs.vmap - * @gem: The GEM object to map - * @map: Returns the kernel virtual address of the VRAM GEM object's backing - * store. - * - * Returns: - * 0 on success, or a negative error code otherwise. - */ -static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) -{ - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - - return drm_gem_vram_vmap(gbo, map); -} - -/** - * drm_gem_vram_object_vunmap() - - * Implements &struct drm_gem_object_funcs.vunmap - * @gem: The GEM object to unmap - * @map: Kernel virtual address where the VRAM GEM object was mapped - */ -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) -{ - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - - drm_gem_vram_vunmap(gbo, map); -} - static int drm_gem_vram_object_vmap_local(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); @@ -922,8 +826,6 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs = { .free = drm_gem_vram_object_free, .pin = drm_gem_vram_object_pin, .unpin = drm_gem_vram_object_unpin, - .vmap = drm_gem_vram_object_vmap, - .vunmap = drm_gem_vram_object_vunmap, .vmap_local = drm_gem_vram_object_vmap_local, .vunmap_local = drm_gem_vram_object_vunmap_local, .mmap = drm_gem_ttm_mmap, diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h index bd6a60e7c22b..e571867f4069 100644 --- a/include/drm/drm_gem_vram_helper.h +++ b/include/drm/drm_gem_vram_helper.h @@ -97,8 +97,6 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo); s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo); int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); -int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); int drm_gem_vram_vmap_local(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); void drm_gem_vram_vunmap_local(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
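
As a usage note for the series as a whole, the fbdev damage worker from patch 12 now brackets the shadow-buffer flush with the new DRM client helpers. The following is a condensed sketch of that sequence, assuming a struct drm_client_buffer *buffer; the surrounding struct drm_fb_helper.lock mutex is omitted and the actual blit step is reduced to a comment:

	struct dma_buf_map map;
	int ret;

	ret = drm_client_buffer_lock(buffer);	/* takes the BO's dma_resv lock */
	if (ret)
		return ret;

	ret = drm_client_buffer_vmap_local(buffer, &map);
	if (ret) {
		drm_client_buffer_unlock(buffer);
		return ret;
	}

	/* flush the shadow buffer into the BO while it is held in place */

	drm_client_buffer_vunmap_local(buffer);
	drm_client_buffer_unlock(buffer);

	return 0;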