From patchwork Wed Sep 6 21:47:09 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 13375955
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com,
    thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com,
    donald.robson@imgtec.com, boris.brezillon@collabora.com,
    christian.koenig@amd.com, faith.ekstrand@collabora.com
Cc: nouveau@lists.freedesktop.org, Danilo Krummrich,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next v2 1/7] drm: gpuva_mgr: allow building as module
Date: Wed, 6 Sep 2023 23:47:09 +0200
Message-ID: <20230906214723.4393-2-dakr@redhat.com>
In-Reply-To: <20230906214723.4393-1-dakr@redhat.com>
References: <20230906214723.4393-1-dakr@redhat.com>

Currently, the DRM GPUVA manager does not have any core dependencies
preventing a module build. Also, new features from subsequent patches
require helpers (namely drm_exec) which can be built as a module.

Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/Kconfig         | 7 +++++++
 drivers/gpu/drm/Makefile        | 2 +-
 drivers/gpu/drm/drm_gpuva_mgr.c | 3 +++
 drivers/gpu/drm/nouveau/Kconfig | 1 +
 4 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ab9ef1c20349..3f2577e10c07 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -216,6 +216,13 @@ config DRM_EXEC
 	help
 	  Execution context for command submissions
 
+config DRM_GPUVA_MGR
+	tristate
+	depends on DRM && DRM_EXEC
+	help
+	  GPU-VM representation providing helpers to manage a GPUs virtual
+	  address space
+
 config DRM_BUDDY
 	tristate
 	depends on DRM
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 215e78e79125..5c10243d77fe 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -45,7 +45,6 @@ drm-y := \
 	drm_vblank.o \
 	drm_vblank_work.o \
 	drm_vma_manager.o \
-	drm_gpuva_mgr.o \
 	drm_writeback.o
 drm-$(CONFIG_DRM_LEGACY) += \
 	drm_agpsupport.o \
@@ -81,6 +80,7 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
 #
 #
 obj-$(CONFIG_DRM_EXEC) += drm_exec.o
+obj-$(CONFIG_DRM_GPUVA_MGR) += drm_gpuva_mgr.o
 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
index f86bfad74ff8..6b5e12e142df 100644
--- a/drivers/gpu/drm/drm_gpuva_mgr.c
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -1723,3 +1723,6 @@ drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
 	kfree(ops);
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_ops_free);
+
+MODULE_DESCRIPTION("DRM GPUVA Manager");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index c52e8096cca4..bf73f6b1ccad 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -11,6 +11,7 @@ config DRM_NOUVEAU
 	select DRM_TTM
 	select DRM_TTM_HELPER
 	select DRM_EXEC
+	select DRM_GPUVA_MGR
 	select DRM_SCHED
 	select I2C
 	select I2C_ALGOBIT
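[Editor's note: with the GPUVA manager buildable as a module, a driver that selects
DRM_GPUVA_MGR keeps using the existing helpers unchanged. The following is a rough,
hypothetical sketch only; the my_* names are invented and not part of this patch,
while the drm_gpuva_manager_* calls are the helpers exported by drm_gpuva_mgr.c.]

#include <drm/drm_gpuva_mgr.h>

/* Hypothetical driver structure embedding the manager (no extra allocation). */
struct my_gpu_vm {
	struct drm_gpuva_manager mgr;
	/* ... driver private state ... */
};

static void my_gpu_vm_init(struct my_gpu_vm *vm, u64 va_start, u64 va_range,
			   u64 kernel_reserved_start, u64 kernel_reserved_range)
{
	/* @vm is expected to be zero-initialized (e.g. kzalloc()'d). */
	drm_gpuva_manager_init(&vm->mgr, "my_vm",
			       va_start, va_range,
			       kernel_reserved_start, kernel_reserved_range,
			       NULL /* or a driver's &drm_gpuva_fn_ops */);
}

static void my_gpu_vm_fini(struct my_gpu_vm *vm)
{
	/* All GPU VA mappings must have been removed at this point. */
	drm_gpuva_manager_destroy(&vm->mgr);
}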
From patchwork Wed Sep 6 21:47:10 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 13375957
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com,
    thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com,
    donald.robson@imgtec.com, boris.brezillon@collabora.com,
    christian.koenig@amd.com, faith.ekstrand@collabora.com
Cc: nouveau@lists.freedesktop.org, Danilo Krummrich,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next v2 2/7] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm
Date: Wed, 6 Sep 2023 23:47:10 +0200
Message-ID: <20230906214723.4393-3-dakr@redhat.com>
In-Reply-To: <20230906214723.4393-1-dakr@redhat.com>
References: <20230906214723.4393-1-dakr@redhat.com>

Rename struct drm_gpuva_manager to struct drm_gpuvm, including the
corresponding functions. This way the GPUVA manager's structures align
very well with the documentation of VM_BIND [1] and VM_BIND locking [2].

It also provides a better foundation for the naming of data structures
and functions introduced for implementing a common dma-resv per GPU-VM,
including tracking of external and evicted objects, in subsequent
patches.

[1] Documentation/gpu/drm-vm-bind-async.rst
[2] Documentation/gpu/drm-vm-bind-locking.rst

Cc: Thomas Hellström
Cc: Matthew Brost
Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_debugfs.c          |  14 +-
 drivers/gpu/drm/drm_gpuva_mgr.c        | 352 ++++++++++++-------------
 drivers/gpu/drm/nouveau/nouveau_exec.c |   2 +-
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  18 +-
 drivers/gpu/drm/nouveau/nouveau_uvmm.h |   4 +-
 include/drm/drm_debugfs.h              |   4 +-
 include/drm/drm_gpuva_mgr.h            | 121 +++++----
 7 files changed, 257 insertions(+), 258 deletions(-)

diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c index 34c7d1a580e3..7e1eab7da7a7 100644 --- a/drivers/gpu/drm/drm_debugfs.c +++ b/drivers/gpu/drm/drm_debugfs.c @@ -187,31 +187,31 @@ static const struct file_operations drm_debugfs_fops = { /** * drm_debugfs_gpuva_info - dump the given DRM GPU VA space * @m: pointer to the &seq_file to write - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * * Dumps the GPU VA mappings of a given DRM GPU VA manager. * * For each DRM GPU VA space drivers should call this function from their * &drm_info_list's show callback.
* - * Returns: 0 on success, -ENODEV if the &mgr is not initialized + * Returns: 0 on success, -ENODEV if the &gpuvm is not initialized */ int drm_debugfs_gpuva_info(struct seq_file *m, - struct drm_gpuva_manager *mgr) + struct drm_gpuvm *gpuvm) { - struct drm_gpuva *va, *kva = &mgr->kernel_alloc_node; + struct drm_gpuva *va, *kva = &gpuvm->kernel_alloc_node; - if (!mgr->name) + if (!gpuvm->name) return -ENODEV; seq_printf(m, "DRM GPU VA space (%s) [0x%016llx;0x%016llx]\n", - mgr->name, mgr->mm_start, mgr->mm_start + mgr->mm_range); + gpuvm->name, gpuvm->mm_start, gpuvm->mm_start + gpuvm->mm_range); seq_printf(m, "Kernel reserved node [0x%016llx;0x%016llx]\n", kva->va.addr, kva->va.addr + kva->va.range); seq_puts(m, "\n"); seq_puts(m, " VAs | start | range | end | object | object offset\n"); seq_puts(m, "-------------------------------------------------------------------------------------------------------------\n"); - drm_gpuva_for_each_va(va, mgr) { + drm_gpuvm_for_each_va(va, gpuvm) { if (unlikely(va == kva)) continue; diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c index 6b5e12e142df..0d9f15f50454 100644 --- a/drivers/gpu/drm/drm_gpuva_mgr.c +++ b/drivers/gpu/drm/drm_gpuva_mgr.c @@ -33,8 +33,8 @@ /** * DOC: Overview * - * The DRM GPU VA Manager, represented by struct drm_gpuva_manager keeps track - * of a GPU's virtual address (VA) space and manages the corresponding virtual + * The DRM GPU VA Manager, represented by struct drm_gpuvm keeps track of a + * GPU's virtual address (VA) space and manages the corresponding virtual * mappings represented by &drm_gpuva objects. It also keeps track of the * mapping's backing &drm_gem_object buffers. * @@ -47,28 +47,28 @@ * The GPU VA manager internally uses a rb-tree to manage the * &drm_gpuva mappings within a GPU's virtual address space. * - * The &drm_gpuva_manager contains a special &drm_gpuva representing the + * The &drm_gpuvm structure contains a special &drm_gpuva representing the * portion of VA space reserved by the kernel. This node is initialized together * with the GPU VA manager instance and removed when the GPU VA manager is * destroyed. * - * In a typical application drivers would embed struct drm_gpuva_manager and + * In a typical application drivers would embed struct drm_gpuvm and * struct drm_gpuva within their own driver specific structures, there won't be * any memory allocations of its own nor memory allocations of &drm_gpuva * entries. * - * The data structures needed to store &drm_gpuvas within the &drm_gpuva_manager - * are contained within struct drm_gpuva already. Hence, for inserting - * &drm_gpuva entries from within dma-fence signalling critical sections it is - * enough to pre-allocate the &drm_gpuva structures. + * The data structures needed to store &drm_gpuvas within the &drm_gpuvm are + * contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva + * entries from within dma-fence signalling critical sections it is enough to + * pre-allocate the &drm_gpuva structures. */ /** * DOC: Split and Merge * * Besides its capability to manage and represent a GPU VA space, the - * &drm_gpuva_manager also provides functions to let the &drm_gpuva_manager - * calculate a sequence of operations to satisfy a given map or unmap request. + * GPU VA manager also provides functions to let the &drm_gpuvm calculate a + * sequence of operations to satisfy a given map or unmap request. 
* * Therefore the DRM GPU VA manager provides an algorithm implementing splitting * and merging of existent GPU VA mappings with the ones that are requested to @@ -83,9 +83,9 @@ * of the GPU VA space. * * Depending on how the new GPU VA mapping intersects with the existent mappings - * of the GPU VA space the &drm_gpuva_fn_ops callbacks contain an arbitrary - * amount of unmap operations, a maximum of two remap operations and a single - * map operation. The caller might receive no callback at all if no operation is + * of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount + * of unmap operations, a maximum of two remap operations and a single map + * operation. The caller might receive no callback at all if no operation is * required, e.g. if the requested mapping already exists in the exact same way. * * The single map operation represents the original map operation requested by @@ -95,7 +95,7 @@ * &drm_gpuva to unmap is physically contiguous with the original mapping * request. Optionally, if 'keep' is set, drivers may keep the actual page table * entries for this &drm_gpuva, adding the missing page table entries only and - * update the &drm_gpuva_manager's view of things accordingly. + * update the &drm_gpuvm's view of things accordingly. * * Drivers may do the same optimization, namely delta page table updates, also * for remap operations. This is possible since &drm_gpuva_op_remap consists of @@ -106,8 +106,8 @@ * the beginning and one at the end of the new mapping, hence there is a * maximum of two remap operations. * - * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuva_fn_ops - * to call back into the driver in order to unmap a range of GPU VA space. The + * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuvm_ops to + * call back into the driver in order to unmap a range of GPU VA space. The * logic behind this function is way simpler though: For all existent mappings * enclosed by the given range unmap operations are created. For mappings which * are only partically located within the given range, remap operations are @@ -124,12 +124,12 @@ * allocations are possible (e.g. to allocate GPU page tables) and once in the * dma-fence signalling critical path. * - * To update the &drm_gpuva_manager's view of the GPU VA space - * drm_gpuva_insert() and drm_gpuva_remove() may be used. These functions can - * safely be used from &drm_gpuva_fn_ops callbacks originating from - * drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). However, it might be more - * convenient to use the provided helper functions drm_gpuva_map(), - * drm_gpuva_remap() and drm_gpuva_unmap() instead. + * To update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert() and + * drm_gpuva_remove() may be used. These functions can safely be used from + * &drm_gpuvm_ops callbacks originating from drm_gpuva_sm_map() or + * drm_gpuva_sm_unmap(). However, it might be more convenient to use the + * provided helper functions drm_gpuva_map(), drm_gpuva_remap() and + * drm_gpuva_unmap() instead. * * The following diagram depicts the basic relationships of existent GPU VA * mappings, a newly requested mapping and the resulting mappings as implemented @@ -421,10 +421,10 @@ * // Allocates a new &drm_gpuva. 
* struct drm_gpuva * driver_gpuva_alloc(void); * - * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva + * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva * // structure in individual driver structures and lock the dma-resv with * // drm_exec or similar helpers. - * int driver_mapping_create(struct drm_gpuva_manager *mgr, + * int driver_mapping_create(struct drm_gpuvm *gpuvm, * u64 addr, u64 range, * struct drm_gem_object *obj, u64 offset) * { @@ -432,7 +432,7 @@ * struct drm_gpuva_op *op * * driver_lock_va_space(); - * ops = drm_gpuva_sm_map_ops_create(mgr, addr, range, + * ops = drm_gpuva_sm_map_ops_create(gpuvm, addr, range, * obj, offset); * if (IS_ERR(ops)) * return PTR_ERR(ops); @@ -448,7 +448,7 @@ * // free memory and unlock * * driver_vm_map(); - * drm_gpuva_map(mgr, va, &op->map); + * drm_gpuva_map(gpuvm, va, &op->map); * drm_gpuva_link(va); * * break; @@ -504,23 +504,23 @@ * 2) Receive a callback for each &drm_gpuva_op to create a new mapping:: * * struct driver_context { - * struct drm_gpuva_manager *mgr; + * struct drm_gpuvm *gpuvm; * struct drm_gpuva *new_va; * struct drm_gpuva *prev_va; * struct drm_gpuva *next_va; * }; * - * // ops to pass to drm_gpuva_manager_init() - * static const struct drm_gpuva_fn_ops driver_gpuva_ops = { + * // ops to pass to drm_gpuvm_init() + * static const struct drm_gpuvm_ops driver_gpuvm_ops = { * .sm_step_map = driver_gpuva_map, * .sm_step_remap = driver_gpuva_remap, * .sm_step_unmap = driver_gpuva_unmap, * }; * - * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva + * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva * // structure in individual driver structures and lock the dma-resv with * // drm_exec or similar helpers. - * int driver_mapping_create(struct drm_gpuva_manager *mgr, + * int driver_mapping_create(struct drm_gpuvm *gpuvm, * u64 addr, u64 range, * struct drm_gem_object *obj, u64 offset) * { @@ -529,7 +529,7 @@ * struct drm_gpuva_op *op; * int ret = 0; * - * ctx.mgr = mgr; + * ctx.gpuvm = gpuvm; * * ctx.new_va = kzalloc(sizeof(*ctx.new_va), GFP_KERNEL); * ctx.prev_va = kzalloc(sizeof(*ctx.prev_va), GFP_KERNEL); @@ -540,7 +540,7 @@ * } * * driver_lock_va_space(); - * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset); + * ret = drm_gpuva_sm_map(gpuvm, &ctx, addr, range, obj, offset); * driver_unlock_va_space(); * * out: @@ -554,7 +554,7 @@ * { * struct driver_context *ctx = __ctx; * - * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map); + * drm_gpuva_map(ctx->vm, ctx->new_va, &op->map); * * drm_gpuva_link(ctx->new_va); * @@ -609,7 +609,7 @@ INTERVAL_TREE_DEFINE(struct drm_gpuva, rb.node, u64, rb.__subtree_last, GPUVA_START, GPUVA_LAST, static __maybe_unused, drm_gpuva_it) -static int __drm_gpuva_insert(struct drm_gpuva_manager *mgr, +static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va); static void __drm_gpuva_remove(struct drm_gpuva *va); @@ -623,121 +623,121 @@ drm_gpuva_check_overflow(u64 addr, u64 range) } static bool -drm_gpuva_in_mm_range(struct drm_gpuva_manager *mgr, u64 addr, u64 range) +drm_gpuva_in_mm_range(struct drm_gpuvm *gpuvm, u64 addr, u64 range) { u64 end = addr + range; - u64 mm_start = mgr->mm_start; - u64 mm_end = mm_start + mgr->mm_range; + u64 mm_start = gpuvm->mm_start; + u64 mm_end = mm_start + gpuvm->mm_range; return addr >= mm_start && end <= mm_end; } static bool -drm_gpuva_in_kernel_node(struct drm_gpuva_manager *mgr, u64 addr, u64 range) +drm_gpuva_in_kernel_node(struct drm_gpuvm *gpuvm, u64 addr, u64 
range) { u64 end = addr + range; - u64 kstart = mgr->kernel_alloc_node.va.addr; - u64 krange = mgr->kernel_alloc_node.va.range; + u64 kstart = gpuvm->kernel_alloc_node.va.addr; + u64 krange = gpuvm->kernel_alloc_node.va.range; u64 kend = kstart + krange; return krange && addr < kend && kstart < end; } static bool -drm_gpuva_range_valid(struct drm_gpuva_manager *mgr, +drm_gpuva_range_valid(struct drm_gpuvm *gpuvm, u64 addr, u64 range) { return !drm_gpuva_check_overflow(addr, range) && - drm_gpuva_in_mm_range(mgr, addr, range) && - !drm_gpuva_in_kernel_node(mgr, addr, range); + drm_gpuva_in_mm_range(gpuvm, addr, range) && + !drm_gpuva_in_kernel_node(gpuvm, addr, range); } /** - * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager - * @mgr: pointer to the &drm_gpuva_manager to initialize + * drm_gpuvm_init() - initialize a &drm_gpuvm + * @gpuvm: pointer to the &drm_gpuvm to initialize * @name: the name of the GPU VA space * @start_offset: the start offset of the GPU VA space * @range: the size of the GPU VA space * @reserve_offset: the start of the kernel reserved GPU VA area * @reserve_range: the size of the kernel reserved GPU VA area - * @ops: &drm_gpuva_fn_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap + * @ops: &drm_gpuvm_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap * - * The &drm_gpuva_manager must be initialized with this function before use. + * The &drm_gpuvm must be initialized with this function before use. * - * Note that @mgr must be cleared to 0 before calling this function. The given + * Note that @gpuvm must be cleared to 0 before calling this function. The given * &name is expected to be managed by the surrounding driver structures. */ void -drm_gpuva_manager_init(struct drm_gpuva_manager *mgr, - const char *name, - u64 start_offset, u64 range, - u64 reserve_offset, u64 reserve_range, - const struct drm_gpuva_fn_ops *ops) +drm_gpuvm_init(struct drm_gpuvm *gpuvm, + const char *name, + u64 start_offset, u64 range, + u64 reserve_offset, u64 reserve_range, + const struct drm_gpuvm_ops *ops) { - mgr->rb.tree = RB_ROOT_CACHED; - INIT_LIST_HEAD(&mgr->rb.list); + gpuvm->rb.tree = RB_ROOT_CACHED; + INIT_LIST_HEAD(&gpuvm->rb.list); drm_gpuva_check_overflow(start_offset, range); - mgr->mm_start = start_offset; - mgr->mm_range = range; + gpuvm->mm_start = start_offset; + gpuvm->mm_range = range; - mgr->name = name ? name : "unknown"; - mgr->ops = ops; + gpuvm->name = name ? name : "unknown"; + gpuvm->ops = ops; - memset(&mgr->kernel_alloc_node, 0, sizeof(struct drm_gpuva)); + memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva)); if (reserve_range) { - mgr->kernel_alloc_node.va.addr = reserve_offset; - mgr->kernel_alloc_node.va.range = reserve_range; + gpuvm->kernel_alloc_node.va.addr = reserve_offset; + gpuvm->kernel_alloc_node.va.range = reserve_range; if (likely(!drm_gpuva_check_overflow(reserve_offset, reserve_range))) - __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node); + __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node); } } -EXPORT_SYMBOL_GPL(drm_gpuva_manager_init); +EXPORT_SYMBOL_GPL(drm_gpuvm_init); /** - * drm_gpuva_manager_destroy() - cleanup a &drm_gpuva_manager - * @mgr: pointer to the &drm_gpuva_manager to clean up + * drm_gpuvm_destroy() - cleanup a &drm_gpuvm + * @gpuvm: pointer to the &drm_gpuvm to clean up * * Note that it is a bug to call this function on a manager that still * holds GPU VA mappings. 
*/ void -drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr) +drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) { - mgr->name = NULL; + gpuvm->name = NULL; - if (mgr->kernel_alloc_node.va.range) - __drm_gpuva_remove(&mgr->kernel_alloc_node); + if (gpuvm->kernel_alloc_node.va.range) + __drm_gpuva_remove(&gpuvm->kernel_alloc_node); - WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root), + WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), "GPUVA tree is not empty, potentially leaking memory."); } -EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy); +EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); static int -__drm_gpuva_insert(struct drm_gpuva_manager *mgr, +__drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va) { struct rb_node *node; struct list_head *head; - if (drm_gpuva_it_iter_first(&mgr->rb.tree, + if (drm_gpuva_it_iter_first(&gpuvm->rb.tree, GPUVA_START(va), GPUVA_LAST(va))) return -EEXIST; - va->mgr = mgr; + va->vm = gpuvm; - drm_gpuva_it_insert(va, &mgr->rb.tree); + drm_gpuva_it_insert(va, &gpuvm->rb.tree); node = rb_prev(&va->rb.node); if (node) head = &(to_drm_gpuva(node))->rb.entry; else - head = &mgr->rb.list; + head = &gpuvm->rb.list; list_add(&va->rb.entry, head); @@ -746,36 +746,36 @@ __drm_gpuva_insert(struct drm_gpuva_manager *mgr, /** * drm_gpuva_insert() - insert a &drm_gpuva - * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in + * @gpuvm: the &drm_gpuvm to insert the &drm_gpuva in * @va: the &drm_gpuva to insert * * Insert a &drm_gpuva with a given address and range into a - * &drm_gpuva_manager. + * &drm_gpuvm. * * It is safe to use this function using the safe versions of iterating the GPU - * VA space, such as drm_gpuva_for_each_va_safe() and - * drm_gpuva_for_each_va_range_safe(). + * VA space, such as drm_gpuvm_for_each_va_safe() and + * drm_gpuvm_for_each_va_range_safe(). * * Returns: 0 on success, negative error code on failure. */ int -drm_gpuva_insert(struct drm_gpuva_manager *mgr, +drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va) { u64 addr = va->va.addr; u64 range = va->va.range; - if (unlikely(!drm_gpuva_range_valid(mgr, addr, range))) + if (unlikely(!drm_gpuva_range_valid(gpuvm, addr, range))) return -EINVAL; - return __drm_gpuva_insert(mgr, va); + return __drm_gpuva_insert(gpuvm, va); } EXPORT_SYMBOL_GPL(drm_gpuva_insert); static void __drm_gpuva_remove(struct drm_gpuva *va) { - drm_gpuva_it_remove(va, &va->mgr->rb.tree); + drm_gpuva_it_remove(va, &va->vm->rb.tree); list_del_init(&va->rb.entry); } @@ -786,15 +786,15 @@ __drm_gpuva_remove(struct drm_gpuva *va) * This removes the given &va from the underlaying tree. * * It is safe to use this function using the safe versions of iterating the GPU - * VA space, such as drm_gpuva_for_each_va_safe() and - * drm_gpuva_for_each_va_range_safe(). + * VA space, such as drm_gpuvm_for_each_va_safe() and + * drm_gpuvm_for_each_va_range_safe(). 
*/ void drm_gpuva_remove(struct drm_gpuva *va) { - struct drm_gpuva_manager *mgr = va->mgr; + struct drm_gpuvm *gpuvm = va->vm; - if (unlikely(va == &mgr->kernel_alloc_node)) { + if (unlikely(va == &gpuvm->kernel_alloc_node)) { WARN(1, "Can't destroy kernel reserved node.\n"); return; } @@ -853,37 +853,37 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unlink); /** * drm_gpuva_find_first() - find the first &drm_gpuva in the given range - * @mgr: the &drm_gpuva_manager to search in + * @gpuvm: the &drm_gpuvm to search in * @addr: the &drm_gpuvas address * @range: the &drm_gpuvas range * * Returns: the first &drm_gpuva within the given range */ struct drm_gpuva * -drm_gpuva_find_first(struct drm_gpuva_manager *mgr, +drm_gpuva_find_first(struct drm_gpuvm *gpuvm, u64 addr, u64 range) { u64 last = addr + range - 1; - return drm_gpuva_it_iter_first(&mgr->rb.tree, addr, last); + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, addr, last); } EXPORT_SYMBOL_GPL(drm_gpuva_find_first); /** * drm_gpuva_find() - find a &drm_gpuva - * @mgr: the &drm_gpuva_manager to search in + * @gpuvm: the &drm_gpuvm to search in * @addr: the &drm_gpuvas address * @range: the &drm_gpuvas range * * Returns: the &drm_gpuva at a given &addr and with a given &range */ struct drm_gpuva * -drm_gpuva_find(struct drm_gpuva_manager *mgr, +drm_gpuva_find(struct drm_gpuvm *gpuvm, u64 addr, u64 range) { struct drm_gpuva *va; - va = drm_gpuva_find_first(mgr, addr, range); + va = drm_gpuva_find_first(gpuvm, addr, range); if (!va) goto out; @@ -900,7 +900,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find); /** * drm_gpuva_find_prev() - find the &drm_gpuva before the given address - * @mgr: the &drm_gpuva_manager to search in + * @gpuvm: the &drm_gpuvm to search in * @start: the given GPU VA's start address * * Find the adjacent &drm_gpuva before the GPU VA with given &start address. @@ -911,18 +911,18 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find); * Returns: a pointer to the found &drm_gpuva or NULL if none was found */ struct drm_gpuva * -drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start) +drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start) { - if (!drm_gpuva_range_valid(mgr, start - 1, 1)) + if (!drm_gpuva_range_valid(gpuvm, start - 1, 1)) return NULL; - return drm_gpuva_it_iter_first(&mgr->rb.tree, start - 1, start); + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, start - 1, start); } EXPORT_SYMBOL_GPL(drm_gpuva_find_prev); /** * drm_gpuva_find_next() - find the &drm_gpuva after the given address - * @mgr: the &drm_gpuva_manager to search in + * @gpuvm: the &drm_gpuvm to search in * @end: the given GPU VA's end address * * Find the adjacent &drm_gpuva after the GPU VA with given &end address. 
@@ -933,47 +933,47 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find_prev); * Returns: a pointer to the found &drm_gpuva or NULL if none was found */ struct drm_gpuva * -drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end) +drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end) { - if (!drm_gpuva_range_valid(mgr, end, 1)) + if (!drm_gpuva_range_valid(gpuvm, end, 1)) return NULL; - return drm_gpuva_it_iter_first(&mgr->rb.tree, end, end + 1); + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, end, end + 1); } EXPORT_SYMBOL_GPL(drm_gpuva_find_next); /** * drm_gpuva_interval_empty() - indicate whether a given interval of the VA space * is empty - * @mgr: the &drm_gpuva_manager to check the range for + * @gpuvm: the &drm_gpuvm to check the range for * @addr: the start address of the range * @range: the range of the interval * * Returns: true if the interval is empty, false otherwise */ bool -drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range) +drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range) { - return !drm_gpuva_find_first(mgr, addr, range); + return !drm_gpuva_find_first(gpuvm, addr, range); } EXPORT_SYMBOL_GPL(drm_gpuva_interval_empty); /** * drm_gpuva_map() - helper to insert a &drm_gpuva according to a * &drm_gpuva_op_map - * @mgr: the &drm_gpuva_manager + * @gpuvm: the &drm_gpuvm * @va: the &drm_gpuva to insert * @op: the &drm_gpuva_op_map to initialize @va with * - * Initializes the @va from the @op and inserts it into the given @mgr. + * Initializes the @va from the @op and inserts it into the given @gpuvm. */ void -drm_gpuva_map(struct drm_gpuva_manager *mgr, +drm_gpuva_map(struct drm_gpuvm *gpuvm, struct drm_gpuva *va, struct drm_gpuva_op_map *op) { drm_gpuva_init_from_op(va, op); - drm_gpuva_insert(mgr, va); + drm_gpuva_insert(gpuvm, va); } EXPORT_SYMBOL_GPL(drm_gpuva_map); @@ -993,18 +993,18 @@ drm_gpuva_remap(struct drm_gpuva *prev, struct drm_gpuva_op_remap *op) { struct drm_gpuva *curr = op->unmap->va; - struct drm_gpuva_manager *mgr = curr->mgr; + struct drm_gpuvm *gpuvm = curr->vm; drm_gpuva_remove(curr); if (op->prev) { drm_gpuva_init_from_op(prev, op->prev); - drm_gpuva_insert(mgr, prev); + drm_gpuva_insert(gpuvm, prev); } if (op->next) { drm_gpuva_init_from_op(next, op->next); - drm_gpuva_insert(mgr, next); + drm_gpuva_insert(gpuvm, next); } } EXPORT_SYMBOL_GPL(drm_gpuva_remap); @@ -1024,7 +1024,7 @@ drm_gpuva_unmap(struct drm_gpuva_op_unmap *op) EXPORT_SYMBOL_GPL(drm_gpuva_unmap); static int -op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv, +op_map_cb(const struct drm_gpuvm_ops *fn, void *priv, u64 addr, u64 range, struct drm_gem_object *obj, u64 offset) { @@ -1040,7 +1040,7 @@ op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv, } static int -op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv, +op_remap_cb(const struct drm_gpuvm_ops *fn, void *priv, struct drm_gpuva_op_map *prev, struct drm_gpuva_op_map *next, struct drm_gpuva_op_unmap *unmap) @@ -1058,7 +1058,7 @@ op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv, } static int -op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv, +op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv, struct drm_gpuva *va, bool merge) { struct drm_gpuva_op op = {}; @@ -1071,8 +1071,8 @@ op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv, } static int -__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, - const struct drm_gpuva_fn_ops *ops, void *priv, +__drm_gpuva_sm_map(struct drm_gpuvm *gpuvm, + const struct drm_gpuvm_ops *ops, void *priv, u64 req_addr, u64 
req_range, struct drm_gem_object *req_obj, u64 req_offset) { @@ -1080,10 +1080,10 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, u64 req_end = req_addr + req_range; int ret; - if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range))) + if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range))) return -EINVAL; - drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) { + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) { struct drm_gem_object *obj = va->gem.obj; u64 offset = va->gem.offset; u64 addr = va->va.addr; @@ -1215,18 +1215,18 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, } static int -__drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, - const struct drm_gpuva_fn_ops *ops, void *priv, +__drm_gpuva_sm_unmap(struct drm_gpuvm *gpuvm, + const struct drm_gpuvm_ops *ops, void *priv, u64 req_addr, u64 req_range) { struct drm_gpuva *va, *next; u64 req_end = req_addr + req_range; int ret; - if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range))) + if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range))) return -EINVAL; - drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) { + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) { struct drm_gpuva_op_map prev = {}, next = {}; bool prev_split = false, next_split = false; struct drm_gem_object *obj = va->gem.obj; @@ -1274,7 +1274,7 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, /** * drm_gpuva_sm_map() - creates the &drm_gpuva_op split/merge steps - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * @req_addr: the start address of the new mapping * @req_range: the range of the new mapping * @req_obj: the &drm_gem_object to map @@ -1282,15 +1282,15 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, * @priv: pointer to a driver private data structure * * This function iterates the given range of the GPU VA space. It utilizes the - * &drm_gpuva_fn_ops to call back into the driver providing the split and merge + * &drm_gpuvm_ops to call back into the driver providing the split and merge * steps. * * Drivers may use these callbacks to update the GPU VA space right away within * the callback. In case the driver decides to copy and store the operations for * later processing neither this function nor &drm_gpuva_sm_unmap is allowed to - * be called before the &drm_gpuva_manager's view of the GPU VA space was + * be called before the &drm_gpuvm's view of the GPU VA space was * updated with the previous set of operations. To update the - * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(), + * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(), * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be * used. 
* @@ -1305,18 +1305,18 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, * Returns: 0 on success or a negative error code */ int -drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv, +drm_gpuva_sm_map(struct drm_gpuvm *gpuvm, void *priv, u64 req_addr, u64 req_range, struct drm_gem_object *req_obj, u64 req_offset) { - const struct drm_gpuva_fn_ops *ops = mgr->ops; + const struct drm_gpuvm_ops *ops = gpuvm->ops; if (unlikely(!(ops && ops->sm_step_map && ops->sm_step_remap && ops->sm_step_unmap))) return -EINVAL; - return __drm_gpuva_sm_map(mgr, ops, priv, + return __drm_gpuva_sm_map(gpuvm, ops, priv, req_addr, req_range, req_obj, req_offset); } @@ -1324,20 +1324,20 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map); /** * drm_gpuva_sm_unmap() - creates the &drm_gpuva_ops to split on unmap - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * @priv: pointer to a driver private data structure * @req_addr: the start address of the range to unmap * @req_range: the range of the mappings to unmap * * This function iterates the given range of the GPU VA space. It utilizes the - * &drm_gpuva_fn_ops to call back into the driver providing the operations to + * &drm_gpuvm_ops to call back into the driver providing the operations to * unmap and, if required, split existent mappings. * * Drivers may use these callbacks to update the GPU VA space right away within * the callback. In case the driver decides to copy and store the operations for * later processing neither this function nor &drm_gpuva_sm_map is allowed to be - * called before the &drm_gpuva_manager's view of the GPU VA space was updated - * with the previous set of operations. To update the &drm_gpuva_manager's view + * called before the &drm_gpuvm's view of the GPU VA space was updated + * with the previous set of operations. To update the &drm_gpuvm's view * of the GPU VA space drm_gpuva_insert(), drm_gpuva_destroy_locked() and/or * drm_gpuva_destroy_unlocked() should be used. 
* @@ -1350,24 +1350,24 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map); * Returns: 0 on success or a negative error code */ int -drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv, +drm_gpuva_sm_unmap(struct drm_gpuvm *gpuvm, void *priv, u64 req_addr, u64 req_range) { - const struct drm_gpuva_fn_ops *ops = mgr->ops; + const struct drm_gpuvm_ops *ops = gpuvm->ops; if (unlikely(!(ops && ops->sm_step_remap && ops->sm_step_unmap))) return -EINVAL; - return __drm_gpuva_sm_unmap(mgr, ops, priv, + return __drm_gpuva_sm_unmap(gpuvm, ops, priv, req_addr, req_range); } EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap); static struct drm_gpuva_op * -gpuva_op_alloc(struct drm_gpuva_manager *mgr) +gpuva_op_alloc(struct drm_gpuvm *gpuvm) { - const struct drm_gpuva_fn_ops *fn = mgr->ops; + const struct drm_gpuvm_ops *fn = gpuvm->ops; struct drm_gpuva_op *op; if (fn && fn->op_alloc) @@ -1382,10 +1382,10 @@ gpuva_op_alloc(struct drm_gpuva_manager *mgr) } static void -gpuva_op_free(struct drm_gpuva_manager *mgr, +gpuva_op_free(struct drm_gpuvm *gpuvm, struct drm_gpuva_op *op) { - const struct drm_gpuva_fn_ops *fn = mgr->ops; + const struct drm_gpuvm_ops *fn = gpuvm->ops; if (fn && fn->op_free) fn->op_free(op); @@ -1398,14 +1398,14 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op, void *priv) { struct { - struct drm_gpuva_manager *mgr; + struct drm_gpuvm *vm; struct drm_gpuva_ops *ops; } *args = priv; - struct drm_gpuva_manager *mgr = args->mgr; + struct drm_gpuvm *gpuvm = args->vm; struct drm_gpuva_ops *ops = args->ops; struct drm_gpuva_op *op; - op = gpuva_op_alloc(mgr); + op = gpuva_op_alloc(gpuvm); if (unlikely(!op)) goto err; @@ -1444,12 +1444,12 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op, err_free_prev: kfree(op->remap.prev); err_free_op: - gpuva_op_free(mgr, op); + gpuva_op_free(gpuvm, op); err: return -ENOMEM; } -static const struct drm_gpuva_fn_ops gpuva_list_ops = { +static const struct drm_gpuvm_ops gpuva_list_ops = { .sm_step_map = drm_gpuva_sm_step, .sm_step_remap = drm_gpuva_sm_step, .sm_step_unmap = drm_gpuva_sm_step, @@ -1457,7 +1457,7 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = { /** * drm_gpuva_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * @req_addr: the start address of the new mapping * @req_range: the range of the new mapping * @req_obj: the &drm_gem_object to map @@ -1476,9 +1476,9 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = { * map operation requested by the caller. * * Note that before calling this function again with another mapping request it - * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The + * is necessary to update the &drm_gpuvm's view of the GPU VA space. The * previously obtained operations must be either processed or abandoned. To - * update the &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(), + * update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(), * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be * used. 
* @@ -1488,13 +1488,13 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = { * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure */ struct drm_gpuva_ops * -drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_sm_map_ops_create(struct drm_gpuvm *gpuvm, u64 req_addr, u64 req_range, struct drm_gem_object *req_obj, u64 req_offset) { struct drm_gpuva_ops *ops; struct { - struct drm_gpuva_manager *mgr; + struct drm_gpuvm *vm; struct drm_gpuva_ops *ops; } args; int ret; @@ -1505,10 +1505,10 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr, INIT_LIST_HEAD(&ops->list); - args.mgr = mgr; + args.vm = gpuvm; args.ops = ops; - ret = __drm_gpuva_sm_map(mgr, &gpuva_list_ops, &args, + ret = __drm_gpuva_sm_map(gpuvm, &gpuva_list_ops, &args, req_addr, req_range, req_obj, req_offset); if (ret) @@ -1517,7 +1517,7 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr, return ops; err_free_ops: - drm_gpuva_ops_free(mgr, ops); + drm_gpuva_ops_free(gpuvm, ops); return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create); @@ -1525,7 +1525,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create); /** * drm_gpuva_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on * unmap - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * @req_addr: the start address of the range to unmap * @req_range: the range of the mappings to unmap * @@ -1540,9 +1540,9 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create); * remap operations. * * Note that before calling this function again with another range to unmap it - * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The + * is necessary to update the &drm_gpuvm's view of the GPU VA space. The * previously obtained operations must be processed or abandoned. To update the - * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(), + * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(), * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be * used. 
* @@ -1552,12 +1552,12 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create); * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure */ struct drm_gpuva_ops * -drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_sm_unmap_ops_create(struct drm_gpuvm *gpuvm, u64 req_addr, u64 req_range) { struct drm_gpuva_ops *ops; struct { - struct drm_gpuva_manager *mgr; + struct drm_gpuvm *vm; struct drm_gpuva_ops *ops; } args; int ret; @@ -1568,10 +1568,10 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr, INIT_LIST_HEAD(&ops->list); - args.mgr = mgr; + args.vm = gpuvm; args.ops = ops; - ret = __drm_gpuva_sm_unmap(mgr, &gpuva_list_ops, &args, + ret = __drm_gpuva_sm_unmap(gpuvm, &gpuva_list_ops, &args, req_addr, req_range); if (ret) goto err_free_ops; @@ -1579,14 +1579,14 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr, return ops; err_free_ops: - drm_gpuva_ops_free(mgr, ops); + drm_gpuva_ops_free(gpuvm, ops); return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create); /** * drm_gpuva_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * @addr: the start address of the range to prefetch * @range: the range of the mappings to prefetch * @@ -1603,7 +1603,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create); * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure */ struct drm_gpuva_ops * -drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_prefetch_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range) { struct drm_gpuva_ops *ops; @@ -1618,8 +1618,8 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr, INIT_LIST_HEAD(&ops->list); - drm_gpuva_for_each_va_range(va, mgr, addr, end) { - op = gpuva_op_alloc(mgr); + drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) { + op = gpuva_op_alloc(gpuvm); if (!op) { ret = -ENOMEM; goto err_free_ops; @@ -1633,14 +1633,14 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr, return ops; err_free_ops: - drm_gpuva_ops_free(mgr, ops); + drm_gpuva_ops_free(gpuvm, ops); return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create); /** * drm_gpuva_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM - * @mgr: the &drm_gpuva_manager representing the GPU VA space + * @gpuvm: the &drm_gpuvm representing the GPU VA space * @obj: the &drm_gem_object to unmap * * This function creates a list of operations to perform unmapping for every @@ -1658,7 +1658,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create); * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure */ struct drm_gpuva_ops * -drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj) { struct drm_gpuva_ops *ops; @@ -1675,7 +1675,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr, INIT_LIST_HEAD(&ops->list); drm_gem_for_each_gpuva(va, obj) { - op = gpuva_op_alloc(mgr); + op = gpuva_op_alloc(gpuvm); if (!op) { ret = -ENOMEM; goto err_free_ops; @@ -1689,21 +1689,21 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr, return ops; err_free_ops: - drm_gpuva_ops_free(mgr, ops); + drm_gpuva_ops_free(gpuvm, ops); return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gpuva_gem_unmap_ops_create); /** * drm_gpuva_ops_free() - free the given &drm_gpuva_ops - * @mgr: the &drm_gpuva_manager the ops were created for + * 
@gpuvm: the &drm_gpuvm the ops were created for * @ops: the &drm_gpuva_ops to free * * Frees the given &drm_gpuva_ops structure including all the ops associated * with it. */ void -drm_gpuva_ops_free(struct drm_gpuva_manager *mgr, +drm_gpuva_ops_free(struct drm_gpuvm *gpuvm, struct drm_gpuva_ops *ops) { struct drm_gpuva_op *op, *next; @@ -1717,7 +1717,7 @@ drm_gpuva_ops_free(struct drm_gpuva_manager *mgr, kfree(op->remap.unmap); } - gpuva_op_free(mgr, op); + gpuva_op_free(gpuvm, op); } kfree(ops); diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c index a90c4cd8cbb2..c001952cd678 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.c +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c @@ -106,7 +106,7 @@ nouveau_exec_job_submit(struct nouveau_job *job) drm_exec_until_all_locked(exec) { struct drm_gpuva *va; - drm_gpuva_for_each_va(va, &uvmm->umgr) { + drm_gpuvm_for_each_va(va, &uvmm->umgr) { if (unlikely(va == &uvmm->umgr.kernel_alloc_node)) continue; diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index aae780e4a4aa..16c7ea3cee13 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -444,7 +444,7 @@ op_map_prepare_unwind(struct nouveau_uvma *uvma) static void op_unmap_prepare_unwind(struct drm_gpuva *va) { - drm_gpuva_insert(va->mgr, va); + drm_gpuva_insert(va->vm, va); } static void @@ -1836,11 +1836,11 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, uvmm->kernel_managed_addr = kernel_managed_addr; uvmm->kernel_managed_size = kernel_managed_size; - drm_gpuva_manager_init(&uvmm->umgr, cli->name, - NOUVEAU_VA_SPACE_START, - NOUVEAU_VA_SPACE_END, - kernel_managed_addr, kernel_managed_size, - NULL); + drm_gpuvm_init(&uvmm->umgr, cli->name, + NOUVEAU_VA_SPACE_START, + NOUVEAU_VA_SPACE_END, + kernel_managed_addr, kernel_managed_size, + NULL); ret = nvif_vmm_ctor(&cli->mmu, "uvmm", cli->vmm.vmm.object.oclass, RAW, @@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, return 0; out_free_gpuva_mgr: - drm_gpuva_manager_destroy(&uvmm->umgr); + drm_gpuvm_destroy(&uvmm->umgr); out_unlock: mutex_unlock(&cli->mutex); return ret; @@ -1877,7 +1877,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) wait_event(entity->job.wq, list_empty(&entity->job.list.head)); nouveau_uvmm_lock(uvmm); - drm_gpuva_for_each_va_safe(va, next, &uvmm->umgr) { + drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) { struct nouveau_uvma *uvma = uvma_from_va(va); struct drm_gem_object *obj = va->gem.obj; @@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) mutex_lock(&cli->mutex); nouveau_vmm_fini(&uvmm->vmm); - drm_gpuva_manager_destroy(&uvmm->umgr); + drm_gpuvm_destroy(&uvmm->umgr); mutex_unlock(&cli->mutex); dma_resv_fini(&uvmm->resv); diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h index fc7f6fd2a4e1..1402a2385e39 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h @@ -9,7 +9,7 @@ struct nouveau_uvmm { struct nouveau_vmm vmm; - struct drm_gpuva_manager umgr; + struct drm_gpuvm umgr; struct maple_tree region_mt; struct mutex mutex; struct dma_resv resv; @@ -44,7 +44,7 @@ struct nouveau_uvma { #define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr) #define uvma_from_va(x) container_of((x), struct nouveau_uvma, va) -#define to_uvmm(x) uvmm_from_mgr((x)->va.mgr) +#define to_uvmm(x) uvmm_from_mgr((x)->va.vm) struct 
nouveau_uvmm_bind_job { struct nouveau_job base; diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h index 7213ce15e4ff..5dbe6e972660 100644 --- a/include/drm/drm_debugfs.h +++ b/include/drm/drm_debugfs.h @@ -152,7 +152,7 @@ void drm_debugfs_add_files(struct drm_device *dev, const struct drm_debugfs_info *files, int count); int drm_debugfs_gpuva_info(struct seq_file *m, - struct drm_gpuva_manager *mgr); + struct drm_gpuvm *gpuvm); #else static inline void drm_debugfs_create_files(const struct drm_info_list *files, int count, struct dentry *root, @@ -176,7 +176,7 @@ static inline void drm_debugfs_add_files(struct drm_device *dev, {} static inline int drm_debugfs_gpuva_info(struct seq_file *m, - struct drm_gpuva_manager *mgr) + struct drm_gpuvm *gpuvm) { return 0; } diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h index ed8d50200cc3..1aa99a4d5665 100644 --- a/include/drm/drm_gpuva_mgr.h +++ b/include/drm/drm_gpuva_mgr.h @@ -31,8 +31,8 @@ #include -struct drm_gpuva_manager; -struct drm_gpuva_fn_ops; +struct drm_gpuvm; +struct drm_gpuvm_ops; /** * enum drm_gpuva_flags - flags for struct drm_gpuva @@ -62,15 +62,15 @@ enum drm_gpuva_flags { * struct drm_gpuva - structure to track a GPU VA mapping * * This structure represents a GPU VA mapping and is associated with a - * &drm_gpuva_manager. + * &drm_gpuvm. * * Typically, this structure is embedded in bigger driver structures. */ struct drm_gpuva { /** - * @mgr: the &drm_gpuva_manager this object is associated with + * @gpuvm: the &drm_gpuvm this object is associated with */ - struct drm_gpuva_manager *mgr; + struct drm_gpuvm *vm; /** * @flags: the &drm_gpuva_flags for this mapping @@ -137,20 +137,20 @@ struct drm_gpuva { } rb; }; -int drm_gpuva_insert(struct drm_gpuva_manager *mgr, struct drm_gpuva *va); +int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va); void drm_gpuva_remove(struct drm_gpuva *va); void drm_gpuva_link(struct drm_gpuva *va); void drm_gpuva_unlink(struct drm_gpuva *va); -struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr, +struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm, u64 addr, u64 range); -struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuva_manager *mgr, +struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm, u64 addr, u64 range); -struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start); -struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end); +struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start); +struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end); -bool drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range); +bool drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range); static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range, struct drm_gem_object *obj, u64 offset) @@ -186,7 +186,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va) } /** - * struct drm_gpuva_manager - DRM GPU VA Manager + * struct drm_gpuvm - DRM GPU VA Manager * * The DRM GPU VA Manager keeps track of a GPU's virtual address space by using * &maple_tree structures. Typically, this structure is embedded in bigger @@ -197,7 +197,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va) * * There should be one manager instance per GPU virtual address space. 
*/ -struct drm_gpuva_manager { +struct drm_gpuvm { /** * @name: the name of the DRM GPU VA space */ @@ -237,100 +237,99 @@ struct drm_gpuva_manager { struct drm_gpuva kernel_alloc_node; /** - * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers + * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers */ - const struct drm_gpuva_fn_ops *ops; + const struct drm_gpuvm_ops *ops; }; -void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr, - const char *name, - u64 start_offset, u64 range, - u64 reserve_offset, u64 reserve_range, - const struct drm_gpuva_fn_ops *ops); -void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr); +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, + u64 start_offset, u64 range, + u64 reserve_offset, u64 reserve_range, + const struct drm_gpuvm_ops *ops); +void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm); static inline struct drm_gpuva * __drm_gpuva_next(struct drm_gpuva *va) { - if (va && !list_is_last(&va->rb.entry, &va->mgr->rb.list)) + if (va && !list_is_last(&va->rb.entry, &va->vm->rb.list)) return list_next_entry(va, rb.entry); return NULL; } /** - * drm_gpuva_for_each_va_range() - iterate over a range of &drm_gpuvas + * drm_gpuvm_for_each_va_range() - iterate over a range of &drm_gpuvas * @va__: &drm_gpuva structure to assign to in each iteration step - * @mgr__: &drm_gpuva_manager to walk over + * @gpuvm__: &drm_gpuvm to walk over * @start__: starting offset, the first gpuva will overlap this * @end__: ending offset, the last gpuva will start before this (but may * overlap) * - * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie + * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie * between @start__ and @end__. It is implemented similarly to list_for_each(), - * but is using the &drm_gpuva_manager's internal interval tree to accelerate + * but is using the &drm_gpuvm's internal interval tree to accelerate * the search for the starting &drm_gpuva, and hence isn't safe against removal * of elements. It assumes that @end__ is within (or is the upper limit of) the - * &drm_gpuva_manager. This iterator does not skip over the &drm_gpuva_manager's + * &drm_gpuvm. This iterator does not skip over the &drm_gpuvm's * @kernel_alloc_node. */ -#define drm_gpuva_for_each_va_range(va__, mgr__, start__, end__) \ - for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)); \ +#define drm_gpuvm_for_each_va_range(va__, gpuvm__, start__, end__) \ + for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)); \ va__ && (va__->va.addr < (end__)); \ va__ = __drm_gpuva_next(va__)) /** - * drm_gpuva_for_each_va_range_safe() - safely iterate over a range of + * drm_gpuvm_for_each_va_range_safe() - safely iterate over a range of * &drm_gpuvas * @va__: &drm_gpuva to assign to in each iteration step * @next__: another &drm_gpuva to use as temporary storage - * @mgr__: &drm_gpuva_manager to walk over + * @gpuvm__: &drm_gpuvm to walk over * @start__: starting offset, the first gpuva will overlap this * @end__: ending offset, the last gpuva will start before this (but may * overlap) * - * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie + * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie * between @start__ and @end__. 
It is implemented similarly to - * list_for_each_safe(), but is using the &drm_gpuva_manager's internal interval + * list_for_each_safe(), but is using the &drm_gpuvm's internal interval * tree to accelerate the search for the starting &drm_gpuva, and hence is safe * against removal of elements. It assumes that @end__ is within (or is the - * upper limit of) the &drm_gpuva_manager. This iterator does not skip over the - * &drm_gpuva_manager's @kernel_alloc_node. + * upper limit of) the &drm_gpuvm. This iterator does not skip over the + * &drm_gpuvm's @kernel_alloc_node. */ -#define drm_gpuva_for_each_va_range_safe(va__, next__, mgr__, start__, end__) \ - for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)), \ +#define drm_gpuvm_for_each_va_range_safe(va__, next__, gpuvm__, start__, end__) \ + for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)), \ next__ = __drm_gpuva_next(va__); \ va__ && (va__->va.addr < (end__)); \ va__ = next__, next__ = __drm_gpuva_next(va__)) /** - * drm_gpuva_for_each_va() - iterate over all &drm_gpuvas + * drm_gpuvm_for_each_va() - iterate over all &drm_gpuvas * @va__: &drm_gpuva to assign to in each iteration step - * @mgr__: &drm_gpuva_manager to walk over + * @gpuvm__: &drm_gpuvm to walk over * * This iterator walks over all &drm_gpuva structures associated with the given - * &drm_gpuva_manager. + * &drm_gpuvm. */ -#define drm_gpuva_for_each_va(va__, mgr__) \ - list_for_each_entry(va__, &(mgr__)->rb.list, rb.entry) +#define drm_gpuvm_for_each_va(va__, gpuvm__) \ + list_for_each_entry(va__, &(gpuvm__)->rb.list, rb.entry) /** - * drm_gpuva_for_each_va_safe() - safely iterate over all &drm_gpuvas + * drm_gpuvm_for_each_va_safe() - safely iterate over all &drm_gpuvas * @va__: &drm_gpuva to assign to in each iteration step * @next__: another &drm_gpuva to use as temporary storage - * @mgr__: &drm_gpuva_manager to walk over + * @gpuvm__: &drm_gpuvm to walk over * * This iterator walks over all &drm_gpuva structures associated with the given - * &drm_gpuva_manager. It is implemented with list_for_each_entry_safe(), and + * &drm_gpuvm. It is implemented with list_for_each_entry_safe(), and * hence safe against the removal of elements. */ -#define drm_gpuva_for_each_va_safe(va__, next__, mgr__) \ - list_for_each_entry_safe(va__, next__, &(mgr__)->rb.list, rb.entry) +#define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \ + list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry) /** * enum drm_gpuva_op_type - GPU VA operation type * - * Operations to alter the GPU VA mappings tracked by the &drm_gpuva_manager. + * Operations to alter the GPU VA mappings tracked by the &drm_gpuvm. */ enum drm_gpuva_op_type { /** @@ -413,7 +412,7 @@ struct drm_gpuva_op_unmap { * * Optionally, if &keep is set, drivers may keep the actual page table * mappings for this &drm_gpuva, adding the missing page table entries - * only and update the &drm_gpuva_manager accordingly. + * only and update the &drm_gpuvm accordingly. 
*/ bool keep; }; @@ -584,22 +583,22 @@ struct drm_gpuva_ops { #define drm_gpuva_next_op(op) list_next_entry(op, entry) struct drm_gpuva_ops * -drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_sm_map_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range, struct drm_gem_object *obj, u64 offset); struct drm_gpuva_ops * -drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_sm_unmap_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range); struct drm_gpuva_ops * -drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_prefetch_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range); struct drm_gpuva_ops * -drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr, +drm_gpuva_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj); -void drm_gpuva_ops_free(struct drm_gpuva_manager *mgr, +void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm, struct drm_gpuva_ops *ops); static inline void drm_gpuva_init_from_op(struct drm_gpuva *va, @@ -610,15 +609,15 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va, } /** - * struct drm_gpuva_fn_ops - callbacks for split/merge steps + * struct drm_gpuvm_ops - callbacks for split/merge steps * * This structure defines the callbacks used by &drm_gpuva_sm_map and * &drm_gpuva_sm_unmap to provide the split/merge steps for map and unmap * operations to drivers. */ -struct drm_gpuva_fn_ops { +struct drm_gpuvm_ops { /** - * @op_alloc: called when the &drm_gpuva_manager allocates + * @op_alloc: called when the &drm_gpuvm allocates * a struct drm_gpuva_op * * Some drivers may want to embed struct drm_gpuva_op into driver @@ -630,7 +629,7 @@ struct drm_gpuva_fn_ops { struct drm_gpuva_op *(*op_alloc)(void); /** - * @op_free: called when the &drm_gpuva_manager frees a + * @op_free: called when the &drm_gpuvm frees a * struct drm_gpuva_op * * Some drivers may want to embed struct drm_gpuva_op into driver @@ -686,14 +685,14 @@ struct drm_gpuva_fn_ops { int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv); }; -int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv, +int drm_gpuva_sm_map(struct drm_gpuvm *gpuvm, void *priv, u64 addr, u64 range, struct drm_gem_object *obj, u64 offset); -int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv, +int drm_gpuva_sm_unmap(struct drm_gpuvm *gpuvm, void *priv, u64 addr, u64 range); -void drm_gpuva_map(struct drm_gpuva_manager *mgr, +void drm_gpuva_map(struct drm_gpuvm *gpuvm, struct drm_gpuva *va, struct drm_gpuva_op_map *op); From patchwork Wed Sep 6 21:47:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13375956 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 70E64EE14D0 for ; Wed, 6 Sep 2023 21:47:51 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E106210E1C4; Wed, 6 Sep 2023 21:47:45 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0647610E66E for ; Wed, 6 Sep 2023 21:47:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=redhat.com; s=mimecast20190719; t=1694036861; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=zMiCP9X5SYt9K8RSwHF3u0zLTDBiJv9tu5/7ZPDJx/Q=; b=GH9OrWo56p5EfoyQqlMqLFo6ymHJ7dNOOxG6692HIwEbO5I6UR2z1n5yMWZF+dqUq1LnZX 0VPXA/v0OygfgQajn9Tghh6QOpOr/O4ZxZAVosUkIMefuxwxLJhe94RiFKvvb7tk3brSQm LbK24YLjAR0naxzp7TwX3iETBfRomwg= Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com [209.85.208.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-479-KoMIOLSvMOST7yWVjPqZWQ-1; Wed, 06 Sep 2023 17:47:39 -0400 X-MC-Unique: KoMIOLSvMOST7yWVjPqZWQ-1 Received: by mail-ed1-f70.google.com with SMTP id 4fb4d7f45d1cf-50bf847b267so142934a12.3 for ; Wed, 06 Sep 2023 14:47:39 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694036858; x=1694641658; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=zMiCP9X5SYt9K8RSwHF3u0zLTDBiJv9tu5/7ZPDJx/Q=; b=YmgBsZQTmjqtCex4tPHT5Va2BoV3b+4z9PjFOxKyN26WPpxrkl0GQW/bMP/S/VqoHV 4Sx53UfocPxgjcae9mGTT8qGT1cYRtVZFalaTM7BdKpDqV3U2zTt9TzKfo/3FBfz2v21 7NT4B40qiq9e/+pmYzDxI6pznv34c2MhVfiQLNTwNVlJ6stoIVbEs3tmwp6m2KoNfeeF Bm2A9iCMYnbBQwyHDmST8jCLxSxyf+2vVr7V+blOEi4/DVdaLbkfT+jQrjgoJUjt9NIr dF5Z9U1o+TlWuGW1P6azvdTsFJWUmShagkkbUbRztq+W25zahAAM8JVGu9qKd7af7aXi BTlg== X-Gm-Message-State: AOJu0YwgLrSfEL6fu3614ZFYPlfoNmwtY7TmfkTkvdrtV0ByamOh+riJ 51wkfxz1gCpTs2Zlb6WAY8pa8k+/U1LBml+KtibYuwKcdupsan5Hb5oRxqO1GChRb5r2Ev4w1Pe Zk/TnWVQIWtZOVxg/bIKxlnvne1AY X-Received: by 2002:aa7:d1ca:0:b0:524:9564:4fee with SMTP id g10-20020aa7d1ca000000b0052495644feemr3614112edp.10.1694036858662; Wed, 06 Sep 2023 14:47:38 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFuYz6aAxx9YUXbnolQ4+mbiXJ9HO5u+GNgQmkP3FWg3jEJsI+/RrdXfc8+SFAe+d29wrIdBA== X-Received: by 2002:aa7:d1ca:0:b0:524:9564:4fee with SMTP id g10-20020aa7d1ca000000b0052495644feemr3614104edp.10.1694036858439; Wed, 06 Sep 2023 14:47:38 -0700 (PDT) Received: from cassiopeiae.. 
([2a02:810d:4b3f:de9c:642:1aff:fe31:a19f]) by smtp.gmail.com with ESMTPSA id j16-20020aa7ca50000000b0052a3ad836basm8884785edt.41.2023.09.06.14.47.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 14:47:38 -0700 (PDT) From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com Subject: [PATCH drm-misc-next v2 3/7] drm/nouveau: uvmm: rename 'umgr' to 'base' Date: Wed, 6 Sep 2023 23:47:11 +0200 Message-ID: <20230906214723.4393-4-dakr@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230906214723.4393-1-dakr@redhat.com> References: <20230906214723.4393-1-dakr@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Rename struct drm_gpuvm within struct nouveau_uvmm from 'umgr' to base. Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/nouveau/nouveau_debugfs.c | 2 +- drivers/gpu/drm/nouveau/nouveau_exec.c | 4 +-- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 32 +++++++++++------------ drivers/gpu/drm/nouveau/nouveau_uvmm.h | 6 ++--- 4 files changed, 22 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c index 053f703f2f68..e83db051e851 100644 --- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c +++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c @@ -231,7 +231,7 @@ nouveau_debugfs_gpuva(struct seq_file *m, void *data) continue; nouveau_uvmm_lock(uvmm); - drm_debugfs_gpuva_info(m, &uvmm->umgr); + drm_debugfs_gpuva_info(m, &uvmm->base); seq_puts(m, "\n"); nouveau_debugfs_gpuva_regions(m, uvmm); nouveau_uvmm_unlock(uvmm); diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c index c001952cd678..b4239af29e5a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.c +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c @@ -106,8 +106,8 @@ nouveau_exec_job_submit(struct nouveau_job *job) drm_exec_until_all_locked(exec) { struct drm_gpuva *va; - drm_gpuvm_for_each_va(va, &uvmm->umgr) { - if (unlikely(va == &uvmm->umgr.kernel_alloc_node)) + drm_gpuvm_for_each_va(va, &uvmm->base) { + if (unlikely(va == &uvmm->base.kernel_alloc_node)) continue; ret = drm_exec_prepare_obj(exec, va->gem.obj, 1); diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index 16c7ea3cee13..4c47b2279674 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -329,7 +329,7 @@ nouveau_uvma_region_create(struct nouveau_uvmm *uvmm, struct nouveau_uvma_region *reg; int ret; - if (!drm_gpuva_interval_empty(&uvmm->umgr, addr, range)) + if (!drm_gpuva_interval_empty(&uvmm->base, addr, range)) return -ENOSPC; ret = nouveau_uvma_region_alloc(®); @@ -384,7 +384,7 @@ nouveau_uvma_region_empty(struct nouveau_uvma_region *reg) { struct nouveau_uvmm *uvmm = reg->uvmm; - return drm_gpuva_interval_empty(&uvmm->umgr, + return drm_gpuva_interval_empty(&uvmm->base, reg->va.addr, reg->va.range); } @@ -589,7 +589,7 
@@ op_map_prepare(struct nouveau_uvmm *uvmm, uvma->region = args->region; uvma->kind = args->kind; - drm_gpuva_map(&uvmm->umgr, &uvma->va, op); + drm_gpuva_map(&uvmm->base, &uvma->va, op); /* Keep a reference until this uvma is destroyed. */ nouveau_uvma_gem_get(uvma); @@ -1194,7 +1194,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) goto unwind_continue; } - op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr, + op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->base, op->va.addr, op->va.range); if (IS_ERR(op->ops)) { @@ -1205,7 +1205,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new, op->ops); if (ret) { - drm_gpuva_ops_free(&uvmm->umgr, op->ops); + drm_gpuva_ops_free(&uvmm->base, op->ops); op->ops = NULL; op->reg = NULL; goto unwind_continue; @@ -1240,7 +1240,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) } } - op->ops = drm_gpuva_sm_map_ops_create(&uvmm->umgr, + op->ops = drm_gpuva_sm_map_ops_create(&uvmm->base, op->va.addr, op->va.range, op->gem.obj, @@ -1256,7 +1256,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) op->va.range, op->flags & 0xff); if (ret) { - drm_gpuva_ops_free(&uvmm->umgr, op->ops); + drm_gpuva_ops_free(&uvmm->base, op->ops); op->ops = NULL; goto unwind_continue; } @@ -1264,7 +1264,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) break; } case OP_UNMAP: - op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr, + op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->base, op->va.addr, op->va.range); if (IS_ERR(op->ops)) { @@ -1275,7 +1275,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new, op->ops); if (ret) { - drm_gpuva_ops_free(&uvmm->umgr, op->ops); + drm_gpuva_ops_free(&uvmm->base, op->ops); op->ops = NULL; goto unwind_continue; } @@ -1404,7 +1404,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) break; } - drm_gpuva_ops_free(&uvmm->umgr, op->ops); + drm_gpuva_ops_free(&uvmm->base, op->ops); op->ops = NULL; op->reg = NULL; } @@ -1509,7 +1509,7 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work) } if (!IS_ERR_OR_NULL(op->ops)) - drm_gpuva_ops_free(&uvmm->umgr, op->ops); + drm_gpuva_ops_free(&uvmm->base, op->ops); if (obj) drm_gem_object_put(obj); @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, uvmm->kernel_managed_addr = kernel_managed_addr; uvmm->kernel_managed_size = kernel_managed_size; - drm_gpuvm_init(&uvmm->umgr, cli->name, + drm_gpuvm_init(&uvmm->base, cli->name, NOUVEAU_VA_SPACE_START, NOUVEAU_VA_SPACE_END, kernel_managed_addr, kernel_managed_size, @@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, return 0; out_free_gpuva_mgr: - drm_gpuvm_destroy(&uvmm->umgr); + drm_gpuvm_destroy(&uvmm->base); out_unlock: mutex_unlock(&cli->mutex); return ret; @@ -1877,11 +1877,11 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) wait_event(entity->job.wq, list_empty(&entity->job.list.head)); nouveau_uvmm_lock(uvmm); - drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) { + drm_gpuvm_for_each_va_safe(va, next, &uvmm->base) { struct nouveau_uvma *uvma = uvma_from_va(va); struct drm_gem_object *obj = va->gem.obj; - if (unlikely(va == &uvmm->umgr.kernel_alloc_node)) + if (unlikely(va == &uvmm->base.kernel_alloc_node)) continue; drm_gpuva_remove(va); @@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) mutex_lock(&cli->mutex); nouveau_vmm_fini(&uvmm->vmm); - drm_gpuvm_destroy(&uvmm->umgr); + 
drm_gpuvm_destroy(&uvmm->base); mutex_unlock(&cli->mutex); dma_resv_fini(&uvmm->resv); diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h index 1402a2385e39..ca63228ea82a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h @@ -8,8 +8,8 @@ #include "nouveau_drv.h" struct nouveau_uvmm { + struct drm_gpuvm base; struct nouveau_vmm vmm; - struct drm_gpuvm umgr; struct maple_tree region_mt; struct mutex mutex; struct dma_resv resv; @@ -41,10 +41,10 @@ struct nouveau_uvma { u8 kind; }; -#define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr) +#define uvmm_from_gpuvm(x) container_of((x), struct nouveau_uvmm, base) #define uvma_from_va(x) container_of((x), struct nouveau_uvma, va) -#define to_uvmm(x) uvmm_from_mgr((x)->va.vm) +#define to_uvmm(x) uvmm_from_gpuvm((x)->va.vm) struct nouveau_uvmm_bind_job { struct nouveau_job base; From patchwork Wed Sep 6 21:47:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13375959 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5D35AEE14D6 for ; Wed, 6 Sep 2023 21:47:54 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BCA9B10E20A; Wed, 6 Sep 2023 21:47:49 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1E4FB10E721 for ; Wed, 6 Sep 2023 21:47:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1694036865; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gOYwJUtARBOFO+IhtMGCnrTn9nS2L4Us2OmhtN72cww=; b=HV2VHdRQKk4NRRul5+zP21MFJx6g2Ebu37ickIcolr1FpQTCt+1OefQvxwjf47AKLV1N9m pZlpFjzGrhFgRvOey/1tqoMTwojtWQaAXjxLoX+iaz9VcVIjafazvhIL2UeGtXWH85Ljgv 6ixYuttDk5BBiBraa8f8J+9stgM+Yb4= Received: from mail-lf1-f72.google.com (mail-lf1-f72.google.com [209.85.167.72]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-77-o7iobWUpPi65qA-Avx0P3w-1; Wed, 06 Sep 2023 17:47:43 -0400 X-MC-Unique: o7iobWUpPi65qA-Avx0P3w-1 Received: by mail-lf1-f72.google.com with SMTP id 2adb3069b0e04-501bf3722dfso210917e87.3 for ; Wed, 06 Sep 2023 14:47:43 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694036862; x=1694641662; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gOYwJUtARBOFO+IhtMGCnrTn9nS2L4Us2OmhtN72cww=; b=Zl4kFc3M5A64U67ok79BXlIiySuLjwA/4M0nUuRYhLV5Dlg93v8ys3uYdJ6IgJDl6S JN51lIyJX4JoSoFDxiOjJAlPPLlSUW9kyQQ/mSIoKXyIstdd5fNcuEtH2XMQAtA61UYq 0/Hm0ihzvjk8hBsUqVpyjQbh/ISeNKDcWc+2DUICaEmmWRZxrh0Slh/Acs3euxxbBdN1 NSVnPQVfG7d1DBu02QL348UHY/ZMZJ8oAtsLtDyp4nJRS1RpF7DrDRX+pwWK7wmVg29o Zfl7VcuwR56zbAWdZYEXLVDATivquF0Ua0xKt2Kl0TyBT5nbiYWY68efmmkz1qnNMQtW 
nDAg== X-Gm-Message-State: AOJu0Ywrjvfq+Duph2SwRR/Zr7YuxLN/eJsHb+2NfFQFZTLZdiurYo+R xHgTPX4xf1GAY7y+iI8m8ciH9FHXa4R9gEtHqPnjyKqgrZNggQPs1V5/4PW08WCtnFXN1PN+VjO X8szsFY+9eISmXdCENYlbGfF/6bJE X-Received: by 2002:a05:6512:282c:b0:500:c5df:1872 with SMTP id cf44-20020a056512282c00b00500c5df1872mr3719430lfb.44.1694036862496; Wed, 06 Sep 2023 14:47:42 -0700 (PDT) X-Google-Smtp-Source: AGHT+IE9gfgEv9BjU8q6WhoWjT05KtviFb1ftdsssVn+ZTKx7Ur+135JyyGX0IGxaMvUPWby7AfAcA== X-Received: by 2002:a05:6512:282c:b0:500:c5df:1872 with SMTP id cf44-20020a056512282c00b00500c5df1872mr3719410lfb.44.1694036862282; Wed, 06 Sep 2023 14:47:42 -0700 (PDT) Received: from cassiopeiae.. ([2a02:810d:4b3f:de9c:642:1aff:fe31:a19f]) by smtp.gmail.com with ESMTPSA id n18-20020a056402515200b0052a3aa50d72sm8929983edd.40.2023.09.06.14.47.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 14:47:41 -0700 (PDT) From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com Subject: [PATCH drm-misc-next v2 4/7] drm/gpuvm: common dma-resv per struct drm_gpuvm Date: Wed, 6 Sep 2023 23:47:12 +0200 Message-ID: <20230906214723.4393-5-dakr@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230906214723.4393-1-dakr@redhat.com> References: <20230906214723.4393-1-dakr@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Provide a common dma-resv for GEM objects not being used outside of this GPU-VM. This is used in a subsequent patch to generalize dma-resv, external and evicted object handling and GEM validation. Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/drm_gpuva_mgr.c | 10 ++++++++-- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +- include/drm/drm_gpuva_mgr.h | 15 ++++++++++++++- 3 files changed, 23 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c index 0d9f15f50454..838277794990 100644 --- a/drivers/gpu/drm/drm_gpuva_mgr.c +++ b/drivers/gpu/drm/drm_gpuva_mgr.c @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm, /** * drm_gpuvm_init() - initialize a &drm_gpuvm * @gpuvm: pointer to the &drm_gpuvm to initialize + * @drm: the drivers &drm_device * @name: the name of the GPU VA space * @start_offset: the start offset of the GPU VA space * @range: the size of the GPU VA space @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm, * &name is expected to be managed by the surrounding driver structures. 
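To illustrate the extended interface, a minimal initialization sketch could look as follows; struct my_vm, my_vm_ops and the address space bounds are placeholders, the drm_gpuvm_init() arguments otherwise follow this patch.

struct my_vm {
	struct drm_gpuvm base;
	/* driver specific state */
};

static const struct drm_gpuvm_ops my_vm_ops = {
	/* the driver's split/merge and allocation callbacks */
};

static void
my_vm_setup(struct my_vm *vm, struct drm_device *drm)
{
	/* The &drm_device is passed in so the VM can initialize a dummy GEM
	 * object whose dma-resv serves as the VM's common resv.
	 */
	drm_gpuvm_init(&vm->base, drm, "my-vm",
		       0, 1ULL << 47,	/* VA space start / range */
		       0, 0,		/* no kernel reserved node */
		       &my_vm_ops);

	/* GEM objects private to this VM may now share vm->base.resv. */
}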
*/ void -drm_gpuvm_init(struct drm_gpuvm *gpuvm, +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm, const char *name, u64 start_offset, u64 range, u64 reserve_offset, u64 reserve_range, @@ -694,6 +695,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, reserve_range))) __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node); } + + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0); + gpuvm->resv = gpuvm->d_obj.resv; } EXPORT_SYMBOL_GPL(drm_gpuvm_init); @@ -713,7 +717,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) __drm_gpuva_remove(&gpuvm->kernel_alloc_node); WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), - "GPUVA tree is not empty, potentially leaking memory."); + "GPUVA tree is not empty, potentially leaking memory.\n"); + + drm_gem_private_object_fini(&gpuvm->d_obj); } EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index 4c47b2279674..21d5a06241ae 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, uvmm->kernel_managed_addr = kernel_managed_addr; uvmm->kernel_managed_size = kernel_managed_size; - drm_gpuvm_init(&uvmm->base, cli->name, + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name, NOUVEAU_VA_SPACE_START, NOUVEAU_VA_SPACE_END, kernel_managed_addr, kernel_managed_size, diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h index 1aa99a4d5665..e09e3e3ac5b2 100644 --- a/include/drm/drm_gpuva_mgr.h +++ b/include/drm/drm_gpuva_mgr.h @@ -240,9 +240,22 @@ struct drm_gpuvm { * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers */ const struct drm_gpuvm_ops *ops; + + /** + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs + * dma-resv to &drm_exec. 
+ */ + struct drm_gem_object d_obj; + + /** + * @resv: the &dma_resv for &drm_gem_objects mapped in this GPU VA + * space + */ + struct dma_resv *resv; }; -void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm, + const char *name, u64 start_offset, u64 range, u64 reserve_offset, u64 reserve_range, const struct drm_gpuvm_ops *ops); From patchwork Wed Sep 6 21:47:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13375958 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2BE0BEE14D0 for ; Wed, 6 Sep 2023 21:47:56 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 07F8C10E721; Wed, 6 Sep 2023 21:47:53 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id CBBDF10E716 for ; Wed, 6 Sep 2023 21:47:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1694036870; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MTItrdSJoUtDed7O3Chun0vMjXHPGJ83BCDeqpRqljU=; b=P+kGq962Nmca0/4Ee6PqGgCZmCBlhmlWuUU5AplRRx9eHFlooOSMdIf0/Kkh8nO1hx2+07 giggxs0hs+Tr20pbw4+22C44efPeHzSBeCuCMWg7kDtM632crsSEmFfEMiyaLdRXjqXjAr DpUYJPO5LYbpn1TW1hCRib8qTXmkdjc= Received: from mail-ej1-f71.google.com (mail-ej1-f71.google.com [209.85.218.71]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-491-5Pl1mNbZPI-c3nQe6CrpKw-1; Wed, 06 Sep 2023 17:47:48 -0400 X-MC-Unique: 5Pl1mNbZPI-c3nQe6CrpKw-1 Received: by mail-ej1-f71.google.com with SMTP id a640c23a62f3a-9a647551b7dso29216966b.1 for ; Wed, 06 Sep 2023 14:47:48 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694036867; x=1694641667; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MTItrdSJoUtDed7O3Chun0vMjXHPGJ83BCDeqpRqljU=; b=dVkrGuKUynT+ApD8JsLitTB8MyUCRfv+zfU8lUMG216YZiJ1EkXLM9PQlG/k1q6+a4 mQ8ZSkVTiSRgsTdsFznJAv2lmxdEULCeLklmVb4zdGDZuesY66ORLOfEqnHoTPD3nE4z rN7Wp45hsdKIrMpL8Id1WWn/tsqRkEBzEX+NAs/VQNp9BCTE5s6a1H7EpwCCPb/LT3R2 54oerBGso+3CzEzohAT03+Qou3fDCuCodaNZQ/RfoAB+N1p97iYYFdfB3oSv7udxXqJY DwJk2xqA86QsUM7xHIEK8kuZJUf3RCPU4BGwRwziCNqvGY/0AAfSDf/X0HM8et8UPejr 9zMw== X-Gm-Message-State: AOJu0YyMeKj+CAJ9/f7aPGAWNwYq370Bs4A3xqobBNWThK9xfC2rZN3V WX1hQQIEpGxA+xSSKZOCIYFb014KO3UYmW6qMt93Babe3/wkHvY0PSfWOZzhkw6vBkqMydnk9Cw 46pVQL99yDYGgFiJHP1TQdpdvsbeY X-Received: by 2002:a17:907:7758:b0:9a1:d1a0:41ad with SMTP id kx24-20020a170907775800b009a1d1a041admr1199442ejc.21.1694036866950; Wed, 06 Sep 2023 14:47:46 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFeicHOxHWwybNaTFb9qijYEr448VVqV7O/aC8/WYi9+ITWm+2xHcHBScSzk8cr+SgeTXpj4g== X-Received: by 2002:a17:907:7758:b0:9a1:d1a0:41ad with SMTP 
id kx24-20020a170907775800b009a1d1a041admr1199418ejc.21.1694036866460; Wed, 06 Sep 2023 14:47:46 -0700 (PDT) Received: from cassiopeiae.. ([2a02:810d:4b3f:de9c:642:1aff:fe31:a19f]) by smtp.gmail.com with ESMTPSA id j20-20020a17090686d400b00992ca779f42sm9536715ejy.97.2023.09.06.14.47.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 14:47:45 -0700 (PDT) From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com Subject: [PATCH drm-misc-next v2 5/7] drm/gpuvm: add an abstraction for a VM / BO combination Date: Wed, 6 Sep 2023 23:47:13 +0200 Message-ID: <20230906214723.4393-6-dakr@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230906214723.4393-1-dakr@redhat.com> References: <20230906214723.4393-1-dakr@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This patch adds an abstraction layer between the drm_gpuva mappings of a particular drm_gem_object and this GEM object itself. The abstraction represents a combination of a drm_gem_object and drm_gpuvm. The drm_gem_object holds a list of drm_gpuvm_bo structures (the structure representing this abstraction), while each drm_gpuvm_bo contains list of mappings of this GEM object. This has multiple advantages: 1) We can use the drm_gpuvm_bo structure to attach it to various lists of the drm_gpuvm. This is useful for tracking external and evicted objects per VM, which is introduced in subsequent patches. 2) Finding mappings of a certain drm_gem_object mapped in a certain drm_gpuvm becomes much cheaper. 3) Drivers can derive and extend the structure to easily represent driver specific states of a BO for a certain GPUVM. The idea of this abstraction was taken from amdgpu, hence the credit for this idea goes to the developers of amdgpu. Cc: Christian König Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/drm_gpuva_mgr.c | 210 +++++++++++++++++++++++-- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 77 ++++++--- include/drm/drm_gem.h | 48 ++++-- include/drm/drm_gpuva_mgr.h | 153 +++++++++++++++++- 4 files changed, 437 insertions(+), 51 deletions(-) diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c index 838277794990..da7a6e1aabe0 100644 --- a/drivers/gpu/drm/drm_gpuva_mgr.c +++ b/drivers/gpu/drm/drm_gpuva_mgr.c @@ -723,6 +723,161 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) } EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); +/** + * drm_gpuvm_bo_create() - create a new instance of struct drm_gpuvm_bo + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * + * If provided by the driver, this function uses the &drm_gpuvm_ops + * vm_bo_alloc() callback to allocate. 
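As a sketch of the cheaper lookup mentioned in point 2) above, a driver could check whether a GEM object has any mapping in a given VM roughly like this; my_obj_is_mapped() is hypothetical and the caller is assumed to hold the GEM's gpuva lock (its dma-resv by default).

static bool
my_obj_is_mapped(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj)
{
	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(gpuvm, obj);

	if (!vm_bo)
		return false;

	/* drm_gpuvm_bo_find() returned a reference, drop it again. */
	drm_gpuvm_bo_put(vm_bo);
	return true;
}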
+ * + * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on failure + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + const struct drm_gpuvm_ops *ops = gpuvm->ops; + struct drm_gpuvm_bo *vm_bo; + + if (ops && ops->vm_bo_alloc) + vm_bo = ops->vm_bo_alloc(); + else + vm_bo = kzalloc(sizeof(*vm_bo), GFP_KERNEL); + + if (unlikely(!vm_bo)) + return NULL; + + vm_bo->vm = gpuvm; + vm_bo->obj = obj; + + kref_init(&vm_bo->kref); + INIT_LIST_HEAD(&vm_bo->list.gpuva); + INIT_LIST_HEAD(&vm_bo->list.entry.gem); + + drm_gem_object_get(obj); + + return vm_bo; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create); + +void +drm_gpuvm_bo_destroy(struct kref *kref) +{ + struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo, + kref); + const struct drm_gpuvm_ops *ops = vm_bo->vm->ops; + + drm_gem_object_put(vm_bo->obj); + + if (ops && ops->vm_bo_free) + ops->vm_bo_free(vm_bo); + else + kfree(vm_bo); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_destroy); + +static struct drm_gpuvm_bo * +__drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + struct drm_gpuvm_bo *vm_bo; + + drm_gem_gpuva_assert_lock_held(obj); + + drm_gem_for_each_gpuva_gem(vm_bo, obj) + if (vm_bo->vm == gpuvm) + return vm_bo; + + return NULL; +} + +/** + * drm_gpuvm_bo_find() - find the &drm_gpuvm_bo for the given + * &drm_gpuvm and &drm_gem_object + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * + * Find the &drm_gpuvm_bo representing the combination of the given + * &drm_gpuvm and &drm_gem_object. If found, increases the reference + * count of the &drm_gpuvm_bo accordingly. + * + * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on failure + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + struct drm_gpuvm_bo *vm_bo = __drm_gpuvm_bo_find(gpuvm, obj); + + return vm_bo ? drm_gpuvm_bo_get(vm_bo) : NULL; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find); + +/** + * drm_gpuvm_bo_obtain() - obtains and instance of the &drm_gpuvm_bo for the + * given &drm_gpuvm and &drm_gem_object + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * + * Find the &drm_gpuvm_bo representing the combination of the given + * &drm_gpuvm and &drm_gem_object. If found, increases the reference + * count of the &drm_gpuvm_bo accordingly. If not found, allocates a new + * &drm_gpuvm_bo. + * + * Returns: a pointer to the &drm_gpuvm_bo on success, an ERR_PTR on failure + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + struct drm_gpuvm_bo *vm_bo; + + vm_bo = drm_gpuvm_bo_find(gpuvm, obj); + if (vm_bo) + return vm_bo; + + vm_bo = drm_gpuvm_bo_create(gpuvm, obj); + if (!vm_bo) + return ERR_PTR(-ENOMEM); + + return vm_bo; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain); + +/** + * drm_gpuvm_bo_obtain_prealloc() - obtains and instance of the &drm_gpuvm_bo + * for the given &drm_gpuvm and &drm_gem_object + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * @__vm_bo: A pre-allocated struct drm_gpuvm_bo. + * + * Find the &drm_gpuvm_bo representing the combination of the given + * &drm_gpuvm and &drm_gem_object. If found, increases the reference + * count of the found &drm_gpuvm_bo accordingly, while the @__vm_bo reference + * count is decreased. 
If not found @__vm_bo is returned without further + * increase of the reference count. + * + * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing + * &drm_gpuvm_bo was found + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj, + struct drm_gpuvm_bo *__vm_bo) +{ + struct drm_gpuvm_bo *vm_bo; + + vm_bo = drm_gpuvm_bo_find(gpuvm, obj); + if (vm_bo) { + drm_gpuvm_bo_put(__vm_bo); + return vm_bo; + } + + return __vm_bo; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_prealloc); + static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va) @@ -812,15 +967,20 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remove); /** * drm_gpuva_link() - link a &drm_gpuva * @va: the &drm_gpuva to link + * @vm_bo: the &drm_gpuvm_bo to add the &drm_gpuva to * - * This adds the given &va to the GPU VA list of the &drm_gem_object it is - * associated with. + * This adds the given &va to the GPU VA list of the &drm_gpuvm_bo and the + * &drm_gpuvm_bo to the &drm_gem_object it is associated with. + * + * For every &drm_gpuva entry added to the &drm_gpuvm_bo an additional + * reference of the latter is taken. * * This function expects the caller to protect the GEM's GPUVA list against - * concurrent access using the GEMs dma_resv lock. + * concurrent access using either the GEMs dma_resv lock or a driver specific + * lock set through drm_gem_gpuva_set_lock(). */ void -drm_gpuva_link(struct drm_gpuva *va) +drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo) { struct drm_gem_object *obj = va->gem.obj; @@ -829,7 +989,10 @@ drm_gpuva_link(struct drm_gpuva *va) drm_gem_gpuva_assert_lock_held(obj); - list_add_tail(&va->gem.entry, &obj->gpuva.list); + drm_gpuvm_bo_get(vm_bo); + list_add_tail(&va->gem.entry, &vm_bo->list.gpuva); + if (list_empty(&vm_bo->list.entry.gem)) + list_add_tail(&vm_bo->list.entry.gem, &obj->gpuva.list); } EXPORT_SYMBOL_GPL(drm_gpuva_link); @@ -840,20 +1003,40 @@ EXPORT_SYMBOL_GPL(drm_gpuva_link); * This removes the given &va from the GPU VA list of the &drm_gem_object it is * associated with. * + * This removes the given &va from the GPU VA list of the &drm_gpuvm_bo and + * the &drm_gpuvm_bo from the &drm_gem_object it is associated with in case + * this call unlinks the last &drm_gpuva from the &drm_gpuvm_bo. + * + * For every &drm_gpuva entry removed from the &drm_gpuvm_bo a reference of + * the latter is dropped. + * * This function expects the caller to protect the GEM's GPUVA list against - * concurrent access using the GEMs dma_resv lock. + * concurrent access using either the GEMs dma_resv lock or a driver specific + * lock set through drm_gem_gpuva_set_lock(). */ void drm_gpuva_unlink(struct drm_gpuva *va) { struct drm_gem_object *obj = va->gem.obj; + struct drm_gpuvm_bo *vm_bo; if (unlikely(!obj)) return; drm_gem_gpuva_assert_lock_held(obj); + vm_bo = __drm_gpuvm_bo_find(va->vm, obj); + if (WARN(!vm_bo, "GPUVA doesn't seem to be linked.\n")) + return; + list_del_init(&va->gem.entry); + + /* This is the last mapping being unlinked for this GEM object, hence + * also remove the VM_BO from the GEM's gpuva list. 
+ */ + if (list_empty(&vm_bo->list.gpuva)) + list_del_init(&vm_bo->list.entry.gem); + drm_gpuvm_bo_put(vm_bo); } EXPORT_SYMBOL_GPL(drm_gpuva_unlink); @@ -998,10 +1181,10 @@ drm_gpuva_remap(struct drm_gpuva *prev, struct drm_gpuva *next, struct drm_gpuva_op_remap *op) { - struct drm_gpuva *curr = op->unmap->va; - struct drm_gpuvm *gpuvm = curr->vm; + struct drm_gpuva *va = op->unmap->va; + struct drm_gpuvm *gpuvm = va->vm; - drm_gpuva_remove(curr); + drm_gpuva_remove(va); if (op->prev) { drm_gpuva_init_from_op(prev, op->prev); @@ -1645,7 +1828,7 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuvm *gpuvm, EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create); /** - * drm_gpuva_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM + * drm_gpuvm_bo_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM * @gpuvm: the &drm_gpuvm representing the GPU VA space * @obj: the &drm_gem_object to unmap * @@ -1664,11 +1847,12 @@ EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create); * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure */ struct drm_gpuva_ops * -drm_gpuva_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, +drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj) { struct drm_gpuva_ops *ops; struct drm_gpuva_op *op; + struct drm_gpuvm_bo *vm_bo; struct drm_gpuva *va; int ret; @@ -1680,7 +1864,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, INIT_LIST_HEAD(&ops->list); - drm_gem_for_each_gpuva(va, obj) { + drm_gem_for_each_gpuva(va, vm_bo, gpuvm, obj) { op = gpuva_op_alloc(gpuvm); if (!op) { ret = -ENOMEM; @@ -1698,7 +1882,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, drm_gpuva_ops_free(gpuvm, ops); return ERR_PTR(ret); } -EXPORT_SYMBOL_GPL(drm_gpuva_gem_unmap_ops_create); +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_unmap_ops_create); /** * drm_gpuva_ops_free() - free the given &drm_gpuva_ops diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index 21d5a06241ae..a93483a4ceb5 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -71,6 +71,7 @@ struct bind_job_op { u32 handle; u64 offset; struct drm_gem_object *obj; + struct drm_gpuvm_bo *vm_bo; } gem; struct nouveau_uvma_region *reg; @@ -1113,22 +1114,34 @@ bind_validate_region(struct nouveau_job *job) } static void -bind_link_gpuvas(struct drm_gpuva_ops *ops, struct nouveau_uvma_prealloc *new) +bind_link_gpuvas(struct bind_job_op *bop) { + struct nouveau_uvma_prealloc *new = &bop->new; + struct drm_gpuvm_bo *vm_bo = bop->gem.vm_bo; + struct drm_gpuva_ops *ops = bop->ops; struct drm_gpuva_op *op; drm_gpuva_for_each_op(op, ops) { switch (op->op) { case DRM_GPUVA_OP_MAP: - drm_gpuva_link(&new->map->va); + drm_gpuva_link(&new->map->va, vm_bo); break; - case DRM_GPUVA_OP_REMAP: + case DRM_GPUVA_OP_REMAP: { + struct drm_gpuva *va = op->remap.unmap->va; + struct drm_gpuvm_bo *vm_bo; + + vm_bo = drm_gpuvm_bo_find(va->vm, va->gem.obj); + BUG_ON(!vm_bo); + if (op->remap.prev) - drm_gpuva_link(&new->prev->va); + drm_gpuva_link(&new->prev->va, vm_bo); if (op->remap.next) - drm_gpuva_link(&new->next->va); - drm_gpuva_unlink(op->remap.unmap->va); + drm_gpuva_link(&new->next->va, vm_bo); + drm_gpuva_unlink(va); + + drm_gpuvm_bo_put(vm_bo); break; + } case DRM_GPUVA_OP_UNMAP: drm_gpuva_unlink(op->unmap.va); break; @@ -1150,10 +1163,22 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) list_for_each_op(op, &bind_job->ops) { if (op->op == OP_MAP) { - op->gem.obj = 
drm_gem_object_lookup(job->file_priv, - op->gem.handle); - if (!op->gem.obj) + struct drm_gem_object *obj; + + obj = drm_gem_object_lookup(job->file_priv, + op->gem.handle); + if (!obj) return -ENOENT; + + dma_resv_lock(obj->resv, NULL); + op->gem.vm_bo = drm_gpuvm_bo_obtain(&uvmm->base, obj); + dma_resv_unlock(obj->resv); + if (IS_ERR(op->gem.vm_bo)) { + drm_gem_object_put(obj); + return PTR_ERR(op->gem.vm_bo); + } + + op->gem.obj = obj; } ret = bind_validate_op(job, op); @@ -1364,7 +1389,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) case OP_UNMAP_SPARSE: case OP_MAP: case OP_UNMAP: - bind_link_gpuvas(op->ops, &op->new); + bind_link_gpuvas(op); break; default: break; @@ -1511,6 +1536,12 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work) if (!IS_ERR_OR_NULL(op->ops)) drm_gpuva_ops_free(&uvmm->base, op->ops); + if (!IS_ERR_OR_NULL(op->gem.vm_bo)) { + dma_resv_lock(obj->resv, NULL); + drm_gpuvm_bo_put(op->gem.vm_bo); + dma_resv_unlock(obj->resv); + } + if (obj) drm_gem_object_put(obj); } @@ -1776,15 +1807,18 @@ void nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbo, struct nouveau_mem *mem) { struct drm_gem_object *obj = &nvbo->bo.base; + struct drm_gpuvm_bo *vm_bo; struct drm_gpuva *va; dma_resv_assert_held(obj->resv); - drm_gem_for_each_gpuva(va, obj) { - struct nouveau_uvma *uvma = uvma_from_va(va); + drm_gem_for_each_gpuva_gem(vm_bo, obj) { + drm_gpuvm_bo_for_each_va(va, vm_bo) { + struct nouveau_uvma *uvma = uvma_from_va(va); - nouveau_uvma_map(uvma, mem); - drm_gpuva_invalidate(va, false); + nouveau_uvma_map(uvma, mem); + drm_gpuva_invalidate(va, false); + } } } @@ -1792,15 +1826,18 @@ void nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo) { struct drm_gem_object *obj = &nvbo->bo.base; + struct drm_gpuvm_bo *vm_bo; struct drm_gpuva *va; dma_resv_assert_held(obj->resv); - drm_gem_for_each_gpuva(va, obj) { - struct nouveau_uvma *uvma = uvma_from_va(va); + drm_gem_for_each_gpuva_gem(vm_bo, obj) { + drm_gpuvm_bo_for_each_va(va, vm_bo) { + struct nouveau_uvma *uvma = uvma_from_va(va); - nouveau_uvma_unmap(uvma); - drm_gpuva_invalidate(va, true); + nouveau_uvma_unmap(uvma); + drm_gpuva_invalidate(va, true); + } } } @@ -1847,14 +1884,14 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, kernel_managed_addr, kernel_managed_size, NULL, 0, &cli->uvmm.vmm.vmm); if (ret) - goto out_free_gpuva_mgr; + goto out_free_gpuvm; cli->uvmm.vmm.cli = cli; mutex_unlock(&cli->mutex); return 0; -out_free_gpuva_mgr: +out_free_gpuvm: drm_gpuvm_destroy(&uvmm->base); out_unlock: mutex_unlock(&cli->mutex); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index bc9f6aa2f3fe..7f591e76d74d 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -571,7 +571,7 @@ int drm_gem_evict(struct drm_gem_object *obj); * drm_gem_gpuva_init() - initialize the gpuva list of a GEM object * @obj: the &drm_gem_object * - * This initializes the &drm_gem_object's &drm_gpuva list. + * This initializes the &drm_gem_object's &drm_gpuvm_bo list. * * Calling this function is only necessary for drivers intending to support the * &drm_driver_feature DRIVER_GEM_GPUVA. 
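The nouveau hunks above show the resulting two-level iteration; in generic form it boils down to the sketch below, where my_invalidate_all() is a made-up example and the iterators are the ones introduced by this patch.

static void
my_invalidate_all(struct drm_gem_object *obj)
{
	struct drm_gpuvm_bo *vm_bo;
	struct drm_gpuva *va;

	dma_resv_assert_held(obj->resv);

	/* Outer loop: all VMs this GEM object is mapped in (one vm_bo each),
	 * inner loop: all mappings of the object within that VM.
	 */
	drm_gem_for_each_gpuva_gem(vm_bo, obj)
		drm_gpuvm_bo_for_each_va(va, vm_bo)
			drm_gpuva_invalidate(va, true);
}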
@@ -584,28 +584,44 @@ static inline void drm_gem_gpuva_init(struct drm_gem_object *obj) } /** - * drm_gem_for_each_gpuva() - iternator to walk over a list of gpuvas - * @entry__: &drm_gpuva structure to assign to in each iteration step - * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * drm_gem_for_each_gpuva_gem() - iterator to walk over a list of &drm_gpuvm_bo + * @entry__: &drm_gpuvm_bo structure to assign to in each iteration step + * @obj__: the &drm_gem_object the &drm_gpuvm_bo to walk are associated with * - * This iterator walks over all &drm_gpuva structures associated with the - * &drm_gpuva_manager. + * This iterator walks over all &drm_gpuvm_bo structures associated with the + * &drm_gem_object. */ -#define drm_gem_for_each_gpuva(entry__, obj__) \ - list_for_each_entry(entry__, &(obj__)->gpuva.list, gem.entry) +#define drm_gem_for_each_gpuva_gem(entry__, obj__) \ + list_for_each_entry(entry__, &(obj__)->gpuva.list, list.entry.gem) /** - * drm_gem_for_each_gpuva_safe() - iternator to safely walk over a list of - * gpuvas - * @entry__: &drm_gpuva structure to assign to in each iteration step - * @next__: &next &drm_gpuva to store the next step - * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * drm_gem_for_each_gpuva_gem_safe() - iterator to safely walk over a list of + * &drm_gpuvm_bo + * @entry__: &drm_gpuvm_bostructure to assign to in each iteration step + * @next__: &next &drm_gpuvm_bo to store the next step + * @obj__: the &drm_gem_object the &drm_gpuvm_bo to walk are associated with * - * This iterator walks over all &drm_gpuva structures associated with the + * This iterator walks over all &drm_gpuvm_bo structures associated with the * &drm_gem_object. It is implemented with list_for_each_entry_safe(), hence * it is save against removal of elements. */ -#define drm_gem_for_each_gpuva_safe(entry__, next__, obj__) \ - list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, gem.entry) +#define drm_gem_for_each_gpuva_gem_safe(entry__, next__, obj__) \ + list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, list.entry.gem) + +/** + * drm_gem_for_each_gpuva() - iterator to walk over a list of &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @vm_bo__: the &drm_gpuvm_bo representing the @gpuvm__ and @obj__ combination + * @gpuvm__: the &drm_gpuvm the &drm_gpuvas to walk are associated with + * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuvm and &drm_gem_object. + */ +#define drm_gem_for_each_gpuva(va__, vm_bo__, gpuvm__, obj__) \ + for (vm_bo__ = drm_gpuvm_bo_find(gpuvm__, obj__), \ + va__ = vm_bo__ ? 
list_first_entry(&vm_bo__->list.gpuva, typeof(*va__), gem.entry) : NULL; \ + va__ && !list_entry_is_head(va__, &vm_bo__->list.gpuva, gem.entry); \ + va__ = list_next_entry(va__, gem.entry)) #endif /* __DRM_GEM_H__ */ diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h index e09e3e3ac5b2..f8f29c1c858d 100644 --- a/include/drm/drm_gpuva_mgr.h +++ b/include/drm/drm_gpuva_mgr.h @@ -32,6 +32,7 @@ #include struct drm_gpuvm; +struct drm_gpuvm_bo; struct drm_gpuvm_ops; /** @@ -140,7 +141,7 @@ struct drm_gpuva { int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va); void drm_gpuva_remove(struct drm_gpuva *va); -void drm_gpuva_link(struct drm_gpuva *va); +void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo); void drm_gpuva_unlink(struct drm_gpuva *va); struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm, @@ -339,6 +340,130 @@ __drm_gpuva_next(struct drm_gpuva *va) #define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \ list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry) +/** + * struct drm_gpuvm_bo - structure representing a &drm_gpuvm and + * &drm_gem_object combination + * + * This structure is an abstraction representing a &drm_gpuvm and + * &drm_gem_object combination. It serves as an indirection to accelerate + * iterating all &drm_gpuvas within a &drm_gpuvm backed by the same + * &drm_gem_object. + * + * Furthermore it is used cache evicted GEM objects for a certain GPU-VM to + * accelerate validation. + * + * Typically, drivers want to create an instance of a struct drm_gpuvm_bo once + * a GEM object is mapped first in a GPU-VM and release the instance once the + * last mapping of the GEM object in this GPU-VM is unmapped. + */ +struct drm_gpuvm_bo { + + /** + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + */ + struct drm_gpuvm *vm; + + /** + * @obj: The &drm_gem_object being mapped in the @gpuvm. + */ + struct drm_gem_object *obj; + + /** + * @kref: The reference count for this &drm_gpuvm_bo. + */ + struct kref kref; + + /** + * @list: Structure containing all &list_heads. + */ + struct { + /** + * @gpuva: The list of linked &drm_gpuvas. + */ + struct list_head gpuva; + + /** + * @entry: Structure containing all &list_heads serving as + * entry. + */ + struct { + /** + * @gem: List entry to attach to the &drm_gem_objects + * gpuva list. + */ + struct list_head gem; + } entry; + } list; +}; + +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj, + struct drm_gpuvm_bo *__vm_bo); + +struct drm_gpuvm_bo * +drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); + +struct drm_gpuvm_bo * +drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); +void drm_gpuvm_bo_destroy(struct kref *kref); + +/** + * drm_gpuvm_bo_get() - acquire a struct drm_gpuvm_bo reference + * @vm_bo: the &drm_gpuvm_bo to acquire the reference of + * + * This function acquires an additional reference to @vm_bo. It is illegal to + * call this without already holding a reference. No locks required. + */ +static inline struct drm_gpuvm_bo * +drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo) +{ + kref_get(&vm_bo->kref); + return vm_bo; +} + +/** + * drm_gpuvm_bo_put() - drop a struct drm_gpuvm_bo reference + * @vm_bo: the &drm_gpuvm_bo to release the reference of + * + * This releases a reference to @vm_bo. 
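A common reference flow when creating a mapping might look like the sketch below; my_map_one() is illustrative, the &drm_gpuva is assumed to be already initialized with the GEM object and inserted into the VM, and error handling is trimmed. Nouveau in this series keeps the obtain() reference until job cleanup instead, but dropping it right after linking also works, since drm_gpuva_link() takes its own reference as documented above.

static int
my_map_one(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
	   struct drm_gpuva *va)
{
	struct drm_gpuvm_bo *vm_bo;

	dma_resv_lock(obj->resv, NULL);

	/* Find the existing VM/BO combination or create a new one. */
	vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj);
	if (IS_ERR(vm_bo)) {
		dma_resv_unlock(obj->resv);
		return PTR_ERR(vm_bo);
	}

	/* Linking takes an additional vm_bo reference ... */
	drm_gpuva_link(va, vm_bo);
	/* ... hence the obtain() reference can be dropped again. */
	drm_gpuvm_bo_put(vm_bo);

	dma_resv_unlock(obj->resv);
	return 0;
}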
+ */ +static inline void +drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo) +{ + kref_put(&vm_bo->kref, drm_gpuvm_bo_destroy); +} + +/** + * drm_gpuvm_bo_for_each_va() - iterator to walk over a list of &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @vm_bo__: the &drm_gpuvm_bo the &drm_gpuva to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuvm_bo. + */ +#define drm_gpuvm_bo_for_each_va(va__, vm_bo__) \ + list_for_each_entry(va__, &(vm_bo)->list.gpuva, gem.entry) + +/** + * drm_gpuvm_bo_for_each_va_safe() - iterator to safely walk over a list of + * &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @next__: &next &drm_gpuva to store the next step + * @vm_bo__: the &drm_gpuvm_bo the &drm_gpuva to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuvm_bo. It is implemented with list_for_each_entry_safe(), hence + * it is save against removal of elements. + */ +#define drm_gpuvm_bo_for_each_va_safe(va__, next__, vm_bo__) \ + list_for_each_entry_safe(va__, next__, &(vm_bo)->list.gpuva, gem.entry) + /** * enum drm_gpuva_op_type - GPU VA operation type * @@ -608,7 +733,7 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range); struct drm_gpuva_ops * -drm_gpuva_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, +drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj); void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm, @@ -653,6 +778,30 @@ struct drm_gpuvm_ops { */ void (*op_free)(struct drm_gpuva_op *op); + /** + * @vm_bo_alloc: called when the &drm_gpuvm allocates + * a struct drm_gpuvm_bo + * + * Some drivers may want to embed struct drm_gpuvm_bo into driver + * specific structures. By implementing this callback drivers can + * allocate memory accordingly. + * + * This callback is optional. + */ + struct drm_gpuvm_bo *(*vm_bo_alloc)(void); + + /** + * @vm_bo_free: called when the &drm_gpuvm frees a + * struct drm_gpuvm_bo + * + * Some drivers may want to embed struct drm_gpuvm_bo into driver + * specific structures. By implementing this callback drivers can + * free the previously allocated memory accordingly. + * + * This callback is optional. 
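For reference, embedding the new structure through these optional callbacks could look like this sketch; struct my_vm_bo and the my_* helpers are hypothetical.

struct my_vm_bo {
	struct drm_gpuvm_bo base;
	/* driver specific per VM/BO state */
};

static struct drm_gpuvm_bo *
my_vm_bo_alloc(void)
{
	struct my_vm_bo *vm_bo = kzalloc(sizeof(*vm_bo), GFP_KERNEL);

	return vm_bo ? &vm_bo->base : NULL;
}

static void
my_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
{
	kfree(container_of(vm_bo, struct my_vm_bo, base));
}

static const struct drm_gpuvm_ops my_vm_ops = {
	.vm_bo_alloc = my_vm_bo_alloc,
	.vm_bo_free = my_vm_bo_free,
	/* op_alloc, op_free and the sm_step_* callbacks omitted */
};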
+ */ + void (*vm_bo_free)(struct drm_gpuvm_bo *vm_bo); + /** * @sm_step_map: called from &drm_gpuva_sm_map to finally insert the * mapping once all previous steps were completed From patchwork Wed Sep 6 21:47:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13375960 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D90EEEE14D9 for ; Wed, 6 Sep 2023 21:47:58 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2F3C910E716; Wed, 6 Sep 2023 21:47:57 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7A1FA10E723 for ; Wed, 6 Sep 2023 21:47:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1694036873; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=NVruo8+WaL18PY/f/PQckwTskKI4OUDG24U8LY+9fDU=; b=XIarJo6QbVBBB49af0GIkKjBqty9h2iEWC1uZFemF1MdHXxKSemntFne9WjRHgLXvyLp2Q duwaaRHeHPo+79qsAowzBs5jocl5ldrZfxiyK/MEIQ2tUUopyht7jyrfhwZzlWDaQQn4Lf B2Ghg6r+u5fmza5PBTPRb6e+0TXxRlU= Received: from mail-lf1-f69.google.com (mail-lf1-f69.google.com [209.85.167.69]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-557-1ggs-cmBOi-OyqzzUMMFug-1; Wed, 06 Sep 2023 17:47:52 -0400 X-MC-Unique: 1ggs-cmBOi-OyqzzUMMFug-1 Received: by mail-lf1-f69.google.com with SMTP id 2adb3069b0e04-500b95c69c3so222656e87.1 for ; Wed, 06 Sep 2023 14:47:52 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694036871; x=1694641671; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NVruo8+WaL18PY/f/PQckwTskKI4OUDG24U8LY+9fDU=; b=ZIx6Xilygl2fqfff/c1YU1txLVp6CqAm4K3uq/oOGAO85Zffr/tlZdI396P0ESN2Pt VgAZuLsfccPtX4YiANLF9Mdif5k/01IHXqmNm7x7pZgAnI4LBPL7t1LXzrHy2kZvI6f3 QuTOX5aI4c4id8WqgreKQr74UEYIh4X50WdTWCrhhF3rrisQrdsANEq3F1z5RYQ9t2jz 4wegtErfTiGLFMsSoJ9AnS9xi5IsAr2a+9BvdNyVLnI4Iafi1/NYyS7h1BFd93LemTpe cguOKvw41VwBtUavkhRKA9Kox1Iag2TqqBFG00E0VanUasg9Rttig4igkFZF/SrP+WKg YZcQ== X-Gm-Message-State: AOJu0YzVqdamPymn78LZrWATBwTZ4KLtkDQ48wutu67VJH8OsEC8hKnx hi61G2IIhgVJdA2kqRRpEFAbYZC4GHmicmW5kL8BCHdj8RDktShKegLv0h/bOGUep4NxAjcDUPV avWtvCdqJjE+XnwJB7rQPf1PfU80t X-Received: by 2002:ac2:4c13:0:b0:4f8:e4e9:499e with SMTP id t19-20020ac24c13000000b004f8e4e9499emr2967987lfq.12.1694036870757; Wed, 06 Sep 2023 14:47:50 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEsVd5gAa3cnUK5NrtIiKIU3+tQJyk99R/PMq6bCB6Q0TaGBLRvqxzkFAFwpLrDrSn+VJUoVA== X-Received: by 2002:ac2:4c13:0:b0:4f8:e4e9:499e with SMTP id t19-20020ac24c13000000b004f8e4e9499emr2967973lfq.12.1694036870396; Wed, 06 Sep 2023 14:47:50 -0700 (PDT) Received: from cassiopeiae.. 
([2a02:810d:4b3f:de9c:642:1aff:fe31:a19f]) by smtp.gmail.com with ESMTPSA id o21-20020a1709062e9500b009a13fdc139fsm9547503eji.183.2023.09.06.14.47.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 14:47:50 -0700 (PDT) From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com Subject: [PATCH drm-misc-next v2 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation Date: Wed, 6 Sep 2023 23:47:14 +0200 Message-ID: <20230906214723.4393-7-dakr@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230906214723.4393-1-dakr@redhat.com> References: <20230906214723.4393-1-dakr@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" So far the DRM GPUVA manager offers common infrastructure to track GPU VA allocations and mappings, generically connect GPU VA mappings to their backing buffers and perform more complex mapping operations on the GPU VA space. However, there are more design patterns commonly used by drivers, which can potentially be generalized in order to make the DRM GPUVA manager represent a basic GPU-VM implementation. In this context, this patch aims at generalizing the following elements. 1) Provide a common dma-resv for GEM objects not being used outside of this GPU-VM. 2) Provide tracking of external GEM objects (GEM objects which are shared with other GPU-VMs). 3) Provide functions to efficiently lock all GEM objects dma-resv the GPU-VM contains mappings of. 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings of, such that validation of evicted GEM objects is accelerated. 5) Provide some convinience functions for common patterns. Rather than being designed as a "framework", the target is to make all features appear as a collection of optional helper functions, such that drivers are free to make use of the DRM GPUVA managers basic functionality and opt-in for other features without setting any feature flags, just by making use of the corresponding functions. 
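As a sketch of how a driver's submit path might use these helpers (my_submit_lock() is illustrative; it assumes struct drm_gpuvm_exec carries the &drm_gpuvm pointer and the &drm_exec context as used by the code below, and fence installation is omitted):

static int
my_submit_lock(struct drm_gpuvm *gpuvm)
{
	struct drm_gpuvm_exec vm_exec = {
		.vm = gpuvm,
	};
	int ret;

	/* Locks the VM's common dma-resv plus the dma-resv of all external
	 * GEM objects the VM contains mappings of.
	 */
	ret = drm_gpuvm_exec_lock(&vm_exec, 1, true);
	if (ret)
		return ret;

	/* ... validate evicted GEM objects, add dma-fences ... */

	drm_exec_fini(&vm_exec.exec);
	return 0;
}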
Suggested-by: Matthew Brost Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/drm_gpuva_mgr.c | 335 +++++++++++++++++++++++++++++++- include/drm/drm_gpuva_mgr.h | 235 +++++++++++++++++++++- 2 files changed, 564 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c index da7a6e1aabe0..db4ef4fadc4b 100644 --- a/drivers/gpu/drm/drm_gpuva_mgr.c +++ b/drivers/gpu/drm/drm_gpuva_mgr.c @@ -678,6 +678,10 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm, gpuvm->rb.tree = RB_ROOT_CACHED; INIT_LIST_HEAD(&gpuvm->rb.list); + INIT_LIST_HEAD(&gpuvm->extobj.list); + INIT_LIST_HEAD(&gpuvm->evict.list); + spin_lock_init(&gpuvm->evict.lock); + drm_gpuva_check_overflow(start_offset, range); gpuvm->mm_start = start_offset; gpuvm->mm_range = range; @@ -719,10 +723,291 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), "GPUVA tree is not empty, potentially leaking memory.\n"); + WARN(!list_empty(&gpuvm->extobj.list), "Extobj list should be empty.\n"); + WARN(!list_empty(&gpuvm->evict.list), "Evict list should be empty.\n"); + drm_gem_private_object_fini(&gpuvm->d_obj); } EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); +/** + * drm_gpuvm_prepare_objects() - prepare all assoiciated BOs + * @gpuvm: the &drm_gpuvm + * @exec: the &drm_exec locking context + * @num_fences: the amount of &dma_fences to reserve + * + * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given + * &drm_gpuvm contains mappings of. + * + * Using this function directly, it is the drivers responsibility to call + * drm_exec_init() and drm_exec_fini() accordingly. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences) +{ + struct drm_gpuvm_bo *vm_bo; + int ret; + + list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) { + struct drm_gem_object *obj = vm_bo->obj; + + ret = drm_exec_prepare_obj(exec, obj, num_fences); + if (ret) + return ret; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_objects); + +/** + * drm_gpuvm_prepare_range() - prepare all BOs mapped within a given range + * @gpuvm: the &drm_gpuvm + * @exec: the &drm_exec locking context + * @addr: the start address within the VA space + * @range: the range to iterate within the VA space + * @num_fences: the amount of &dma_fences to reserve + * + * Calls drm_exec_prepare_obj() for all &drm_gem_objects mapped between @addr + * and @addr + @range. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec, + u64 addr, u64 range, unsigned int num_fences) +{ + struct drm_gpuva *va; + u64 end = addr + range; + int ret; + + drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) { + struct drm_gem_object *obj = va->gem.obj; + + ret = drm_exec_prepare_obj(exec, obj, num_fences); + if (ret) + return ret; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range); + +/** + * drm_gpuvm_exec_lock() - lock all dma-resv of all assoiciated BOs + * @vm_exec: the &drm_gpuvm_exec abstraction + * @num_fences: the amount of &dma_fences to reserve + * @interruptible: sleep interruptible if waiting + * + * Acquires all dma-resv locks of all &drm_gem_objects the given + * &drm_gpuvm contains mappings of. 
+ * + * Addionally, when calling this function with struct drm_gpuvm_exec::extra + * being set the driver receives the given @fn callback to lock additional + * dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers + * would call drm_exec_prepare_obj() from within this callback. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec, + unsigned int num_fences, + bool interruptible) +{ + struct drm_gpuvm *gpuvm = vm_exec->vm; + struct drm_exec *exec = &vm_exec->exec; + uint32_t flags; + int ret; + + flags = interruptible ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0 | + DRM_EXEC_IGNORE_DUPLICATES; + + drm_exec_init(exec, flags); + + drm_exec_until_all_locked(exec) { + ret = drm_gpuvm_prepare_vm(gpuvm, exec, num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + + ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + + if (vm_exec->extra.fn) { + ret = vm_exec->extra.fn(vm_exec, num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + } + } + + return 0; + +err: + drm_exec_fini(exec); + return ret; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock); + +static int +fn_lock_array(struct drm_gpuvm_exec *vm_exec, unsigned int num_fences) +{ + struct { + struct drm_gem_object **objs; + unsigned int num_objs; + } *args = vm_exec->extra.priv; + + return drm_exec_prepare_array(&vm_exec->exec, args->objs, + args->num_objs, num_fences); +} + +/** + * drm_gpuvm_exec_lock_array() - lock all dma-resv of all assoiciated BOs + * @vm_exec: the &drm_gpuvm_exec abstraction + * @objs: additional &drm_gem_objects to lock + * @num_objs: the number of additional &drm_gem_objects to lock + * @num_fences: the amount of &dma_fences to reserve + * @interruptible: sleep interruptible if waiting + * + * Acquires all dma-resv locks of all &drm_gem_objects the given &drm_gpuvm + * contains mappings of, plus the ones given through @objs. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec, + struct drm_gem_object **objs, + unsigned int num_objs, + unsigned int num_fences, + bool interruptible) +{ + struct { + struct drm_gem_object **objs; + unsigned int num_objs; + } args; + + args.objs = objs; + args.num_objs = num_objs; + + vm_exec->extra.fn = fn_lock_array; + vm_exec->extra.priv = &args; + + return drm_gpuvm_exec_lock(vm_exec, num_fences, interruptible); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock_array); + +/** + * drm_gpuvm_exec_lock_range() - prepare all BOs mapped within a given range + * @vm_exec: the &drm_gpuvm_exec abstraction + * @addr: the start address within the VA space + * @range: the range to iterate within the VA space + * @num_fences: the amount of &dma_fences to reserve + * @interruptible: sleep interruptible if waiting + * + * Acquires all dma-resv locks of all &drm_gem_objects mapped between @addr and + * @addr + @range. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec, + u64 addr, u64 range, + unsigned int num_fences, + bool interruptible) +{ + struct drm_gpuvm *gpuvm = vm_exec->vm; + struct drm_exec *exec = &vm_exec->exec; + uint32_t flags; + int ret; + + flags = interruptible ? 
DRM_EXEC_INTERRUPTIBLE_WAIT : 0 | + DRM_EXEC_IGNORE_DUPLICATES; + + drm_exec_init(exec, flags); + + drm_exec_until_all_locked(exec) { + ret = drm_gpuvm_prepare_range(gpuvm, exec, addr, range, + num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + } + + return ret; + +err: + drm_exec_fini(exec); + return ret; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock_range); + +/** + * drm_gpuvm_validate() - validate all BOs marked as evicted + * @gpuvm: the &drm_gpuvm to validate evicted BOs + * + * Calls the &drm_gpuvm_ops.bo_validate callback for all evicted buffer + * objects being mapped in the given &drm_gpuvm. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_validate(struct drm_gpuvm *gpuvm) +{ + const struct drm_gpuvm_ops *ops = gpuvm->ops; + struct drm_gpuvm_bo *vm_bo; + int ret; + + if (unlikely(!ops || !ops->bo_validate)) + return -ENOTSUPP; + + /* At this point we should hold all dma-resv locks of all GEM objects + * associated with this GPU-VM, hence it is safe to walk the list. + */ + list_for_each_entry(vm_bo, &gpuvm->evict.list, list.entry.evict) { + dma_resv_assert_held(vm_bo->obj->resv); + + ret = ops->bo_validate(vm_bo->obj); + if (ret) + return ret; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_validate); + +/** + * drm_gpuvm_resv_add_fence - add fence to private and all extobj + * dma-resv + * @gpuvm: the &drm_gpuvm to add a fence to + * @fence: fence to add + * @private_usage: private dma-resv usage + * @extobj_usage: extobj dma-resv usage + */ +void +drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage) +{ + struct drm_gem_object *obj; + unsigned long index; + + drm_exec_for_each_locked_object(exec, index, obj) { + dma_resv_assert_held(obj->resv); + dma_resv_add_fence(obj->resv, fence, + drm_gpuvm_is_extobj(gpuvm, obj) ? + private_usage : extobj_usage); + } +} +EXPORT_SYMBOL_GPL(drm_gpuvm_resv_add_fence); + /** * drm_gpuvm_bo_create() - create a new instance of struct drm_gpuvm_bo * @gpuvm: The &drm_gpuvm the @obj is mapped in. @@ -754,6 +1039,8 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, kref_init(&vm_bo->kref); INIT_LIST_HEAD(&vm_bo->list.gpuva); INIT_LIST_HEAD(&vm_bo->list.entry.gem); + INIT_LIST_HEAD(&vm_bo->list.entry.extobj); + INIT_LIST_HEAD(&vm_bo->list.entry.evict); drm_gem_object_get(obj); @@ -878,6 +1165,46 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm *gpuvm, } EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_prealloc); +/** + * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a + * &drm_gpuvms evicted list + * @obj: the &drm_gem_object to add or remove + * @evict: indicates whether the object is evicted + * + * Adds a &drm_gem_object to or removes it from all &drm_gpuvms evicted + * list containing a mapping of this &drm_gem_object. + */ +void +drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict) +{ + struct drm_gpuvm_bo *vm_bo; + + /* Required for iterating the GEMs GPUVA GEM list. If no driver specific + * lock has been set, the list is protected with the GEMs dma-resv lock. + */ + drm_gem_gpuva_assert_lock_held(obj); + + /* Required to protect the GPUVA managers evict list against concurrent + * access through drm_gpuvm_validate(). Concurrent insertions to + * the evict list through different GEM object evictions are protected + * by the GPUVA managers evict lock. 
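+	 *
+	 * Drivers typically call drm_gpuvm_bo_evict() from their move or
+	 * eviction notify path, where the BO's dma-resv is already held; the
+	 * nouveau_bo_move() change in patch 7/7 of this series is an example.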
+ */ + dma_resv_assert_held(obj->resv); + + drm_gem_for_each_gpuva_gem(vm_bo, obj) { + struct drm_gpuvm *gpuvm = vm_bo->vm; + + spin_lock(&gpuvm->evict.lock); + if (evict) + list_add_tail(&vm_bo->list.entry.evict, + &gpuvm->evict.list); + else + list_del_init(&vm_bo->list.entry.evict); + spin_unlock(&gpuvm->evict.lock); + } +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict); + static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va) @@ -1032,10 +1359,14 @@ drm_gpuva_unlink(struct drm_gpuva *va) list_del_init(&va->gem.entry); /* This is the last mapping being unlinked for this GEM object, hence - * also remove the VM_BO from the GEM's gpuva list. + * also remove the VM_BO from the GEM's gpuva list as well as from the + * external and evicted object lists. */ - if (list_empty(&vm_bo->list.gpuva)) + if (list_empty(&vm_bo->list.gpuva)) { list_del_init(&vm_bo->list.entry.gem); + list_del_init(&vm_bo->list.entry.extobj); + list_del_init(&vm_bo->list.entry.evict); + } drm_gpuvm_bo_put(vm_bo); } EXPORT_SYMBOL_GPL(drm_gpuva_unlink); diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h index f8f29c1c858d..805982370be7 100644 --- a/include/drm/drm_gpuva_mgr.h +++ b/include/drm/drm_gpuva_mgr.h @@ -26,10 +26,12 @@ */ #include +#include #include #include #include +#include struct drm_gpuvm; struct drm_gpuvm_bo; @@ -253,6 +255,34 @@ struct drm_gpuvm { * space */ struct dma_resv *resv; + + /** + * @extobj: structure holding the extobj list + */ + struct { + /** + * @list: &list_head storing &drm_gpuvm_bos serving as + * external object + */ + struct list_head list; + } extobj; + + /** + * @evict: structure holding the evict list and evict list lock + */ + struct { + /** + * @list: &list_head storing &drm_gpuvm_bos currently being + * evicted + */ + struct list_head list; + + /** + * @lock: spinlock to protect the evict list against concurrent + * insertion / removal of different &drm_gpuvm_bos + */ + spinlock_t lock; + } evict; }; void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm, @@ -262,6 +292,21 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm, const struct drm_gpuvm_ops *ops); void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm); +/** + * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object is an + * external object + * @gpuvm: the &drm_gpuvm to check + * @obj: the &drm_gem_object to check + * + * Returns: true if the &drm_gem_object &dma_resv differs from the + * &drm_gpuvms &dma_resv, false otherwise + */ +static inline bool drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + return obj && obj->resv != gpuvm->resv; +} + static inline struct drm_gpuva * __drm_gpuva_next(struct drm_gpuva *va) { @@ -340,6 +385,128 @@ __drm_gpuva_next(struct drm_gpuva *va) #define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \ list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry) +/** + * struct drm_gpuvm_exec - &drm_gpuvm abstraction of &drm_exec + * + * This structure should be created on the stack as &drm_exec should be. + * + * Optionally, @extra can be set in order to lock additional &drm_gem_objects. + */ +struct drm_gpuvm_exec { + /** + * @exec: the &drm_exec structure + */ + struct drm_exec exec; + + /** + * @vm: the &drm_gpuvm to lock its DMA reservations + */ + struct drm_gpuvm *vm; + + /** + * @extra: Callback and corresponding private data for the driver to + * lock arbitrary additional &drm_gem_objects. 
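+	 *
+	 * drm_gpuvm_exec_lock_array() uses this callback internally to lock an
+	 * array of additional GEM objects; the nouveau bind path in patch 7/7
+	 * of this series (bind_lock_extra()) is another user.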
+ */ + struct { + /** + * @fn: The driver callback to lock additional &drm_gem_objects. + */ + int (*fn)(struct drm_gpuvm_exec *vm_exec, + unsigned int num_fences); + + /** + * @priv: driver private data for the @fn callback + */ + void *priv; + } extra; +}; + +/** + * drm_gpuvm_prepare_vm() - prepare the GPUVM's common dma-resv + * @gpuvm: the &drm_gpuvm + * @exec: the &drm_exec context + * @num_fences: the amount of &dma_fences to reserve + * + * Calls drm_exec_prepare_obj() for the GPUVM's dummy &drm_gem_object. + * + * Using this function directly, it is the driver's responsibility to call + * drm_exec_init() and drm_exec_fini() accordingly. + * + * Returns: 0 on success, negative error code on failure. + */ +static inline int +drm_gpuvm_prepare_vm(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences) +{ + return drm_exec_prepare_obj(exec, &gpuvm->d_obj, num_fences); +} + +int drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences); + +int drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + u64 addr, u64 range, + unsigned int num_fences); + +int drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec, + unsigned int num_fences, + bool interruptible); + +int drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec, + struct drm_gem_object **objs, + unsigned int num_objs, + unsigned int num_fences, + bool interruptible); + +int drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec, + u64 addr, u64 range, + unsigned int num_fences, + bool interruptible); + +/** + * drm_gpuvm_exec_unlock() - unlock all dma-resv of all associated BOs + * @vm_exec: the &drm_gpuvm_exec abstraction + * + * Releases all dma-resv locks of all &drm_gem_objects previously acquired + * through drm_gpuvm_exec_lock() or its variants. + */ +static inline void +drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec) +{ + drm_exec_fini(&vm_exec->exec); +} + +int drm_gpuvm_validate(struct drm_gpuvm *gpuvm); +void drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage); + +/** + * drm_gpuvm_exec_resv_add_fence() - add fence to private and all extobj dma-resv + * @vm_exec: the &drm_gpuvm_exec abstraction + * @fence: fence to add + * @private_usage: private dma-resv usage + * @extobj_usage: extobj dma-resv usage + * + * See drm_gpuvm_resv_add_fence(). + */ +static inline void +drm_gpuvm_exec_resv_add_fence(struct drm_gpuvm_exec *vm_exec, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage) +{ + drm_gpuvm_resv_add_fence(vm_exec->vm, &vm_exec->exec, fence, + private_usage, extobj_usage); +} + /** * struct drm_gpuvm_bo - structure representing a &drm_gpuvm and * &drm_gem_object combination @@ -392,6 +559,18 @@ struct drm_gpuvm_bo { * gpuva list. */ struct list_head gem; + + /** + * @extobj: List entry to attach to the &drm_gpuvms + * extobj list. + */ + struct list_head extobj; + + /** + * @evict: List entry to attach to the &drm_gpuvms evict + * list.
+ */ + struct list_head evict; } entry; } list; }; @@ -404,15 +583,52 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, struct drm_gpuvm_bo *__vm_bo); -struct drm_gpuvm_bo * -drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, - struct drm_gem_object *obj); - struct drm_gpuvm_bo * drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj); void drm_gpuvm_bo_destroy(struct kref *kref); +struct drm_gpuvm_bo * +drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); + +void drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict); + +/** + * drm_gpuvm_bo_extobj_add() - adds the &drm_gpuvm_bo to its &drm_gpuvm's + * extobj list + * @vm_bo: The &drm_gpuvm_bo to add to its &drm_gpuvm's the extobj list. + * + * Adds the given @vm_bo to its &drm_gpuvm's extobj list if not on the list + * already and if the corresponding &drm_gem_object is an external object, + * actually. + */ +static inline void +drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo) +{ + struct drm_gpuvm *gpuvm = vm_bo->vm; + + if (drm_gpuvm_is_extobj(gpuvm, vm_bo->obj) && + list_empty(&vm_bo->list.entry.extobj)) + list_add_tail(&vm_bo->list.entry.extobj, &gpuvm->extobj.list); +} + +/** + * drm_gpuvm_bo_extobj_remove() - removes the &drm_gpuvm_bo from its + * &drm_gpuvm's extobj list + * @vm_bo: The &drm_gpuvm_bo to remove from its &drm_gpuvm's the extobj list. + * + * Removes the given @vm_bo from its &drm_gpuvm's extobj list. Drivers should + * only call that from an unwind path. Typically the extobj is removed from the + * extobj list through drm_gpuva_unlink(). + */ +static inline void +drm_gpuvm_bo_extobj_remove(struct drm_gpuvm_bo *vm_bo) +{ + if (likely(!list_empty(&vm_bo->list.entry.extobj))) + list_del(&vm_bo->list.entry.extobj); +} + /** * drm_gpuvm_bo_get() - acquire a struct drm_gpuvm_bo reference * @vm_bo: the &drm_gpuvm_bo to acquire the reference of @@ -845,6 +1061,17 @@ struct drm_gpuvm_ops { * used. */ int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv); + + /** + * @bo_validate: called from drm_gpuvm_validate() + * + * Drivers receive this callback for every evicted &drm_gem_object being + * mapped in the corresponding &drm_gpuvm. + * + * Typically, drivers would call their driver specific variant of + * ttm_bo_validate() from within this callback. 
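+	 *
+	 * See nouveau_uvmm_bo_validate() added in patch 7/7 of this series for
+	 * an example that simply wraps nouveau_bo_validate().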
+ */ + int (*bo_validate)(struct drm_gem_object *obj); }; int drm_gpuva_sm_map(struct drm_gpuvm *gpuvm, void *priv, From patchwork Wed Sep 6 21:47:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13375961 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 52FE6EE14D6 for ; Wed, 6 Sep 2023 21:48:02 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 642AC10E723; Wed, 6 Sep 2023 21:48:01 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 9EBDE10E726 for ; Wed, 6 Sep 2023 21:47:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1694036876; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=xRVbCNfkMxg4yFkEwaXcKo3uGpv3jnA532GZgoIatuc=; b=Rrwlz2Xf2b/VpA1OgtHV050UzVZ1fgJ2z4ERBpkjNowy97Zzgw7prF/xMjPRAm/gN6JKif sFQ0TEii/9PUp8oGE/19683ZVEF/iJlM8zFbN4z7grpDbdHbh3J8VyfdvlUAbs01tt9CbH sYfSLFpi37vMyCuG0dG8MQ5yjdJxCEg= Received: from mail-ej1-f72.google.com (mail-ej1-f72.google.com [209.85.218.72]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-654-Df4Rq7_aNx6bx-W8qv22eQ-1; Wed, 06 Sep 2023 17:47:55 -0400 X-MC-Unique: Df4Rq7_aNx6bx-W8qv22eQ-1 Received: by mail-ej1-f72.google.com with SMTP id a640c23a62f3a-94a348facbbso11952166b.1 for ; Wed, 06 Sep 2023 14:47:55 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694036874; x=1694641674; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xRVbCNfkMxg4yFkEwaXcKo3uGpv3jnA532GZgoIatuc=; b=ZXrCH/N6fFcAeo1KkYzLqBiwZuu6PIviOzQQe4zw0AQaZ0HvB4e/IQYaVkqppa9J9F TEKxuJc3NpyUHxM8owpbSeFFophMjgoC9M5PNcGEQnhaESNNorK/MGoXFL0tuTikMkpf UJUsbNLpnPFoXR4QSscgenpCZvo/Nd8XgluTKGamz5rKeFfzPIHkL5MLseUiyxOy3pdG RUQMy/756qsng6yei0SXY7stNBySIMhecdX2XVyK1BXeVXXzBEav8OEYFUVjLsumkWA9 xZxIpv5Tor2/HupSitTOGfVI1Ods6OvUIZpv5UTI7pd0nsJ1LmkccZDEBwJn38nhov9d HJrg== X-Gm-Message-State: AOJu0Ywex0KCJALbVp/QPuqbK8GY8N6mL/99+D/XsRXgQGluDVHfIv8r 1BVy9tjXvNXtLHYGJr4Y5L67NcYEhqa4MIuEXKPuHfSqytgdI29ywZc6t6oxwIpUwysZTcG1ASW 3ykUodo6+vnZ0ALG++SbmcDEJ1TVf X-Received: by 2002:a17:907:1de3:b0:9a5:7926:e391 with SMTP id og35-20020a1709071de300b009a57926e391mr3258307ejc.10.1694036874421; Wed, 06 Sep 2023 14:47:54 -0700 (PDT) X-Google-Smtp-Source: AGHT+IH1oZogly3PT3zBfMi6akWFbazQRsKx5+KCI2EQQPP9IWQwV3vbUXXT2toISTU0b1ENyShb5w== X-Received: by 2002:a17:907:1de3:b0:9a5:7926:e391 with SMTP id og35-20020a1709071de300b009a57926e391mr3258299ejc.10.1694036874194; Wed, 06 Sep 2023 14:47:54 -0700 (PDT) Received: from cassiopeiae.. 
([2a02:810d:4b3f:de9c:642:1aff:fe31:a19f]) by smtp.gmail.com with ESMTPSA id x24-20020a170906805800b009894b476310sm9494565ejw.163.2023.09.06.14.47.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 14:47:53 -0700 (PDT) From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com Subject: [PATCH drm-misc-next v2 7/7] drm/nouveau: GPUVM dma-resv/extobj handling, GEM validation Date: Wed, 6 Sep 2023 23:47:15 +0200 Message-ID: <20230906214723.4393-8-dakr@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230906214723.4393-1-dakr@redhat.com> References: <20230906214723.4393-1-dakr@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Make use of the DRM GPUVA managers GPU-VM common dma-resv, external GEM object tracking, dma-resv locking, evicted GEM object tracking and validation features. Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/nouveau/nouveau_bo.c | 4 +- drivers/gpu/drm/nouveau/nouveau_exec.c | 52 +++---------- drivers/gpu/drm/nouveau/nouveau_exec.h | 4 - drivers/gpu/drm/nouveau/nouveau_gem.c | 4 +- drivers/gpu/drm/nouveau/nouveau_sched.h | 4 +- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 99 ++++++++++++++++--------- 6 files changed, 82 insertions(+), 85 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 19cab37ac69c..18c91993dae1 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1060,17 +1060,18 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, { struct nouveau_drm *drm = nouveau_bdev(bo->bdev); struct nouveau_bo *nvbo = nouveau_bo(bo); + struct drm_gem_object *obj = &bo->base; struct ttm_resource *old_reg = bo->resource; struct nouveau_drm_tile *new_tile = NULL; int ret = 0; - if (new_reg->mem_type == TTM_PL_TT) { ret = nouveau_ttm_tt_bind(bo->bdev, bo->ttm, new_reg); if (ret) return ret; } + drm_gpuvm_bo_evict(obj, evict); nouveau_bo_move_ntfy(bo, new_reg); ret = ttm_bo_wait_ctx(bo, ctx); if (ret) @@ -1135,6 +1136,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, out_ntfy: if (ret) { nouveau_bo_move_ntfy(bo, bo->resource); + drm_gpuvm_bo_evict(obj, !evict); } return ret; } diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c index b4239af29e5a..5f86043046f5 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.c +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c @@ -1,7 +1,5 @@ // SPDX-License-Identifier: MIT -#include - #include "nouveau_drv.h" #include "nouveau_gem.h" #include "nouveau_mem.h" @@ -91,9 +89,6 @@ nouveau_exec_job_submit(struct nouveau_job *job) struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); struct nouveau_cli *cli = job->cli; struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli); - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; int ret; ret = nouveau_fence_new(&exec_job->fence); @@ -101,52 
+96,29 @@ nouveau_exec_job_submit(struct nouveau_job *job) return ret; nouveau_uvmm_lock(uvmm); - drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT | - DRM_EXEC_IGNORE_DUPLICATES); - drm_exec_until_all_locked(exec) { - struct drm_gpuva *va; - - drm_gpuvm_for_each_va(va, &uvmm->base) { - if (unlikely(va == &uvmm->base.kernel_alloc_node)) - continue; - - ret = drm_exec_prepare_obj(exec, va->gem.obj, 1); - drm_exec_retry_on_contention(exec); - if (ret) - goto err_uvmm_unlock; - } + job->vm_exec.vm = &uvmm->base; + ret = drm_gpuvm_exec_lock(&job->vm_exec, 1, false); + if (ret) { + nouveau_uvmm_unlock(uvmm); + return ret; } nouveau_uvmm_unlock(uvmm); - drm_exec_for_each_locked_object(exec, index, obj) { - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - - ret = nouveau_bo_validate(nvbo, true, false); - if (ret) - goto err_exec_fini; + ret = drm_gpuvm_validate(&uvmm->base); + if (ret) { + drm_gpuvm_exec_unlock(&job->vm_exec); + return ret; } return 0; - -err_uvmm_unlock: - nouveau_uvmm_unlock(uvmm); -err_exec_fini: - drm_exec_fini(exec); - return ret; - } static void nouveau_exec_job_armed_submit(struct nouveau_job *job) { - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; - - drm_exec_for_each_locked_object(exec, index, obj) - dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage); - - drm_exec_fini(exec); + drm_gpuvm_exec_resv_add_fence(&job->vm_exec, job->done_fence, + job->resv_usage, job->resv_usage); + drm_gpuvm_exec_unlock(&job->vm_exec); } static struct dma_fence * diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.h b/drivers/gpu/drm/nouveau/nouveau_exec.h index 778cacd90f65..b815de2428f3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.h +++ b/drivers/gpu/drm/nouveau/nouveau_exec.h @@ -3,16 +3,12 @@ #ifndef __NOUVEAU_EXEC_H__ #define __NOUVEAU_EXEC_H__ -#include - #include "nouveau_drv.h" #include "nouveau_sched.h" struct nouveau_exec_job_args { struct drm_file *file_priv; struct nouveau_sched_entity *sched_entity; - - struct drm_exec exec; struct nouveau_channel *chan; struct { diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index c0b10d8d3d03..b89b2494af98 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -111,7 +111,7 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv) if (vmm->vmm.object.oclass < NVIF_CLASS_VMM_NV50) return 0; - if (nvbo->no_share && uvmm && &uvmm->resv != nvbo->bo.base.resv) + if (nvbo->no_share && uvmm && uvmm->base.resv != nvbo->bo.base.resv) return -EPERM; ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); @@ -245,7 +245,7 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain, if (unlikely(!uvmm)) return -EINVAL; - resv = &uvmm->resv; + resv = uvmm->base.resv; } if (!(domain & (NOUVEAU_GEM_DOMAIN_VRAM | NOUVEAU_GEM_DOMAIN_GART))) diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.h b/drivers/gpu/drm/nouveau/nouveau_sched.h index 27ac19792597..d1914fbf007a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.h +++ b/drivers/gpu/drm/nouveau/nouveau_sched.h @@ -5,7 +5,7 @@ #include -#include +#include #include #include "nouveau_drv.h" @@ -54,7 +54,7 @@ struct nouveau_job { struct drm_file *file_priv; struct nouveau_cli *cli; - struct drm_exec exec; + struct drm_gpuvm_exec vm_exec; enum dma_resv_usage resv_usage; struct dma_fence *done_fence; diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index a93483a4ceb5..231e3de94214 
100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -437,8 +437,9 @@ nouveau_uvma_region_complete(struct nouveau_uvma_region *reg) static void op_map_prepare_unwind(struct nouveau_uvma *uvma) { + struct drm_gpuva *va = &uvma->va; nouveau_uvma_gem_put(uvma); - drm_gpuva_remove(&uvma->va); + drm_gpuva_remove(va); nouveau_uvma_free(uvma); } @@ -467,6 +468,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm, break; case DRM_GPUVA_OP_REMAP: { struct drm_gpuva_op_remap *r = &op->remap; + struct drm_gpuva *va = r->unmap->va; if (r->next) op_map_prepare_unwind(new->next); @@ -474,7 +476,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm, if (r->prev) op_map_prepare_unwind(new->prev); - op_unmap_prepare_unwind(r->unmap->va); + op_unmap_prepare_unwind(va); break; } case DRM_GPUVA_OP_UNMAP: @@ -633,6 +635,7 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm, goto unwind; } } + break; } case DRM_GPUVA_OP_REMAP: { @@ -1151,13 +1154,44 @@ bind_link_gpuvas(struct bind_job_op *bop) } } +static int +bind_lock_extra(struct drm_gpuvm_exec *vm_exec, unsigned int num_fences) +{ + struct nouveau_uvmm_bind_job *bind_job = vm_exec->extra.priv; + struct drm_exec *exec = &vm_exec->exec; + struct bind_job_op *op; + int ret; + + list_for_each_op(op, &bind_job->ops) { + struct drm_gpuva_op *va_op; + + if (IS_ERR_OR_NULL(op->ops)) + continue; + + drm_gpuva_for_each_op(va_op, op->ops) { + struct drm_gem_object *obj = op_gem_obj(va_op); + + if (unlikely(!obj)) + continue; + + if (va_op->op != DRM_GPUVA_OP_UNMAP) + continue; + + ret = drm_exec_prepare_obj(exec, obj, num_fences); + if (ret) + return ret; + } + } + + return 0; +} + static int nouveau_uvmm_bind_job_submit(struct nouveau_job *job) { struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job); struct nouveau_sched_entity *entity = job->entity; - struct drm_exec *exec = &job->exec; struct bind_job_op *op; int ret; @@ -1201,6 +1235,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) * unwind all GPU VA space changes on failure. 
*/ nouveau_uvmm_lock(uvmm); + list_for_each_op(op, &bind_job->ops) { switch (op->op) { case OP_MAP_SPARSE: @@ -1286,6 +1321,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) goto unwind_continue; } + drm_gpuvm_bo_extobj_add(op->gem.vm_bo); break; } case OP_UNMAP: @@ -1312,30 +1348,13 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) } } - drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT | - DRM_EXEC_IGNORE_DUPLICATES); - drm_exec_until_all_locked(exec) { - list_for_each_op(op, &bind_job->ops) { - struct drm_gpuva_op *va_op; - - if (IS_ERR_OR_NULL(op->ops)) - continue; + job->vm_exec.vm = &uvmm->base; + job->vm_exec.extra.fn = bind_lock_extra; + job->vm_exec.extra.priv = bind_job; - drm_gpuva_for_each_op(va_op, op->ops) { - struct drm_gem_object *obj = op_gem_obj(va_op); - - if (unlikely(!obj)) - continue; - - ret = drm_exec_prepare_obj(exec, obj, 1); - drm_exec_retry_on_contention(exec); - if (ret) { - op = list_last_op(&bind_job->ops); - goto unwind; - } - } - } - } + ret = drm_gpuvm_exec_lock(&job->vm_exec, 1, false); + if (ret) + goto unwind_continue; list_for_each_op(op, &bind_job->ops) { struct drm_gpuva_op *va_op; @@ -1418,6 +1437,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) op->ops); break; case OP_MAP: + drm_gpuvm_bo_extobj_remove(op->gem.vm_bo); nouveau_uvmm_sm_map_prepare_unwind(uvmm, &op->new, op->ops, op->va.addr, @@ -1435,21 +1455,16 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) } nouveau_uvmm_unlock(uvmm); - drm_exec_fini(exec); + drm_gpuvm_exec_unlock(&job->vm_exec); return ret; } static void nouveau_uvmm_bind_job_armed_submit(struct nouveau_job *job) { - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; - - drm_exec_for_each_locked_object(exec, index, obj) - dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage); - - drm_exec_fini(exec); + drm_gpuvm_exec_resv_add_fence(&job->vm_exec, job->done_fence, + job->resv_usage, job->resv_usage); + drm_gpuvm_exec_unlock(&job->vm_exec); } static struct dma_fence * @@ -1841,6 +1856,18 @@ nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo) } } +static int +nouveau_uvmm_bo_validate(struct drm_gem_object *obj) +{ + struct nouveau_bo *nvbo = nouveau_gem_object(obj); + + return nouveau_bo_validate(nvbo, true, false); +} + +static const struct drm_gpuvm_ops gpuvm_ops = { + .bo_validate = nouveau_uvmm_bo_validate, +}; + int nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, u64 kernel_managed_addr, u64 kernel_managed_size) @@ -1877,7 +1904,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, NOUVEAU_VA_SPACE_START, NOUVEAU_VA_SPACE_END, kernel_managed_addr, kernel_managed_size, - NULL); + &gpuvm_ops); ret = nvif_vmm_ctor(&cli->mmu, "uvmm", cli->vmm.vmm.object.oclass, RAW,