From patchwork Tue Feb 28 08:33:58 2023
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13154518
From: Christian König <christian.koenig@amd.com>
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com
Subject: [PATCH 1/9] drm: execution context for GEM buffers v3
Date: Tue, 28 Feb 2023 09:33:58 +0100
Message-Id: <20230228083406.1720795-2-christian.koenig@amd.com>
In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com>
References: <20230228083406.1720795-1-christian.koenig@amd.com>

This adds the infrastructure for an execution context for GEM buffers
which is similar to the existing TTM execbuf util and intended to replace
it in the long term.

The basic functionality is that it abstracts the necessary loop to lock
many different GEM buffers with automated deadlock and duplicate handling.

v2: drop xarray and use a dynamically resized array instead, the locking
    overhead is unnecessary and measurable.
v3: drop duplicate tracking, radeon is really the only one needing that.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 Documentation/gpu/drm-mm.rst |  12 ++
 drivers/gpu/drm/Kconfig      |   6 +
 drivers/gpu/drm/Makefile     |   2 +
 drivers/gpu/drm/drm_exec.c   | 249 +++++++++++++++++++++++++++++++++++
 include/drm/drm_exec.h       | 115 ++++++++++++++++
 5 files changed, 384 insertions(+)
 create mode 100644 drivers/gpu/drm/drm_exec.c
 create mode 100644 include/drm/drm_exec.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index a79fd3549ff8..a52e6f4117d6 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -493,6 +493,18 @@ DRM Sync Objects
 .. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
    :export:
 
+DRM Execution context
+=====================
+
+.. kernel-doc:: drivers/gpu/drm/drm_exec.c
+   :doc: Overview
+
+.. kernel-doc:: include/drm/drm_exec.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_exec.c
+   :export:
+
 GPU Scheduler
 =============
 
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 17d252dc25e2..84a5fc28c48d 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -200,6 +200,12 @@ config DRM_TTM
 	  GPU memory types. Will be enabled automatically if a device driver
 	  uses it.
 
+config DRM_EXEC
+	tristate
+	depends on DRM
+	help
+	  Execution context for command submissions
+
 config DRM_BUDDY
 	tristate
 	depends on DRM

diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index ab4460fcd63f..d40defbb0347 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -78,6 +78,8 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
 #
 # Memory-management helpers
 #
+#
+obj-$(CONFIG_DRM_EXEC) += drm_exec.o
 
 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o

diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c
new file mode 100644
index 000000000000..df546cc5a227
--- /dev/null
+++ b/drivers/gpu/drm/drm_exec.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#include <drm/drm_exec.h>
+#include <drm/drm_gem.h>
+#include <linux/dma-resv.h>
+
+/**
+ * DOC: Overview
+ *
+ * This component mainly abstracts the retry loop necessary for locking
+ * multiple GEM objects while preparing hardware operations (e.g. command
+ * submissions, page table updates etc.).
+ *
+ * If a contention is detected while locking a GEM object the cleanup procedure
+ * unlocks all previously locked GEM objects and locks the contended one first
+ * before locking any further objects.
+ *
+ * After an object is locked fence slots can optionally be reserved on the
+ * dma_resv object inside the GEM object.
+ *
+ * A typical usage pattern should look like this::
+ *
+ *	struct drm_gem_object *obj;
+ *	struct drm_exec exec;
+ *	unsigned long index;
+ *	int ret;
+ *
+ *	drm_exec_init(&exec, true);
+ *	drm_exec_while_not_all_locked(&exec) {
+ *		ret = drm_exec_prepare_obj(&exec, boA, 1);
+ *		drm_exec_continue_on_contention(&exec);
+ *		if (ret)
+ *			goto error;
+ *
+ *		ret = drm_exec_prepare_obj(&exec, boB, 1);
+ *		drm_exec_continue_on_contention(&exec);
+ *		if (ret)
+ *			goto error;
+ *	}
+ *
+ *	drm_exec_for_each_locked_object(&exec, index, obj) {
+ *		dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);
+ *		...
+ *	}
+ *	drm_exec_fini(&exec);
+ *
+ * See struct drm_exec for more details.
+ */
+
+/* Dummy value used to initially enter the retry loop */
+#define DRM_EXEC_DUMMY (void*)~0
+
+/* Unlock all objects and drop references */
+static void drm_exec_unlock_all(struct drm_exec *exec)
+{
+	struct drm_gem_object *obj;
+	unsigned long index;
+
+	drm_exec_for_each_locked_object(exec, index, obj) {
+		dma_resv_unlock(obj->resv);
+		drm_gem_object_put(obj);
+	}
+
+	if (exec->prelocked) {
+		dma_resv_unlock(exec->prelocked->resv);
+		drm_gem_object_put(exec->prelocked);
+		exec->prelocked = NULL;
+	}
+}
+
+/**
+ * drm_exec_init - initialize a drm_exec object
+ * @exec: the drm_exec object to initialize
+ * @interruptible: if locks should be acquired interruptibly
+ *
+ * Initialize the object and make sure that we can track locked objects.
+ */
+void drm_exec_init(struct drm_exec *exec, bool interruptible)
+{
+	exec->interruptible = interruptible;
+	exec->objects = kmalloc(PAGE_SIZE, GFP_KERNEL);
+
+	/* If allocation here fails, just delay that till the first use */
+	exec->max_objects = exec->objects ? PAGE_SIZE / sizeof(void *) : 0;
+	exec->num_objects = 0;
+	exec->contended = DRM_EXEC_DUMMY;
+	exec->prelocked = NULL;
+}
+EXPORT_SYMBOL(drm_exec_init);
+
+/**
+ * drm_exec_fini - finalize a drm_exec object
+ * @exec: the drm_exec object to finalize
+ *
+ * Unlock all locked objects, drop the references to objects and free all
+ * memory used for tracking the state.
+ */
+void drm_exec_fini(struct drm_exec *exec)
+{
+	drm_exec_unlock_all(exec);
+	kvfree(exec->objects);
+	if (exec->contended != DRM_EXEC_DUMMY) {
+		drm_gem_object_put(exec->contended);
+		ww_acquire_fini(&exec->ticket);
+	}
+}
+EXPORT_SYMBOL(drm_exec_fini);
+
+/**
+ * drm_exec_cleanup - cleanup when contention is detected
+ * @exec: the drm_exec object to cleanup
+ *
+ * Cleanup the current state and return true if we should stay inside the
+ * retry loop, false if there wasn't any contention detected and we can keep
+ * the objects locked.
+ */
+bool drm_exec_cleanup(struct drm_exec *exec)
+{
+	if (likely(!exec->contended)) {
+		ww_acquire_done(&exec->ticket);
+		return false;
+	}
+
+	if (likely(exec->contended == DRM_EXEC_DUMMY)) {
+		exec->contended = NULL;
+		ww_acquire_init(&exec->ticket, &reservation_ww_class);
+		return true;
+	}
+
+	drm_exec_unlock_all(exec);
+	exec->num_objects = 0;
+	return true;
+}
+EXPORT_SYMBOL(drm_exec_cleanup);
+
+/* Track the locked object in the array and reserve fences */
+static int drm_exec_obj_locked(struct drm_exec *exec,
+			       struct drm_gem_object *obj)
+{
+	if (unlikely(exec->num_objects == exec->max_objects)) {
+		size_t size = exec->max_objects * sizeof(void *);
+		void *tmp;
+
+		tmp = kvrealloc(exec->objects, size, size + PAGE_SIZE,
+				GFP_KERNEL);
+		if (!tmp)
+			return -ENOMEM;
+
+		exec->objects = tmp;
+		exec->max_objects += PAGE_SIZE / sizeof(void *);
+	}
+	drm_gem_object_get(obj);
+	exec->objects[exec->num_objects++] = obj;
+
+	return 0;
+}
+
+/* Make sure the contended object is locked first */
+static int drm_exec_lock_contended(struct drm_exec *exec)
+{
+	struct drm_gem_object *obj = exec->contended;
+	int ret;
+
+	if (likely(!obj))
+		return 0;
+
+	if (exec->interruptible) {
+		ret = dma_resv_lock_slow_interruptible(obj->resv,
+						       &exec->ticket);
+		if (unlikely(ret))
+			goto error_dropref;
+	} else {
+		dma_resv_lock_slow(obj->resv, &exec->ticket);
+	}
+
+	ret = drm_exec_obj_locked(exec, obj);
+	if (unlikely(ret)) {
+		dma_resv_unlock(obj->resv);
+		goto error_dropref;
+	}
+
+	swap(exec->prelocked, obj);
+
+error_dropref:
+	/* Always cleanup the contention so that error handling can kick in */
+	drm_gem_object_put(obj);
+	exec->contended = NULL;
+	return ret;
+}
+
+/**
+ * drm_exec_prepare_obj - prepare a GEM object for use
+ * @exec: the drm_exec object with the state
+ * @obj: the GEM object to prepare
+ * @num_fences: how many fences to reserve
+ *
+ * Prepare a GEM object for use by locking it and reserving fence slots. All
+ * successfully locked objects are put into the locked container.
+ *
+ * Returns: -EDEADLK if a contention is detected, -ENOMEM when memory
+ * allocation failed and zero for success.
+ */
+int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
+			 unsigned int num_fences)
+{
+	int ret;
+
+	ret = drm_exec_lock_contended(exec);
+	if (unlikely(ret))
+		return ret;
+
+	if (exec->prelocked == obj) {
+		drm_gem_object_put(exec->prelocked);
+		exec->prelocked = NULL;
+
+		return dma_resv_reserve_fences(obj->resv, num_fences);
+	}
+
+	if (exec->interruptible)
+		ret = dma_resv_lock_interruptible(obj->resv, &exec->ticket);
+	else
+		ret = dma_resv_lock(obj->resv, &exec->ticket);
+
+	if (unlikely(ret == -EDEADLK)) {
+		drm_gem_object_get(obj);
+		exec->contended = obj;
+		return -EDEADLK;
+	}
+
+	if (unlikely(ret))
+		return ret;
+
+	ret = drm_exec_obj_locked(exec, obj);
+	if (ret)
+		goto error_unlock;
+
+	/* Keep locked when reserving fences fails */
+	return dma_resv_reserve_fences(obj->resv, num_fences);
+
+error_unlock:
+	dma_resv_unlock(obj->resv);
+	return ret;
+}
+EXPORT_SYMBOL(drm_exec_prepare_obj);
+
+MODULE_DESCRIPTION("DRM execution context");
+MODULE_LICENSE("Dual MIT/GPL");

diff --git a/include/drm/drm_exec.h b/include/drm/drm_exec.h
new file mode 100644
index 000000000000..65e518c01db3
--- /dev/null
+++ b/include/drm/drm_exec.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#ifndef __DRM_EXEC_H__
+#define __DRM_EXEC_H__
+
+#include <linux/ww_mutex.h>
+
+struct drm_gem_object;
+
+/**
+ * struct drm_exec - Execution context
+ */
+struct drm_exec {
+	/**
+	 * @interruptible: If locks should be taken interruptible
+	 */
+	bool			interruptible;
+
+	/**
+	 * @ticket: WW ticket used for acquiring locks
+	 */
+	struct ww_acquire_ctx	ticket;
+
+	/**
+	 * @num_objects: number of objects locked
+	 */
+	unsigned int		num_objects;
+
+	/**
+	 * @max_objects: maximum objects in array
+	 */
+	unsigned int		max_objects;
+
+	/**
+	 * @objects: array of the locked objects
+	 */
+	struct drm_gem_object	**objects;
+
+	/**
+	 * @contended: contended GEM object we backed off for
+	 */
+	struct drm_gem_object	*contended;
+
+	/**
+	 * @prelocked: already locked GEM object because of contention
+	 */
+	struct drm_gem_object	*prelocked;
+};
+
+/**
+ * drm_exec_for_each_locked_object - iterate over all the locked objects
+ * @exec: drm_exec object
+ * @index: unsigned long index for the iteration
+ * @obj: the current GEM object
+ *
+ * Iterate over all the locked GEM objects inside the drm_exec object.
+ */
+#define drm_exec_for_each_locked_object(exec, index, obj)	\
+	for (index = 0, obj = (exec)->objects[0];		\
+	     index < (exec)->num_objects;			\
+	     ++index, obj = (exec)->objects[index])
+
+/**
+ * drm_exec_while_not_all_locked - loop until all GEM objects are prepared
+ * @exec: drm_exec object
+ *
+ * Core functionality of the drm_exec object. Loops until all GEM objects are
+ * prepared and no more contention exists.
+ *
+ * At the beginning of the loop it is guaranteed that no GEM object is locked.
+ */
+#define drm_exec_while_not_all_locked(exec)	\
+	while (drm_exec_cleanup(exec))
+
+/**
+ * drm_exec_continue_on_contention - continue the loop when we need to clean up
+ * @exec: drm_exec object
+ *
+ * Control flow helper to continue when a contention was detected and we need
+ * to clean up and re-start the loop to prepare all GEM objects.
+ */
+#define drm_exec_continue_on_contention(exec)		\
+	if (unlikely(drm_exec_is_contended(exec)))	\
+		continue
+
+/**
+ * drm_exec_break_on_contention - break a subordinate loop on contention
+ * @exec: drm_exec object
+ *
+ * Control flow helper to break a subordinate loop when a contention was
+ * detected and we need to clean up and re-start the loop to prepare all
+ * GEM objects.
+ */
+#define drm_exec_break_on_contention(exec)		\
+	if (unlikely(drm_exec_is_contended(exec)))	\
+		break
+
+/**
+ * drm_exec_is_contended - check for contention
+ * @exec: drm_exec object
+ *
+ * Returns true if the drm_exec object has run into some contention while
+ * locking a GEM object and needs to clean up.
+ */
+static inline bool drm_exec_is_contended(struct drm_exec *exec)
+{
+	return !!exec->contended;
+}
+
+void drm_exec_init(struct drm_exec *exec, bool interruptible);
+void drm_exec_fini(struct drm_exec *exec);
+bool drm_exec_cleanup(struct drm_exec *exec);
+int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
+			 unsigned int num_fences);
+
+#endif
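For reference, the calling convention the new API expects can be condensed
into one driver-side helper. This is a minimal sketch, not part of the
patch: the helper name, the two buffer parameters, the fence and the
DMA_RESV_USAGE_WRITE flag are illustrative assumptions; only the drm_exec
calls themselves come from the code above.

#include <drm/drm_exec.h>
#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

/* Sketch: lock two BOs with automatic deadlock handling, reserve one
 * fence slot on each, then publish the same fence to both reservation
 * objects before dropping the locks again. */
static int example_lock_and_fence(struct drm_gem_object *bo_a,
				  struct drm_gem_object *bo_b,
				  struct dma_fence *fence)
{
	struct drm_gem_object *obj;
	struct drm_exec exec;
	unsigned long index;
	int ret = 0;

	drm_exec_init(&exec, true);
	drm_exec_while_not_all_locked(&exec) {
		ret = drm_exec_prepare_obj(&exec, bo_a, 1);
		drm_exec_continue_on_contention(&exec);
		if (ret)
			goto error;

		ret = drm_exec_prepare_obj(&exec, bo_b, 1);
		drm_exec_continue_on_contention(&exec);
		if (ret)
			goto error;
	}

	/* All objects are locked here until drm_exec_fini() */
	drm_exec_for_each_locked_object(&exec, index, obj)
		dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_WRITE);

error:
	drm_exec_fini(&exec);
	return ret;
}

On -EDEADLK drm_exec_cleanup() drops all locks and the loop runs again
with the formerly contended object pre-locked, so two tasks calling this
helper with the buffers swapped cannot deadlock each other.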
From patchwork Tue Feb 28 08:33:59 2023
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13154519
From: Christian König <christian.koenig@amd.com>
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com
Subject: [PATCH 2/9] drm: add drm_exec selftests
Date: Tue, 28 Feb 2023 09:33:59 +0100
Message-Id: <20230228083406.1720795-3-christian.koenig@amd.com>
In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com>
References: <20230228083406.1720795-1-christian.koenig@amd.com>

Largely just the initial skeleton.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/Kconfig               |  1 +
 drivers/gpu/drm/tests/Makefile        |  3 +-
 drivers/gpu/drm/tests/drm_exec_test.c | 73 +++++++++++++++++++++++++++
 3 files changed, 76 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/tests/drm_exec_test.c

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 84a5fc28c48d..0c8d8ed69154 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -79,6 +79,7 @@ config DRM_KUNIT_TEST
 	select DRM_BUDDY
 	select DRM_EXPORT_FOR_TESTS if m
 	select DRM_KUNIT_TEST_HELPERS
+	select DRM_EXEC
 	default KUNIT_ALL_TESTS
 	help
 	  This builds unit tests for DRM. This option is not useful for
diff --git a/drivers/gpu/drm/tests/Makefile b/drivers/gpu/drm/tests/Makefile
index bca726a8f483..ba7baa622675 100644
--- a/drivers/gpu/drm/tests/Makefile
+++ b/drivers/gpu/drm/tests/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_DRM_KUNIT_TEST) += \
 	drm_modes_test.o \
 	drm_plane_helper_test.o \
 	drm_probe_helper_test.o \
-	drm_rect_test.o
+	drm_rect_test.o \
+	drm_exec_test.o
 
 CFLAGS_drm_mm_test.o := $(DISABLE_STRUCTLEAK_PLUGIN)

diff --git a/drivers/gpu/drm/tests/drm_exec_test.c b/drivers/gpu/drm/tests/drm_exec_test.c
new file mode 100644
index 000000000000..78eb61eb27cc
--- /dev/null
+++ b/drivers/gpu/drm/tests/drm_exec_test.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#define pr_fmt(fmt) "drm_exec: " fmt
+
+#include <kunit/test.h>
+
+#include <linux/module.h>
+#include <linux/prime_numbers.h>
+
+#include <drm/drm_exec.h>
+#include <drm/drm_device.h>
+#include <drm/drm_gem.h>
+
+#include "../lib/drm_random.h"
+
+static struct drm_device dev;
+
+static void drm_exec_sanitycheck(struct kunit *test)
+{
+	struct drm_exec exec;
+
+	drm_exec_init(&exec, true);
+	drm_exec_fini(&exec);
+	pr_info("%s - ok!\n", __func__);
+}
+
+static void drm_exec_lock1(struct kunit *test)
+{
+	struct drm_gem_object gobj = { };
+	struct drm_exec exec;
+	int ret;
+
+	drm_gem_private_object_init(&dev, &gobj, PAGE_SIZE);
+
+	drm_exec_init(&exec, true);
+	drm_exec_while_not_all_locked(&exec) {
+		ret = drm_exec_prepare_obj(&exec, &gobj, 1);
+		drm_exec_continue_on_contention(&exec);
+		if (ret) {
+			drm_exec_fini(&exec);
+			pr_err("%s - err %d!\n", __func__, ret);
+			return;
+		}
+	}
+	drm_exec_fini(&exec);
+	pr_info("%s - ok!\n", __func__);
+}
+
+static int drm_exec_suite_init(struct kunit_suite *suite)
+{
+	kunit_info(suite, "Testing DRM exec manager\n");
+	return 0;
+}
+
+static struct kunit_case drm_exec_tests[] = {
+	KUNIT_CASE(drm_exec_sanitycheck),
+	KUNIT_CASE(drm_exec_lock1),
+	{}
+};
+
+static struct kunit_suite drm_exec_test_suite = {
+	.name = "drm_exec",
+	.suite_init = drm_exec_suite_init,
+	.test_cases = drm_exec_tests,
+};
+
+kunit_test_suite(drm_exec_test_suite);
+
+MODULE_AUTHOR("AMD");
+MODULE_LICENSE("GPL and additional rights");
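The skeleton stops at a single object. A natural next case, sketched here
in the same style but not part of this patch (it would also need its own
KUNIT_CASE() entry in drm_exec_tests[]), is locking two objects in one
retry loop:

/* Hypothetical extension of the skeleton above: prepare two GEM objects
 * in a single retry loop, using only the drm_exec API from patch 1 and
 * the helpers already included by the test file. */
static void drm_exec_lock2(struct kunit *test)
{
	struct drm_gem_object gobj1 = { };
	struct drm_gem_object gobj2 = { };
	struct drm_exec exec;
	int ret;

	drm_gem_private_object_init(&dev, &gobj1, PAGE_SIZE);
	drm_gem_private_object_init(&dev, &gobj2, PAGE_SIZE);

	drm_exec_init(&exec, true);
	drm_exec_while_not_all_locked(&exec) {
		ret = drm_exec_prepare_obj(&exec, &gobj1, 1);
		drm_exec_continue_on_contention(&exec);
		if (ret)
			goto error;

		ret = drm_exec_prepare_obj(&exec, &gobj2, 1);
		drm_exec_continue_on_contention(&exec);
		if (ret)
			goto error;
	}
	drm_exec_fini(&exec);
	pr_info("%s - ok!\n", __func__);
	return;

error:
	/* drm_exec_fini() also unlocks everything still held */
	drm_exec_fini(&exec);
	pr_err("%s - err %d!\n", __func__, ret);
}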
From patchwork Tue Feb 28 08:34:00 2023
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13154520
From: Christian König <christian.koenig@amd.com>
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com
Subject: [PATCH 3/9] drm/amdkfd: switch over to using drm_exec
Date: Tue, 28 Feb 2023 09:34:00 +0100
Message-Id: <20230228083406.1720795-4-christian.koenig@amd.com>
In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com>
References: <20230228083406.1720795-1-christian.koenig@amd.com>

Avoids quite a bit of logic and kmalloc overhead.
Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 5 +- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 302 +++++++----------- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 14 + drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 + drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 32 +- 5 files changed, 151 insertions(+), 205 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h index 333780491867..e9ef493091a9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h @@ -25,13 +25,13 @@ #ifndef AMDGPU_AMDKFD_H_INCLUDED #define AMDGPU_AMDKFD_H_INCLUDED +#include #include #include #include #include #include #include -#include #include "amdgpu_sync.h" #include "amdgpu_vm.h" @@ -69,8 +69,7 @@ struct kgd_mem { struct hmm_range *range; struct list_head attachments; /* protected by amdkfd_process_info.lock */ - struct ttm_validate_buffer validate_list; - struct ttm_validate_buffer resv_list; + struct list_head validate_list; uint32_t domain; unsigned int mapped_to_gpu_memory; uint64_t va; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index d6320c836251..2f4aeaf711a9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -27,6 +27,8 @@ #include #include +#include + #include "amdgpu_object.h" #include "amdgpu_gem.h" #include "amdgpu_vm.h" @@ -897,28 +899,19 @@ static void add_kgd_mem_to_kfd_bo_list(struct kgd_mem *mem, struct amdkfd_process_info *process_info, bool userptr) { - struct ttm_validate_buffer *entry = &mem->validate_list; - struct amdgpu_bo *bo = mem->bo; - - INIT_LIST_HEAD(&entry->head); - entry->num_shared = 1; - entry->bo = &bo->tbo; - mutex_lock(&process_info->lock); if (userptr) - list_add_tail(&entry->head, &process_info->userptr_valid_list); + list_add_tail(&mem->validate_list, + &process_info->userptr_valid_list); else - list_add_tail(&entry->head, &process_info->kfd_bo_list); + list_add_tail(&mem->validate_list, &process_info->kfd_bo_list); mutex_unlock(&process_info->lock); } static void remove_kgd_mem_from_kfd_bo_list(struct kgd_mem *mem, struct amdkfd_process_info *process_info) { - struct ttm_validate_buffer *bo_list_entry; - - bo_list_entry = &mem->validate_list; mutex_lock(&process_info->lock); - list_del(&bo_list_entry->head); + list_del(&mem->validate_list); mutex_unlock(&process_info->lock); } @@ -1005,13 +998,12 @@ static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr, * object can track VM updates. 
*/ struct bo_vm_reservation_context { - struct amdgpu_bo_list_entry kfd_bo; /* BO list entry for the KFD BO */ - unsigned int n_vms; /* Number of VMs reserved */ - struct amdgpu_bo_list_entry *vm_pd; /* Array of VM BO list entries */ - struct ww_acquire_ctx ticket; /* Reservation ticket */ - struct list_head list, duplicates; /* BO lists */ - struct amdgpu_sync *sync; /* Pointer to sync object */ - bool reserved; /* Whether BOs are reserved */ + /* DRM execution context for the reservation */ + struct drm_exec exec; + /* Number of VMs reserved */ + unsigned int n_vms; + /* Pointer to sync object */ + struct amdgpu_sync *sync; }; enum bo_vm_match { @@ -1035,35 +1027,24 @@ static int reserve_bo_and_vm(struct kgd_mem *mem, WARN_ON(!vm); - ctx->reserved = false; ctx->n_vms = 1; ctx->sync = &mem->sync; - - INIT_LIST_HEAD(&ctx->list); - INIT_LIST_HEAD(&ctx->duplicates); - - ctx->vm_pd = kcalloc(ctx->n_vms, sizeof(*ctx->vm_pd), GFP_KERNEL); - if (!ctx->vm_pd) - return -ENOMEM; - - ctx->kfd_bo.priority = 0; - ctx->kfd_bo.tv.bo = &bo->tbo; - ctx->kfd_bo.tv.num_shared = 1; - list_add(&ctx->kfd_bo.tv.head, &ctx->list); - - amdgpu_vm_get_pd_bo(vm, &ctx->list, &ctx->vm_pd[0]); - - ret = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->list, - false, &ctx->duplicates); - if (ret) { - pr_err("Failed to reserve buffers in ttm.\n"); - kfree(ctx->vm_pd); - ctx->vm_pd = NULL; - return ret; + drm_exec_init(&ctx->exec, true); + drm_exec_while_not_all_locked(&ctx->exec) { + ret = amdgpu_vm_lock_pd(vm, &ctx->exec); + if (likely(!ret)) + ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, + 0); + drm_exec_continue_on_contention(&ctx->exec); + if (unlikely(ret)) + goto error; } - - ctx->reserved = true; return 0; + +error: + pr_err("Failed to reserve buffers in ttm.\n"); + drm_exec_fini(&ctx->exec); + return ret; } /** @@ -1080,63 +1061,39 @@ static int reserve_bo_and_cond_vms(struct kgd_mem *mem, struct amdgpu_vm *vm, enum bo_vm_match map_type, struct bo_vm_reservation_context *ctx) { - struct amdgpu_bo *bo = mem->bo; struct kfd_mem_attachment *entry; - unsigned int i; + struct amdgpu_bo *bo = mem->bo; int ret; - ctx->reserved = false; - ctx->n_vms = 0; - ctx->vm_pd = NULL; ctx->sync = &mem->sync; + drm_exec_init(&ctx->exec, true); + drm_exec_while_not_all_locked(&ctx->exec) { + ctx->n_vms = 0; + list_for_each_entry(entry, &mem->attachments, list) { + if ((vm && vm != entry->bo_va->base.vm) || + (entry->is_mapped != map_type + && map_type != BO_VM_ALL)) + continue; - INIT_LIST_HEAD(&ctx->list); - INIT_LIST_HEAD(&ctx->duplicates); - - list_for_each_entry(entry, &mem->attachments, list) { - if ((vm && vm != entry->bo_va->base.vm) || - (entry->is_mapped != map_type - && map_type != BO_VM_ALL)) - continue; - - ctx->n_vms++; - } - - if (ctx->n_vms != 0) { - ctx->vm_pd = kcalloc(ctx->n_vms, sizeof(*ctx->vm_pd), - GFP_KERNEL); - if (!ctx->vm_pd) - return -ENOMEM; - } - - ctx->kfd_bo.priority = 0; - ctx->kfd_bo.tv.bo = &bo->tbo; - ctx->kfd_bo.tv.num_shared = 1; - list_add(&ctx->kfd_bo.tv.head, &ctx->list); - - i = 0; - list_for_each_entry(entry, &mem->attachments, list) { - if ((vm && vm != entry->bo_va->base.vm) || - (entry->is_mapped != map_type - && map_type != BO_VM_ALL)) - continue; - - amdgpu_vm_get_pd_bo(entry->bo_va->base.vm, &ctx->list, - &ctx->vm_pd[i]); - i++; - } + ret = amdgpu_vm_lock_pd(vm, &ctx->exec); + drm_exec_break_on_contention(&ctx->exec); + if (unlikely(ret)) + goto error; + ++ctx->n_vms; + } + drm_exec_continue_on_contention(&ctx->exec); - ret = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->list, - false, 
&ctx->duplicates); - if (ret) { - pr_err("Failed to reserve buffers in ttm.\n"); - kfree(ctx->vm_pd); - ctx->vm_pd = NULL; - return ret; + ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1); + drm_exec_continue_on_contention(&ctx->exec); + if (unlikely(ret)) + goto error; } - - ctx->reserved = true; return 0; + +error: + pr_err("Failed to reserve buffers in ttm.\n"); + drm_exec_fini(&ctx->exec); + return ret; } /** @@ -1157,15 +1114,8 @@ static int unreserve_bo_and_vms(struct bo_vm_reservation_context *ctx, if (wait) ret = amdgpu_sync_wait(ctx->sync, intr); - if (ctx->reserved) - ttm_eu_backoff_reservation(&ctx->ticket, &ctx->list); - kfree(ctx->vm_pd); - + drm_exec_fini(&ctx->exec); ctx->sync = NULL; - - ctx->reserved = false; - ctx->vm_pd = NULL; - return ret; } @@ -1752,7 +1702,6 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu( bool use_release_notifier = (mem->bo->kfd_bo == mem); struct kfd_mem_attachment *entry, *tmp; struct bo_vm_reservation_context ctx; - struct ttm_validate_buffer *bo_list_entry; unsigned int mapped_to_gpu_memory; int ret; bool is_imported = false; @@ -1780,9 +1729,8 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu( } /* Make sure restore workers don't access the BO any more */ - bo_list_entry = &mem->validate_list; mutex_lock(&process_info->lock); - list_del(&bo_list_entry->head); + list_del(&mem->validate_list); mutex_unlock(&process_info->lock); /* Cleanup user pages and MMU notifiers */ @@ -2324,14 +2272,14 @@ static int update_invalid_user_pages(struct amdkfd_process_info *process_info, /* Move all invalidated BOs to the userptr_inval_list */ list_for_each_entry_safe(mem, tmp_mem, &process_info->userptr_valid_list, - validate_list.head) + validate_list) if (mem->invalid) - list_move_tail(&mem->validate_list.head, + list_move_tail(&mem->validate_list, &process_info->userptr_inval_list); /* Go through userptr_inval_list and update any invalid user_pages */ list_for_each_entry(mem, &process_info->userptr_inval_list, - validate_list.head) { + validate_list) { invalid = mem->invalid; if (!invalid) /* BO hasn't been invalidated since the last @@ -2409,50 +2357,43 @@ static int update_invalid_user_pages(struct amdkfd_process_info *process_info, */ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info) { - struct amdgpu_bo_list_entry *pd_bo_list_entries; - struct list_head resv_list, duplicates; - struct ww_acquire_ctx ticket; + struct ttm_operation_ctx ctx = { false, false }; struct amdgpu_sync sync; + struct drm_exec exec; struct amdgpu_vm *peer_vm; struct kgd_mem *mem, *tmp_mem; struct amdgpu_bo *bo; - struct ttm_operation_ctx ctx = { false, false }; - int i, ret; - - pd_bo_list_entries = kcalloc(process_info->n_vms, - sizeof(struct amdgpu_bo_list_entry), - GFP_KERNEL); - if (!pd_bo_list_entries) { - pr_err("%s: Failed to allocate PD BO list entries\n", __func__); - ret = -ENOMEM; - goto out_no_mem; - } - - INIT_LIST_HEAD(&resv_list); - INIT_LIST_HEAD(&duplicates); + int ret; - /* Get all the page directory BOs that need to be reserved */ - i = 0; - list_for_each_entry(peer_vm, &process_info->vm_list_head, - vm_list_node) - amdgpu_vm_get_pd_bo(peer_vm, &resv_list, - &pd_bo_list_entries[i++]); - /* Add the userptr_inval_list entries to resv_list */ - list_for_each_entry(mem, &process_info->userptr_inval_list, - validate_list.head) { - list_add_tail(&mem->resv_list.head, &resv_list); - mem->resv_list.bo = mem->validate_list.bo; - mem->resv_list.num_shared = mem->validate_list.num_shared; - } + amdgpu_sync_create(&sync); + 
drm_exec_init(&exec, true); /* Reserve all BOs and page tables for validation */ - ret = ttm_eu_reserve_buffers(&ticket, &resv_list, false, &duplicates); - WARN(!list_empty(&duplicates), "Duplicates should be empty"); - if (ret) - goto out_free; + drm_exec_while_not_all_locked(&exec) { + /* Reserve all the page directories */ + list_for_each_entry(peer_vm, &process_info->vm_list_head, + vm_list_node) { + ret = amdgpu_vm_lock_pd(peer_vm, &exec); + drm_exec_break_on_contention(&exec); + if (unlikely(ret)) + goto unreserve_out; + } + drm_exec_continue_on_contention(&exec); - amdgpu_sync_create(&sync); + /* Reserve the userptr_inval_list entries to resv_list */ + list_for_each_entry(mem, &process_info->userptr_inval_list, + validate_list) { + struct drm_gem_object *gobj; + + gobj = &mem->bo->tbo.base; + ret = drm_exec_prepare_obj(&exec, gobj, 1); + drm_exec_break_on_contention(&exec); + if (unlikely(ret)) + goto unreserve_out; + } + drm_exec_continue_on_contention(&exec); + } ret = process_validate_vms(process_info); if (ret) @@ -2461,7 +2402,7 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info) /* Validate BOs and update GPUVM page tables */ list_for_each_entry_safe(mem, tmp_mem, &process_info->userptr_inval_list, - validate_list.head) { + validate_list) { struct kfd_mem_attachment *attachment; bo = mem->bo; @@ -2503,12 +2444,9 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info) ret = process_update_pds(process_info, &sync); unreserve_out: - ttm_eu_backoff_reservation(&ticket, &resv_list); + drm_exec_fini(&exec); amdgpu_sync_wait(&sync, false); amdgpu_sync_free(&sync); -out_free: - kfree(pd_bo_list_entries); -out_no_mem: return ret; } @@ -2524,7 +2462,7 @@ static int confirm_valid_user_pages_locked(struct amdkfd_process_info *process_i list_for_each_entry_safe(mem, tmp_mem, &process_info->userptr_inval_list, - validate_list.head) { + validate_list) { bool valid = amdgpu_ttm_tt_get_user_pages_done( mem->bo->tbo.ttm, mem->range); @@ -2536,7 +2474,7 @@ static int confirm_valid_user_pages_locked(struct amdkfd_process_info *process_i } WARN(mem->invalid, "Valid BO is marked invalid"); - list_move_tail(&mem->validate_list.head, + list_move_tail(&mem->validate_list, &process_info->userptr_valid_list); } @@ -2646,50 +2584,46 @@ static void amdgpu_amdkfd_restore_userptr_worker(struct work_struct *work) */ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) { - struct amdgpu_bo_list_entry *pd_bo_list; struct amdkfd_process_info *process_info = info; struct amdgpu_vm *peer_vm; struct kgd_mem *mem; - struct bo_vm_reservation_context ctx; struct amdgpu_amdkfd_fence *new_fence; - int ret = 0, i; struct list_head duplicate_save; struct amdgpu_sync sync_obj; unsigned long failed_size = 0; unsigned long total_size = 0; + struct drm_exec exec; + int ret; INIT_LIST_HEAD(&duplicate_save); - INIT_LIST_HEAD(&ctx.list); - INIT_LIST_HEAD(&ctx.duplicates); - pd_bo_list = kcalloc(process_info->n_vms, - sizeof(struct amdgpu_bo_list_entry), - GFP_KERNEL); - if (!pd_bo_list) - return -ENOMEM; - - i = 0; mutex_lock(&process_info->lock); - list_for_each_entry(peer_vm, &process_info->vm_list_head, - vm_list_node) - amdgpu_vm_get_pd_bo(peer_vm, &ctx.list, &pd_bo_list[i++]); - /* Reserve all BOs and page tables/directory. 
Add all BOs from - * kfd_bo_list to ctx.list - */ - list_for_each_entry(mem, &process_info->kfd_bo_list, - validate_list.head) { - - list_add_tail(&mem->resv_list.head, &ctx.list); - mem->resv_list.bo = mem->validate_list.bo; - mem->resv_list.num_shared = mem->validate_list.num_shared; - } + drm_exec_init(&exec, false); + drm_exec_while_not_all_locked(&exec) { + list_for_each_entry(peer_vm, &process_info->vm_list_head, + vm_list_node) { + ret = amdgpu_vm_lock_pd(peer_vm, &exec); + drm_exec_break_on_contention(&exec); + if (unlikely(ret)) + goto ttm_reserve_fail; + } + drm_exec_continue_on_contention(&exec); - ret = ttm_eu_reserve_buffers(&ctx.ticket, &ctx.list, - false, &duplicate_save); - if (ret) { - pr_debug("Memory eviction: TTM Reserve Failed. Try again\n"); - goto ttm_reserve_fail; + /* Reserve all BOs and page tables/directory. Add all BOs from + * kfd_bo_list to ctx.list + */ + list_for_each_entry(mem, &process_info->kfd_bo_list, + validate_list) { + struct drm_gem_object *gobj; + + gobj = &mem->bo->tbo.base; + ret = drm_exec_prepare_obj(&exec, gobj, 1); + drm_exec_break_on_contention(&exec); + if (unlikely(ret)) + goto ttm_reserve_fail; + } + drm_exec_continue_on_contention(&exec); } amdgpu_sync_create(&sync_obj); @@ -2707,7 +2641,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) /* Validate BOs and map them to GPUVM (update VM page tables). */ list_for_each_entry(mem, &process_info->kfd_bo_list, - validate_list.head) { + validate_list) { struct amdgpu_bo *bo = mem->bo; uint32_t domain = mem->domain; @@ -2780,8 +2714,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) *ef = dma_fence_get(&new_fence->base); /* Attach new eviction fence to all BOs except pinned ones */ - list_for_each_entry(mem, &process_info->kfd_bo_list, - validate_list.head) { + list_for_each_entry(mem, &process_info->kfd_bo_list, validate_list) { if (mem->bo->tbo.pin_count) continue; @@ -2800,11 +2733,10 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) } validate_map_fail: - ttm_eu_backoff_reservation(&ctx.ticket, &ctx.list); amdgpu_sync_free(&sync_obj); ttm_reserve_fail: + drm_exec_fini(&exec); mutex_unlock(&process_info->lock); - kfree(pd_bo_list); return ret; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index b9441ab457ea..f10a9331af9b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -34,6 +34,7 @@ #include #include #include +#include #include "amdgpu.h" #include "amdgpu_trace.h" #include "amdgpu_amdkfd.h" @@ -334,6 +335,19 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm, list_add(&entry->tv.head, validated); } +/** + * amdgpu_vm_lock_pd - lock PD in drm_exec + * + * @vm: vm providing the BOs + * @exec: drm execution context + * + * Lock the VM root PD in the DRM execution context. 
+ */
+int amdgpu_vm_lock_pd(struct amdgpu_vm *vm, struct drm_exec *exec)
+{
+	return drm_exec_prepare_obj(exec, &vm->root.bo->tbo.base, 4);
+}
+
 /**
  * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
  *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 856a64bc7a89..4066731d3065 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -36,6 +36,8 @@
 #include "amdgpu_ring.h"
 #include "amdgpu_ids.h"
 
+struct drm_exec;
+
 struct amdgpu_bo_va;
 struct amdgpu_job;
 struct amdgpu_bo_list_entry;
@@ -389,6 +391,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm);
 void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 			 struct list_head *validated,
 			 struct amdgpu_bo_list_entry *entry);
+int amdgpu_vm_lock_pd(struct amdgpu_vm *vm, struct drm_exec *exec);
 bool amdgpu_vm_ready(struct amdgpu_vm *vm);
 int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 			      int (*callback)(void *p, struct amdgpu_bo *bo),
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index dc6fd6967050..6ca3c3ced9f0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -24,6 +24,8 @@
 #include
 #include
 #include
+#include <drm/drm_exec.h>
+
 #include "amdgpu_sync.h"
 #include "amdgpu_object.h"
 #include "amdgpu_vm.h"
@@ -1423,9 +1425,7 @@ struct svm_validate_context {
 	struct svm_range *prange;
 	bool intr;
 	DECLARE_BITMAP(bitmap, MAX_GPU_INSTANCE);
-	struct ttm_validate_buffer tv[MAX_GPU_INSTANCE];
-	struct list_head validate_list;
-	struct ww_acquire_ctx ticket;
+	struct drm_exec exec;
 };
 
 static int svm_range_reserve_bos(struct svm_validate_context *ctx)
@@ -1435,25 +1435,23 @@ static int svm_range_reserve_bos(struct svm_validate_context *ctx)
 	uint32_t gpuidx;
 	int r;
 
-	INIT_LIST_HEAD(&ctx->validate_list);
+	drm_exec_init(&ctx->exec, true);
 	for_each_set_bit(gpuidx, ctx->bitmap, MAX_GPU_INSTANCE) {
 		pdd = kfd_process_device_from_gpuidx(ctx->process, gpuidx);
 		if (!pdd) {
 			pr_debug("failed to find device idx %d\n", gpuidx);
-			return -EINVAL;
+			r = -EINVAL;
+			goto unreserve_out;
 		}
 		vm = drm_priv_to_vm(pdd->drm_priv);
 
-		ctx->tv[gpuidx].bo = &vm->root.bo->tbo;
-		ctx->tv[gpuidx].num_shared = 4;
-		list_add(&ctx->tv[gpuidx].head, &ctx->validate_list);
-	}
-
-	r = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->validate_list,
-				   ctx->intr, NULL);
-	if (r) {
-		pr_debug("failed %d to reserve bo\n", r);
-		return r;
+		r = amdgpu_vm_lock_pd(vm, &ctx->exec);
+		if (unlikely(r == -EDEADLK))
+			continue;
+		if (unlikely(r)) {
+			pr_debug("failed %d to reserve bo\n", r);
+			goto unreserve_out;
+		}
 	}
 
 	for_each_set_bit(gpuidx, ctx->bitmap, MAX_GPU_INSTANCE) {
@@ -1476,13 +1474,13 @@ static int svm_range_reserve_bos(struct svm_validate_context *ctx)
 	return 0;
 
 unreserve_out:
-	ttm_eu_backoff_reservation(&ctx->ticket, &ctx->validate_list);
+	drm_exec_fini(&ctx->exec);
 	return r;
 }
 
 static void svm_range_unreserve_bos(struct svm_validate_context *ctx)
 {
-	ttm_eu_backoff_reservation(&ctx->ticket, &ctx->validate_list);
+	drm_exec_fini(&ctx->exec);
 }
 
 static void *kfd_svm_page_owner(struct kfd_process *p, int32_t gpuidx)
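All of the converted call sites in this patch follow the same shape.
Stripped of the KFD specifics it is essentially this sketch (the helper
name and parameters are hypothetical, the body is drawn from
reserve_bo_and_vm() above):

/* Recurring idiom: lock the VM page directory plus one BO, retrying on
 * contention. On success the caller owns the locks and is responsible
 * for calling drm_exec_fini() once it is done with them. */
static int example_reserve_bo_and_vm(struct amdgpu_vm *vm,
				     struct amdgpu_bo *bo,
				     struct drm_exec *exec)
{
	int r;

	drm_exec_init(exec, true);
	drm_exec_while_not_all_locked(exec) {
		r = amdgpu_vm_lock_pd(vm, exec);
		if (likely(!r))
			r = drm_exec_prepare_obj(exec, &bo->tbo.base, 1);
		drm_exec_continue_on_contention(exec);
		if (unlikely(r)) {
			drm_exec_fini(exec);
			return r;
		}
	}
	return 0;
}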
From patchwork Tue Feb 28 08:34:01 2023
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13154522
From: Christian König <christian.koenig@amd.com>
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com
Subject: [PATCH 4/9] drm/amdgpu: use drm_exec for GEM and CSA handling
Date: Tue, 28 Feb 2023 09:34:01 +0100
Message-Id: <20230228083406.1720795-5-christian.koenig@amd.com>
In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com>
References: <20230228083406.1720795-1-christian.koenig@amd.com>

Start using the new component here as well.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c | 42 ++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 77 +++++++++++--------------
 2 files changed, 53 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
index c6d4d41c4393..ea434c8de047 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
@@ -22,6 +22,8 @@
  *
  * Author: Monk.liu@amd.com
  */
+#include <drm/drm_exec.h>
+
 #include "amdgpu.h"
 
 uint64_t amdgpu_csa_vaddr(struct amdgpu_device *adev)
@@ -65,31 +67,25 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 			  struct amdgpu_bo *bo, struct amdgpu_bo_va **bo_va,
 			  uint64_t csa_addr, uint32_t size)
 {
-	struct ww_acquire_ctx ticket;
-	struct list_head list;
-	struct amdgpu_bo_list_entry pd;
-	struct ttm_validate_buffer csa_tv;
+	struct drm_exec exec;
 	int r;
 
-	INIT_LIST_HEAD(&list);
-	INIT_LIST_HEAD(&csa_tv.head);
-	csa_tv.bo = &bo->tbo;
-	csa_tv.num_shared = 1;
-
-	list_add(&csa_tv.head, &list);
-	amdgpu_vm_get_pd_bo(vm, &list, &pd);
-
-	r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL);
-	if (r) {
-		DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r);
-		return r;
+	drm_exec_init(&exec, true);
+	drm_exec_while_not_all_locked(&exec) {
+		r = amdgpu_vm_lock_pd(vm, &exec);
+		if (likely(!r))
+			r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 0);
+		drm_exec_continue_on_contention(&exec);
+		if (unlikely(r)) {
+			DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r);
+			goto error;
+		}
 	}
 
 	*bo_va = amdgpu_vm_bo_add(adev, vm, bo);
 	if (!*bo_va) {
-		ttm_eu_backoff_reservation(&ticket, &list);
 		DRM_ERROR("failed to create bo_va for static CSA\n");
-		return -ENOMEM;
+		r = -ENOMEM;
+		goto error;
 	}
 
 	r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
@@ -99,10 +95,10 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	if (r) {
 		DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
 		amdgpu_vm_bo_del(adev, *bo_va);
-		ttm_eu_backoff_reservation(&ticket, &list);
-		return r;
+		goto error;
 	}
 
-	ttm_eu_backoff_reservation(&ticket, &list);
-	return 0;
+error:
+	drm_exec_fini(&exec);
+	return r;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index ed1164a87fce..b070f3ae1569 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@ #include #include +#include #include #include @@ -197,29 +198,23 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj, struct amdgpu_fpriv *fpriv = file_priv->driver_priv; struct amdgpu_vm *vm = &fpriv->vm; - struct amdgpu_bo_list_entry vm_pd; - struct list_head list, duplicates; struct dma_fence *fence = NULL; - struct ttm_validate_buffer tv; - struct ww_acquire_ctx ticket; struct amdgpu_bo_va *bo_va; + struct drm_exec exec; long r; - INIT_LIST_HEAD(&list); - INIT_LIST_HEAD(&duplicates); - - tv.bo = &bo->tbo; - tv.num_shared = 2; - list_add(&tv.head, &list); - - amdgpu_vm_get_pd_bo(vm, &list, &vm_pd); - - r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates); - if (r) { - dev_err(adev->dev, "leaking bo va because " - "we fail to reserve bo (%ld)\n", r); - return; + drm_exec_init(&exec, false); + drm_exec_while_not_all_locked(&exec) { + r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 0); + if (likely(!r)) + r = amdgpu_vm_lock_pd(vm, &exec); + drm_exec_continue_on_contention(&exec); + if (unlikely(r)) { + dev_err(adev->dev, "leaking bo va (%ld)\n", r); + goto out_unlock; + } } + bo_va = amdgpu_vm_bo_find(vm, bo); if (!bo_va || --bo_va->ref_count) goto out_unlock; @@ -229,6 +224,9 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj, goto out_unlock; r = amdgpu_vm_clear_freed(adev, vm, &fence); + if (unlikely(r < 0)) + dev_err(adev->dev, "failed to clear page " + "tables on GEM object close (%ld)\n", r); if (r || !fence) goto out_unlock; @@ -236,10 +234,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj, dma_fence_put(fence); out_unlock: - if (unlikely(r < 0)) - dev_err(adev->dev, "failed to clear page " - "tables on GEM object close (%ld)\n", r); - ttm_eu_backoff_reservation(&ticket, &list); + drm_exec_fini(&exec); } static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) @@ -673,10 +668,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data, struct amdgpu_fpriv *fpriv = filp->driver_priv; struct amdgpu_bo *abo; struct amdgpu_bo_va *bo_va; - struct amdgpu_bo_list_entry vm_pd; - struct ttm_validate_buffer tv; - struct ww_acquire_ctx ticket; - struct list_head list, duplicates; + struct drm_exec exec; uint64_t va_flags; uint64_t vm_size; int r = 0; @@ -726,36 +718,37 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data, return -EINVAL; } - INIT_LIST_HEAD(&list); - INIT_LIST_HEAD(&duplicates); if ((args->operation != AMDGPU_VA_OP_CLEAR) && !(args->flags & AMDGPU_VM_PAGE_PRT)) { gobj = drm_gem_object_lookup(filp, args->handle); if (gobj == NULL) return -ENOENT; abo = gem_to_amdgpu_bo(gobj); - tv.bo = &abo->tbo; - if (abo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) - tv.num_shared = 1; - else - tv.num_shared = 0; - list_add(&tv.head, &list); } else { gobj = NULL; abo = NULL; } - amdgpu_vm_get_pd_bo(&fpriv->vm, &list, &vm_pd); + drm_exec_init(&exec, true); + drm_exec_while_not_all_locked(&exec) { + if (gobj) { + r = drm_exec_prepare_obj(&exec, gobj, 0); + drm_exec_continue_on_contention(&exec); + if (unlikely(r)) + goto error; + } - r = ttm_eu_reserve_buffers(&ticket, &list, true, &duplicates); - if (r) - goto error_unref; + r = amdgpu_vm_lock_pd(&fpriv->vm, &exec); + drm_exec_continue_on_contention(&exec); + if (unlikely(r)) + goto error; + } if (abo) { bo_va = amdgpu_vm_bo_find(&fpriv->vm, abo); if (!bo_va) { r = -ENOENT; - goto error_backoff; + goto error; } } else if (args->operation != AMDGPU_VA_OP_CLEAR) { bo_va = fpriv->prt_va; @@ -792,10 +785,8 @@ int 
amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
 		amdgpu_gem_va_update_vm(adev, &fpriv->vm, bo_va,
 					args->operation);
 
-error_backoff:
-	ttm_eu_backoff_reservation(&ticket, &list);
-
-error_unref:
+error:
+	drm_exec_fini(&exec);
 	drm_gem_object_put(gobj);
 	return r;
 }
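Note that drm_exec_fini() is what drops the locks, so everything that
touches the reservations has to happen before it. Condensed from
amdgpu_gem_object_close() above into a hypothetical helper (bo_va
reference counting and the error reporting of the real code are left
out for brevity):

/* Sketch: unmap a BO from a VM with the BO and the VM page directory
 * held locked until drm_exec_fini(). */
static void example_unmap_bo(struct amdgpu_device *adev,
			     struct amdgpu_vm *vm, struct amdgpu_bo *bo)
{
	struct dma_fence *fence = NULL;
	struct amdgpu_bo_va *bo_va;
	struct drm_exec exec;
	long r;

	drm_exec_init(&exec, false);
	drm_exec_while_not_all_locked(&exec) {
		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 0);
		if (likely(!r))
			r = amdgpu_vm_lock_pd(vm, &exec);
		drm_exec_continue_on_contention(&exec);
		if (unlikely(r))
			goto out_unlock;
	}

	bo_va = amdgpu_vm_bo_find(vm, bo);
	if (bo_va)
		amdgpu_vm_bo_del(adev, bo_va);

	r = amdgpu_vm_clear_freed(adev, vm, &fence);
	if (!r && fence) {
		amdgpu_bo_fence(bo, fence, true);
		dma_fence_put(fence);
	}

out_unlock:
	drm_exec_fini(&exec);
}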
From patchwork Tue Feb 28 08:34:02 2023 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 13154525
From: Christian König To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com Subject: [PATCH 5/9] drm/amdgpu: use drm_exec for MES testing Date: Tue, 28 Feb 2023 09:34:02 +0100 Message-Id: <20230228083406.1720795-6-christian.koenig@amd.com> In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com> References: <20230228083406.1720795-1-christian.koenig@amd.com>

Start using the new component here as well.

Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 86 +++++++++++-------------- 1 file changed, 39 insertions(+), 47 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c index 82e27bd4f038..95292a65fd25 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c @@ -22,6 +22,7 @@ */ #include +#include #include "amdgpu_mes.h" #include "amdgpu.h" @@ -1126,34 +1127,29 @@ int amdgpu_mes_ctx_map_meta_data(struct amdgpu_device *adev, struct amdgpu_mes_ctx_data *ctx_data) { struct amdgpu_bo_va *bo_va; - struct ww_acquire_ctx ticket; - struct list_head list; - struct amdgpu_bo_list_entry pd; - struct ttm_validate_buffer csa_tv; struct amdgpu_sync sync; + struct drm_exec exec; int r; amdgpu_sync_create(&sync); - INIT_LIST_HEAD(&list); - INIT_LIST_HEAD(&csa_tv.head); - csa_tv.bo = &ctx_data->meta_data_obj->tbo; - csa_tv.num_shared = 1; - - list_add(&csa_tv.head, &list); - amdgpu_vm_get_pd_bo(vm, &list, &pd); - - r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL); - if (r) { - DRM_ERROR("failed to reserve meta data BO: err=%d\n", r); - return r; + drm_exec_init(&exec, false); + drm_exec_while_not_all_locked(&exec) { + r = drm_exec_prepare_obj(&exec, + &ctx_data->meta_data_obj->tbo.base, + 0); + if (likely(!r)) + r = amdgpu_vm_lock_pd(vm, &exec); + drm_exec_continue_on_contention(&exec); + if (unlikely(r)) + goto error_fini_exec; } bo_va = amdgpu_vm_bo_add(adev, vm, ctx_data->meta_data_obj); if (!bo_va) { - ttm_eu_backoff_reservation(&ticket, &list); DRM_ERROR("failed to create bo_va for meta data BO\n"); - return -ENOMEM; + r = -ENOMEM; + goto error_fini_exec; } r = amdgpu_vm_bo_map(adev, bo_va, ctx_data->meta_data_gpu_addr, 0, @@ -1163,33 +1159,35 @@ int amdgpu_mes_ctx_map_meta_data(struct amdgpu_device *adev, if (r) { DRM_ERROR("failed to do bo_map on meta data, err=%d\n", r); - goto error; + goto error_del_bo_va; } r = amdgpu_vm_bo_update(adev, bo_va, false); if (r) { DRM_ERROR("failed to do vm_bo_update on meta data\n"); - goto error; + goto error_del_bo_va; } amdgpu_sync_fence(&sync, bo_va->last_pt_update); r = amdgpu_vm_update_pdes(adev, vm, false); if (r) { DRM_ERROR("failed to update pdes on meta data\n"); - goto error; + goto error_del_bo_va; } amdgpu_sync_fence(&sync, vm->last_update); amdgpu_sync_wait(&sync, false); - ttm_eu_backoff_reservation(&ticket, &list); +
drm_exec_fini(&exec); amdgpu_sync_free(&sync); ctx_data->meta_data_va = bo_va; return 0; -error: +error_del_bo_va: amdgpu_vm_bo_del(adev, bo_va); - ttm_eu_backoff_reservation(&ticket, &list); + +error_fini_exec: + drm_exec_fini(&exec); amdgpu_sync_free(&sync); return r; } @@ -1200,34 +1198,28 @@ int amdgpu_mes_ctx_unmap_meta_data(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va = ctx_data->meta_data_va; struct amdgpu_bo *bo = ctx_data->meta_data_obj; struct amdgpu_vm *vm = bo_va->base.vm; - struct amdgpu_bo_list_entry vm_pd; - struct list_head list, duplicates; - struct dma_fence *fence = NULL; - struct ttm_validate_buffer tv; - struct ww_acquire_ctx ticket; - long r = 0; - - INIT_LIST_HEAD(&list); - INIT_LIST_HEAD(&duplicates); - - tv.bo = &bo->tbo; - tv.num_shared = 2; - list_add(&tv.head, &list); - - amdgpu_vm_get_pd_bo(vm, &list, &vm_pd); - - r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates); - if (r) { - dev_err(adev->dev, "leaking bo va because " - "we fail to reserve bo (%ld)\n", r); - return r; + struct dma_fence *fence; + struct drm_exec exec; + long r; + + drm_exec_init(&exec, false); + drm_exec_while_not_all_locked(&exec) { + r = drm_exec_prepare_obj(&exec, + &ctx_data->meta_data_obj->tbo.base, + 0); + if (likely(!r)) + r = amdgpu_vm_lock_pd(vm, &exec); + drm_exec_continue_on_contention(&exec); + if (unlikely(r)) + goto out_unlock; } amdgpu_vm_bo_del(adev, bo_va); if (!amdgpu_vm_ready(vm)) goto out_unlock; - r = dma_resv_get_singleton(bo->tbo.base.resv, DMA_RESV_USAGE_BOOKKEEP, &fence); + r = dma_resv_get_singleton(bo->tbo.base.resv, DMA_RESV_USAGE_BOOKKEEP, + &fence); if (r) goto out_unlock; if (fence) { @@ -1246,7 +1238,7 @@ int amdgpu_mes_ctx_unmap_meta_data(struct amdgpu_device *adev, out_unlock: if (unlikely(r < 0)) dev_err(adev->dev, "failed to clear page tables (%ld)\n", r); - ttm_eu_backoff_reservation(&ticket, &list); + drm_exec_fini(&exec); return r; }
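Two details of the MES conversion above are worth calling out. Both call sites pass false to drm_exec_init() while the ioctl paths elsewhere in the series pass true; from the call sites, the second parameter evidently selects interruptible waits. And drm_exec_fini() is valid on every exit path once the context is initialized, which is what lets the error handling collapse into the error_fini_exec label. A short sketch under those assumptions (the helper name is hypothetical):

#include <drm/drm_exec.h>
#include <drm/drm_gem.h>

/* Sketch: one teardown path regardless of where locking failed. */
static void example_with_unwind(struct drm_gem_object *obj)
{
	struct drm_exec exec;
	int r;

	drm_exec_init(&exec, false);	/* kernel-internal: no signals */
	drm_exec_while_not_all_locked(&exec) {
		r = drm_exec_prepare_obj(&exec, obj, 0);
		drm_exec_continue_on_contention(&exec);
		if (unlikely(r))
			break;		/* fini below still unwinds */
	}

	/* ... work under the reservations would go here ... */

	drm_exec_fini(&exec);		/* drops whatever is still held */
}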
From patchwork Tue Feb 28 08:34:03 2023 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 13154526
From: Christian König To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com Subject: [PATCH 6/9] drm/amdgpu: use the new drm_exec object for CS v2 Date: Tue, 28 Feb 2023 09:34:03 +0100 Message-Id: <20230228083406.1720795-7-christian.koenig@amd.com> In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com> References: <20230228083406.1720795-1-christian.koenig@amd.com>

Use the new component here as well and remove the old handling.
v2: drop duplicate handling Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 - drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c | 71 ++----- drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 210 +++++++++----------- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h | 7 +- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 22 -- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 - 7 files changed, 115 insertions(+), 204 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 4e4efd10cb89..255161dd05f1 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -54,7 +54,6 @@ #include #include -#include #include #include diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c index 252a876b0725..b6298e901cbd 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c @@ -28,6 +28,7 @@ * Christian König */ +#include #include #include "amdgpu.h" @@ -50,13 +51,20 @@ static void amdgpu_bo_list_free(struct kref *ref) refcount); struct amdgpu_bo_list_entry *e; - amdgpu_bo_list_for_each_entry(e, list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + amdgpu_bo_list_for_each_entry(e, list) + amdgpu_bo_unref(&e->bo); + call_rcu(&list->rhead, amdgpu_bo_list_free_rcu); +} - amdgpu_bo_unref(&bo); - } +static int amdgpu_bo_list_entry_cmp(const void *_a, const void *_b) +{ + const struct amdgpu_bo_list_entry *a = _a, *b = _b; - call_rcu(&list->rhead, amdgpu_bo_list_free_rcu); + if (a->priority > b->priority) + return 1; + if (a->priority < b->priority) + return -1; + return 0; } int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp, @@ -118,7 +126,7 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp, entry->priority = min(info[i].bo_priority, AMDGPU_BO_LIST_MAX_PRIORITY); - entry->tv.bo = &bo->tbo; + entry->bo = bo; if (bo->preferred_domains == AMDGPU_GEM_DOMAIN_GDS) list->gds_obj = bo; @@ -133,6 +141,8 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp, list->first_userptr = first_userptr; list->num_entries = num_entries; + sort(array, last_entry, sizeof(struct amdgpu_bo_list_entry), + amdgpu_bo_list_entry_cmp, NULL); trace_amdgpu_cs_bo_status(list->num_entries, total_size); @@ -141,16 +151,10 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp, return 0; error_free: - for (i = 0; i < last_entry; ++i) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo); - - amdgpu_bo_unref(&bo); - } - for (i = first_userptr; i < num_entries; ++i) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo); - - amdgpu_bo_unref(&bo); - } + for (i = 0; i < last_entry; ++i) + amdgpu_bo_unref(&array[i].bo); + for (i = first_userptr; i < num_entries; ++i) + amdgpu_bo_unref(&array[i].bo); kvfree(list); return r; @@ -182,41 +186,6 @@ int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id, return -ENOENT; } -void amdgpu_bo_list_get_list(struct amdgpu_bo_list *list, - struct list_head *validated) -{ - /* This is based on the bucket sort with O(n) time complexity. * An item with priority "i" is added to bucket[i]. The lists are then * concatenated in descending order.
- */ - struct list_head bucket[AMDGPU_BO_LIST_NUM_BUCKETS]; - struct amdgpu_bo_list_entry *e; - unsigned i; - - for (i = 0; i < AMDGPU_BO_LIST_NUM_BUCKETS; i++) - INIT_LIST_HEAD(&bucket[i]); - - /* Since buffers which appear sooner in the relocation list are - * likely to be used more often than buffers which appear later - * in the list, the sort mustn't change the ordering of buffers - * with the same priority, i.e. it must be stable. - */ - amdgpu_bo_list_for_each_entry(e, list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - unsigned priority = e->priority; - - if (!bo->parent) - list_add_tail(&e->tv.head, &bucket[priority]); - - e->user_pages = NULL; - e->range = NULL; - } - - /* Connect the sorted buckets in the output list. */ - for (i = 0; i < AMDGPU_BO_LIST_NUM_BUCKETS; i++) - list_splice(&bucket[i], validated); -} - void amdgpu_bo_list_put(struct amdgpu_bo_list *list) { kref_put(&list->refcount, amdgpu_bo_list_free); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h index ededdc01ca28..26c01cb131f2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h @@ -23,7 +23,6 @@ #ifndef __AMDGPU_BO_LIST_H__ #define __AMDGPU_BO_LIST_H__ -#include #include struct hmm_range; @@ -36,7 +35,7 @@ struct amdgpu_bo_va; struct amdgpu_fpriv; struct amdgpu_bo_list_entry { - struct ttm_validate_buffer tv; + struct amdgpu_bo *bo; struct amdgpu_bo_va *bo_va; uint32_t priority; struct page **user_pages; @@ -60,8 +59,6 @@ struct amdgpu_bo_list { int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id, struct amdgpu_bo_list **result); -void amdgpu_bo_list_get_list(struct amdgpu_bo_list *list, - struct list_head *validated); void amdgpu_bo_list_put(struct amdgpu_bo_list *list); int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in, struct drm_amdgpu_bo_list_entry **info_param); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 0f4cb41078c1..ae4a6fcbbffa 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -65,6 +65,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, } amdgpu_sync_create(&p->sync); + drm_exec_init(&p->exec, true); return 0; } @@ -122,7 +123,6 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p, uint32_t *offset) { struct drm_gem_object *gobj; - struct amdgpu_bo *bo; unsigned long size; int r; @@ -130,21 +130,16 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p, if (gobj == NULL) return -EINVAL; - bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj)); - p->uf_entry.priority = 0; - p->uf_entry.tv.bo = &bo->tbo; - /* One for TTM and two for the CS job */ - p->uf_entry.tv.num_shared = 3; - + p->uf_bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj)); drm_gem_object_put(gobj); - size = amdgpu_bo_size(bo); + size = amdgpu_bo_size(p->uf_bo); if (size != PAGE_SIZE || (data->offset + 8) > size) { r = -EINVAL; goto error_unref; } - if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm)) { + if (amdgpu_ttm_tt_get_usermm(p->uf_bo->tbo.ttm)) { r = -EINVAL; goto error_unref; } @@ -154,7 +149,7 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p, return 0; error_unref: - amdgpu_bo_unref(&bo); + amdgpu_bo_unref(&p->uf_bo); return r; } @@ -310,7 +305,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p, goto free_all_kdata; } - if (p->uf_entry.tv.bo) + if (p->uf_bo) p->gang_leader->uf_addr = uf_offset; kvfree(chunk_array); @@ -355,7 +350,7 @@ static int 
amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, ib = &job->ibs[job->num_ibs++]; /* MM engine doesn't support user fences */ - if (p->uf_entry.tv.bo && ring->funcs->no_user_fence) + if (p->uf_bo && ring->funcs->no_user_fence) return -EINVAL; if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX && @@ -814,55 +809,18 @@ static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo) return r; } -static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p, - struct list_head *validated) -{ - struct ttm_operation_ctx ctx = { true, false }; - struct amdgpu_bo_list_entry *lobj; - int r; - - list_for_each_entry(lobj, validated, tv.head) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo); - struct mm_struct *usermm; - - usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm); - if (usermm && usermm != current->mm) - return -EPERM; - - if (amdgpu_ttm_tt_is_userptr(bo->tbo.ttm) && - lobj->user_invalidated && lobj->user_pages) { - amdgpu_bo_placement_from_domain(bo, - AMDGPU_GEM_DOMAIN_CPU); - r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); - if (r) - return r; - - amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm, - lobj->user_pages); - } - - r = amdgpu_cs_bo_validate(p, bo); - if (r) - return r; - - kvfree(lobj->user_pages); - lobj->user_pages = NULL; - } - return 0; -} - static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + struct ttm_operation_ctx ctx = { true, false }; struct amdgpu_vm *vm = &fpriv->vm; struct amdgpu_bo_list_entry *e; - struct list_head duplicates; + struct drm_gem_object *obj; + unsigned long index; unsigned int i; int r; - INIT_LIST_HEAD(&p->validated); - /* p->bo_list could already be assigned if AMDGPU_CHUNK_ID_BO_HANDLES is present */ if (cs->in.bo_list_handle) { if (p->bo_list) @@ -882,25 +840,13 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, mutex_lock(&p->bo_list->bo_list_mutex); - /* One for TTM and one for the CS job */ - amdgpu_bo_list_for_each_entry(e, p->bo_list) - e->tv.num_shared = 2; - - amdgpu_bo_list_get_list(p->bo_list, &p->validated); - - INIT_LIST_HEAD(&duplicates); - amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd); - - if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent) - list_add(&p->uf_entry.tv.head, &p->validated); - /* Get userptr backing pages. 
If pages are updated after registered * in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do * amdgpu_ttm_backend_bind() to flush and invalidate new pages */ amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); bool userpage_invalidated = false; + struct amdgpu_bo *bo = e->bo; int i; e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages, @@ -928,18 +874,56 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, e->user_invalidated = userpage_invalidated; } - r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true, - &duplicates); - if (unlikely(r != 0)) { - if (r != -ERESTARTSYS) - DRM_ERROR("ttm_eu_reserve_buffers failed.\n"); - goto out_free_user_pages; + drm_exec_while_not_all_locked(&p->exec) { + r = amdgpu_vm_lock_pd(&fpriv->vm, &p->exec); + drm_exec_continue_on_contention(&p->exec); + if (unlikely(r)) + goto out_free_user_pages; + + amdgpu_bo_list_for_each_entry(e, p->bo_list) { + r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base, 2); + drm_exec_break_on_contention(&p->exec); + if (unlikely(r)) + goto out_free_user_pages; + + e->bo_va = amdgpu_vm_bo_find(vm, e->bo); + e->range = NULL; + } + drm_exec_continue_on_contention(&p->exec); + + if (p->uf_bo) { + r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base, + 2); + drm_exec_continue_on_contention(&p->exec); + if (unlikely(r)) + goto out_free_user_pages; + } } - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { + struct mm_struct *usermm; - e->bo_va = amdgpu_vm_bo_find(vm, bo); + usermm = amdgpu_ttm_tt_get_usermm(e->bo->tbo.ttm); + if (usermm && usermm != current->mm) { + r = -EPERM; + goto out_free_user_pages; + } + + if (amdgpu_ttm_tt_is_userptr(e->bo->tbo.ttm) && + e->user_invalidated && e->user_pages) { + amdgpu_bo_placement_from_domain(e->bo, + AMDGPU_GEM_DOMAIN_CPU); + r = ttm_bo_validate(&e->bo->tbo, &e->bo->placement, + &ctx); + if (r) + goto out_free_user_pages; + + amdgpu_ttm_tt_set_user_pages(e->bo->tbo.ttm, + e->user_pages); + } + + kvfree(e->user_pages); + e->user_pages = NULL; } amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, @@ -951,25 +935,21 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, amdgpu_cs_bo_validate, p); if (r) { DRM_ERROR("amdgpu_vm_validate_pt_bos() failed.\n"); - goto error_validate; + goto out_free_user_pages; } - r = amdgpu_cs_list_validate(p, &duplicates); - if (r) - goto error_validate; - - r = amdgpu_cs_list_validate(p, &p->validated); - if (r) - goto error_validate; - - if (p->uf_entry.tv.bo) { - struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo); + drm_exec_for_each_locked_object(&p->exec, index, obj) { + r = amdgpu_cs_bo_validate(p, gem_to_amdgpu_bo(obj)); + if (unlikely(r)) + goto out_free_user_pages; + } - r = amdgpu_ttm_alloc_gart(&uf->tbo); - if (r) - goto error_validate; + if (p->uf_bo) { + r = amdgpu_ttm_alloc_gart(&p->uf_bo->tbo); + if (unlikely(r)) + goto out_free_user_pages; - p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(uf); + p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(p->uf_bo); } amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved, @@ -981,12 +961,9 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, p->bo_list->oa_obj); return 0; -error_validate: - ttm_eu_backoff_reservation(&p->ticket, &p->validated); - out_free_user_pages: amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + 
struct amdgpu_bo *bo = e->bo; if (!e->user_pages) continue; @@ -1093,7 +1070,6 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) struct amdgpu_vm *vm = &fpriv->vm; struct amdgpu_bo_list_entry *e; struct amdgpu_bo_va *bo_va; - struct amdgpu_bo *bo; unsigned int i; int r; @@ -1122,11 +1098,6 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) } amdgpu_bo_list_for_each_entry(e, p->bo_list) { - /* ignore duplicates */ - bo = ttm_to_amdgpu_bo(e->tv.bo); - if (!bo) - continue; - bo_va = e->bo_va; if (bo_va == NULL) continue; @@ -1164,7 +1135,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (amdgpu_vm_debug) { /* Invalidate all BOs to test for userspace bugs */ amdgpu_bo_list_for_each_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + struct amdgpu_bo *bo = e->bo; /* ignore duplicates */ if (!bo) @@ -1181,8 +1152,9 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; struct drm_gpu_scheduler *sched; - struct amdgpu_bo_list_entry *e; + struct drm_gem_object *obj; struct dma_fence *fence; + unsigned long index; unsigned int i; int r; @@ -1193,8 +1165,9 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) return r; } - list_for_each_entry(e, &p->validated, tv.head) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + drm_exec_for_each_locked_object(&p->exec, index, obj) { + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); + struct dma_resv *resv = bo->tbo.base.resv; enum amdgpu_sync_mode sync_mode; @@ -1255,6 +1228,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, struct amdgpu_fpriv *fpriv = p->filp->driver_priv; struct amdgpu_job *leader = p->gang_leader; struct amdgpu_bo_list_entry *e; + struct drm_gem_object *gobj; + unsigned long index; unsigned int i; uint64_t seq; int r; @@ -1293,9 +1268,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, */ r = 0; amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - - r |= !amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm, e->range); + r |= !amdgpu_ttm_tt_get_user_pages_done(e->bo->tbo.ttm, + e->range); e->range = NULL; } if (r) { @@ -1304,20 +1278,22 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, } p->fence = dma_fence_get(&leader->base.s_fence->finished); - list_for_each_entry(e, &p->validated, tv.head) { + drm_exec_for_each_locked_object(&p->exec, index, gobj) { + + ttm_bo_move_to_lru_tail_unlocked(&gem_to_amdgpu_bo(gobj)->tbo); /* Everybody except for the gang leader uses READ */ for (i = 0; i < p->gang_size; ++i) { if (p->jobs[i] == leader) continue; - dma_resv_add_fence(e->tv.bo->base.resv, + dma_resv_add_fence(gobj->resv, &p->jobs[i]->base.s_fence->finished, DMA_RESV_USAGE_READ); } - /* The gang leader is remembered as writer */ - e->tv.num_shared = 0; + /* The gang leader is remembered as writer */ + dma_resv_add_fence(gobj->resv, p->fence, DMA_RESV_USAGE_WRITE); } seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_leader_idx], @@ -1333,7 +1309,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, cs->out.handle = seq; leader->uf_sequence = seq; - amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket); + amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->exec.ticket); for (i = 0; i < p->gang_size; ++i) { amdgpu_job_free_resources(p->jobs[i]); trace_amdgpu_cs_ioctl(p->jobs[i]); @@ -1342,7 +1318,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, } amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm); -
ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence); mutex_unlock(&p->adev->notifier_lock); mutex_unlock(&p->bo_list->bo_list_mutex); @@ -1363,6 +1338,8 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser) unsigned i; amdgpu_sync_free(&parser->sync); + drm_exec_fini(&parser->exec); + for (i = 0; i < parser->num_post_deps; i++) { drm_syncobj_put(parser->post_deps[i].syncobj); kfree(parser->post_deps[i].chain); @@ -1383,11 +1360,7 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser) if (parser->jobs[i]) amdgpu_job_free(parser->jobs[i]); } - if (parser->uf_entry.tv.bo) { - struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo); - - amdgpu_bo_unref(&uf); - } + amdgpu_bo_unref(&parser->uf_bo); } int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) @@ -1448,7 +1421,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) return 0; error_backoff: - ttm_eu_backoff_reservation(&parser.ticket, &parser.validated); mutex_unlock(&parser.bo_list->bo_list_mutex); error_fini: @@ -1783,7 +1755,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser, *map = mapping; /* Double check that the BO is reserved by this CS */ - if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket) + if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->exec.ticket) return -EINVAL; if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h index fb3e3d56d427..39c33ad100cb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h @@ -24,6 +24,7 @@ #define __AMDGPU_CS_H__ #include +#include #include "amdgpu_job.h" #include "amdgpu_bo_list.h" @@ -62,11 +63,9 @@ struct amdgpu_cs_parser { struct amdgpu_job *gang_leader; /* buffer objects */ - struct ww_acquire_ctx ticket; + struct drm_exec exec; struct amdgpu_bo_list *bo_list; struct amdgpu_mn *mn; - struct amdgpu_bo_list_entry vm_pd; - struct list_head validated; struct dma_fence *fence; uint64_t bytes_moved_threshold; uint64_t bytes_moved_vis_threshold; @@ -74,7 +73,7 @@ struct amdgpu_cs_parser { uint64_t bytes_moved_vis; /* user fence */ - struct amdgpu_bo_list_entry uf_entry; + struct amdgpu_bo *uf_bo; unsigned num_post_deps; struct amdgpu_cs_post_dep *post_deps; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index f10a9331af9b..c7e6421ee5d2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -313,28 +313,6 @@ void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base, amdgpu_vm_bo_evicted(base); } -/** - * amdgpu_vm_get_pd_bo - add the VM PD to a validation list - * - * @vm: vm providing the BOs - * @validated: head of validation list - * @entry: entry to add - * - * Add the page directory to the list of BOs to - * validate for command submission. 
- */ -void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm, - struct list_head *validated, - struct amdgpu_bo_list_entry *entry) -{ - entry->priority = 0; - entry->tv.bo = &vm->root.bo->tbo; - /* Two for VM updates, one for TTM and one for the CS job */ - entry->tv.num_shared = 4; - entry->user_pages = NULL; - list_add(&entry->tv.head, validated); -} - /** * amdgpu_vm_lock_pd - lock PD in drm_exec * diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h index 4066731d3065..f6cc7e22e574 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h @@ -388,9 +388,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm); int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm); void amdgpu_vm_release_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm); void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm); -void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm, - struct list_head *validated, - struct amdgpu_bo_list_entry *entry); int amdgpu_vm_lock_pd(struct amdgpu_vm *vm, struct drm_exec *exec); bool amdgpu_vm_ready(struct amdgpu_vm *vm); int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
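Beyond the locking loop itself, the CS conversion above replaces every walk of the old p->validated list with drm_exec_for_each_locked_object(), iterating whatever the context currently holds. A sketch of that consumer side, with the validate callback standing in for driver code such as amdgpu_cs_bo_validate():

#include <drm/drm_exec.h>
#include <drm/drm_gem.h>

/* Sketch: post-lock processing iterates the exec context itself. */
static int example_validate_all(struct drm_exec *exec,
				int (*validate)(struct drm_gem_object *))
{
	struct drm_gem_object *obj;
	unsigned long index;
	int r;

	drm_exec_for_each_locked_object(exec, index, obj) {
		r = validate(obj);
		if (unlikely(r))
			return r; /* caller unwinds via drm_exec_fini() */
	}
	return 0;
}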
From patchwork Tue Feb 28 08:34:04 2023 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 13154523
From: Christian König To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com Subject: [PATCH 7/9] drm/radeon: switch over to drm_exec Date: Tue, 28 Feb 2023 09:34:04 +0100 Message-Id: <20230228083406.1720795-8-christian.koenig@amd.com> In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com> References: <20230228083406.1720795-1-christian.koenig@amd.com>

Just a straightforward conversion without any optimization.

Signed-off-by: Christian König --- drivers/gpu/drm/radeon/radeon.h | 7 ++-- drivers/gpu/drm/radeon/radeon_cs.c | 45 +++++++++++++------------- drivers/gpu/drm/radeon/radeon_gem.c | 40 +++++++++++++---------- drivers/gpu/drm/radeon/radeon_object.c | 25 +++++++------- drivers/gpu/drm/radeon/radeon_object.h | 2 +- drivers/gpu/drm/radeon/radeon_vm.c | 10 +++--- 6 files changed, 66 insertions(+), 63 deletions(-) diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index 57e20780a458..c67b537170e7 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -75,8 +75,8 @@ #include #include -#include +#include #include #include @@ -457,7 +457,8 @@ struct radeon_mman { struct radeon_bo_list { struct radeon_bo *robj; - struct ttm_validate_buffer tv; + struct list_head list; + bool shared; uint64_t gpu_offset; unsigned preferred_domains; unsigned allowed_domains; @@ -1068,6 +1069,7 @@ struct radeon_cs_parser { struct radeon_bo_list *vm_bos; struct list_head validated; unsigned dma_reloc_idx; + struct drm_exec exec; /* indices of various chunks */ struct radeon_cs_chunk *chunk_ib; struct radeon_cs_chunk *chunk_relocs; @@ -1081,7 +1083,6 @@ struct radeon_cs_parser { u32 cs_flags; u32 ring; s32 priority; - struct ww_acquire_ctx ticket; }; static inline u32 radeon_get_ib_value(struct radeon_cs_parser *p, int idx) diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c index 46a27ebf4588..5c681a44cec7 100644 --- a/drivers/gpu/drm/radeon/radeon_cs.c +++ b/drivers/gpu/drm/radeon/radeon_cs.c @@ -182,11 +182,8 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p) } } - p->relocs[i].tv.bo = &p->relocs[i].robj->tbo; - p->relocs[i].tv.num_shared = !r->write_domain; - - radeon_cs_buckets_add(&buckets, &p->relocs[i].tv.head, - priority); + p->relocs[i].shared = !r->write_domain; +
radeon_cs_buckets_add(&buckets, &p->relocs[i].list, priority); } radeon_cs_buckets_get_list(&buckets, &p->validated); @@ -197,7 +194,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p) if (need_mmap_lock) mmap_read_lock(current->mm); - r = radeon_bo_list_validate(p->rdev, &p->ticket, &p->validated, p->ring); + r = radeon_bo_list_validate(p->rdev, &p->exec, &p->validated, p->ring); if (need_mmap_lock) mmap_read_unlock(current->mm); @@ -253,12 +250,11 @@ static int radeon_cs_sync_rings(struct radeon_cs_parser *p) struct radeon_bo_list *reloc; int r; - list_for_each_entry(reloc, &p->validated, tv.head) { + list_for_each_entry(reloc, &p->validated, list) { struct dma_resv *resv; resv = reloc->robj->tbo.base.resv; - r = radeon_sync_resv(p->rdev, &p->ib.sync, resv, - reloc->tv.num_shared); + r = radeon_sync_resv(p->rdev, &p->ib.sync, resv, reloc->shared); if (r) return r; } @@ -275,6 +271,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) s32 priority = 0; INIT_LIST_HEAD(&p->validated); + drm_exec_init(&p->exec, true); if (!cs->num_chunks) { return 0; @@ -396,8 +393,8 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) static int cmp_size_smaller_first(void *priv, const struct list_head *a, const struct list_head *b) { - struct radeon_bo_list *la = list_entry(a, struct radeon_bo_list, tv.head); - struct radeon_bo_list *lb = list_entry(b, struct radeon_bo_list, tv.head); + struct radeon_bo_list *la = list_entry(a, struct radeon_bo_list, list); + struct radeon_bo_list *lb = list_entry(b, struct radeon_bo_list, list); /* Sort A before B if A is smaller. */ if (la->robj->tbo.base.size > lb->robj->tbo.base.size) @@ -416,11 +413,13 @@ static int cmp_size_smaller_first(void *priv, const struct list_head *a, * If error is set than unvalidate buffer, otherwise just free memory * used by parsing context. **/ -static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bool backoff) +static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error) { unsigned i; if (!error) { + struct radeon_bo_list *reloc; + /* Sort the buffer list from the smallest to largest buffer, * which affects the order of buffers in the LRU list. * This assures that the smallest buffers are added first @@ -432,15 +431,17 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo * per frame under memory pressure. */ list_sort(NULL, &parser->validated, cmp_size_smaller_first); - - ttm_eu_fence_buffer_objects(&parser->ticket, - &parser->validated, - &parser->ib.fence->base); - } else if (backoff) { - ttm_eu_backoff_reservation(&parser->ticket, - &parser->validated); + list_for_each_entry(reloc, &parser->validated, list) { + dma_resv_add_fence(reloc->robj->tbo.base.resv, + &parser->ib.fence->base, + reloc->shared ? 
+ DMA_RESV_USAGE_READ : + DMA_RESV_USAGE_WRITE); + } } + drm_exec_fini(&parser->exec); + if (parser->relocs != NULL) { for (i = 0; i < parser->nrelocs; i++) { struct radeon_bo *bo = parser->relocs[i].robj; @@ -692,7 +693,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) r = radeon_cs_parser_init(&parser, data); if (r) { DRM_ERROR("Failed to initialize parser !\n"); - radeon_cs_parser_fini(&parser, r, false); + radeon_cs_parser_fini(&parser, r); up_read(&rdev->exclusive_lock); r = radeon_cs_handle_lockup(rdev, r); return r; @@ -706,7 +707,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) } if (r) { - radeon_cs_parser_fini(&parser, r, false); + radeon_cs_parser_fini(&parser, r); up_read(&rdev->exclusive_lock); r = radeon_cs_handle_lockup(rdev, r); return r; @@ -723,7 +724,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) goto out; } out: - radeon_cs_parser_fini(&parser, r, true); + radeon_cs_parser_fini(&parser, r); up_read(&rdev->exclusive_lock); r = radeon_cs_handle_lockup(rdev, r); return r; diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index 261fcbae88d7..c67b69b3e4d2 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -625,33 +625,41 @@ int radeon_gem_get_tiling_ioctl(struct drm_device *dev, void *data, static void radeon_gem_va_update_vm(struct radeon_device *rdev, struct radeon_bo_va *bo_va) { - struct ttm_validate_buffer tv, *entry; - struct radeon_bo_list *vm_bos; - struct ww_acquire_ctx ticket; + struct radeon_bo_list *vm_bos, *entry; struct list_head list; + struct drm_exec exec; unsigned domain; int r; INIT_LIST_HEAD(&list); - tv.bo = &bo_va->bo->tbo; - tv.num_shared = 1; - list_add(&tv.head, &list); - vm_bos = radeon_vm_get_bos(rdev, bo_va->vm, &list); if (!vm_bos) return; - r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL); - if (r) - goto error_free; + drm_exec_init(&exec, true); + drm_exec_while_not_all_locked(&exec) { + list_for_each_entry(entry, &list, list) { + r = drm_exec_prepare_obj(&exec, &entry->robj->tbo.base, + 1); + drm_exec_break_on_contention(&exec); + if (unlikely(r)) + goto error_cleanup; + } + drm_exec_continue_on_contention(&exec); - list_for_each_entry(entry, &list, head) { - domain = radeon_mem_type_to_domain(entry->bo->resource->mem_type); + r = drm_exec_prepare_obj(&exec, &bo_va->bo->tbo.base, 1); + drm_exec_continue_on_contention(&exec); + if (unlikely(r)) + goto error_cleanup; + } + + list_for_each_entry(entry, &list, list) { + domain = radeon_mem_type_to_domain(entry->robj->tbo.resource->mem_type); /* if anything is swapped out don't swap it in here, just abort and wait for the next CS */ if (domain == RADEON_GEM_DOMAIN_CPU) - goto error_unreserve; + goto error_cleanup; } mutex_lock(&bo_va->vm->mutex); @@ -665,10 +673,8 @@ static void radeon_gem_va_update_vm(struct radeon_device *rdev, error_unlock: mutex_unlock(&bo_va->vm->mutex); -error_unreserve: - ttm_eu_backoff_reservation(&ticket, &list); - -error_free: +error_cleanup: + drm_exec_fini(&exec); kvfree(vm_bos); if (r && r != -ERESTARTSYS) diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 10c0fbd9d2b4..508a6e9a2dca 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -468,23 +468,26 @@ static u64 radeon_bo_get_threshold_for_moves(struct radeon_device *rdev) } int radeon_bo_list_validate(struct radeon_device *rdev, - 
struct ww_acquire_ctx *ticket, + struct drm_exec *exec, struct list_head *head, int ring) { struct ttm_operation_ctx ctx = { true, false }; struct radeon_bo_list *lobj; - struct list_head duplicates; - int r; u64 bytes_moved = 0, initial_bytes_moved; u64 bytes_moved_threshold = radeon_bo_get_threshold_for_moves(rdev); + int r; - INIT_LIST_HEAD(&duplicates); - r = ttm_eu_reserve_buffers(ticket, head, true, &duplicates); - if (unlikely(r != 0)) { - return r; + drm_exec_while_not_all_locked(exec) { + list_for_each_entry(lobj, head, list) { + r = drm_exec_prepare_obj(exec, &lobj->robj->tbo.base, + 1); + drm_exec_break_on_contention(exec); + if (unlikely(r && r != -EALREADY)) + return r; + } } - list_for_each_entry(lobj, head, tv.head) { + list_for_each_entry(lobj, head, list) { struct radeon_bo *bo = lobj->robj; if (!bo->tbo.pin_count) { u32 domain = lobj->preferred_domains; @@ -523,7 +526,6 @@ int radeon_bo_list_validate(struct radeon_device *rdev, domain = lobj->allowed_domains; goto retry; } - ttm_eu_backoff_reservation(ticket, head); return r; } } @@ -531,11 +533,6 @@ int radeon_bo_list_validate(struct radeon_device *rdev, lobj->tiling_flags = bo->tiling_flags; } - list_for_each_entry(lobj, &duplicates, tv.head) { - lobj->gpu_offset = radeon_bo_gpu_offset(lobj->robj); - lobj->tiling_flags = lobj->robj->tiling_flags; - } - return 0; } diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h index 0a6ef49e990a..04c7c17e8287 100644 --- a/drivers/gpu/drm/radeon/radeon_object.h +++ b/drivers/gpu/drm/radeon/radeon_object.h @@ -152,7 +152,7 @@ extern void radeon_bo_force_delete(struct radeon_device *rdev); extern int radeon_bo_init(struct radeon_device *rdev); extern void radeon_bo_fini(struct radeon_device *rdev); extern int radeon_bo_list_validate(struct radeon_device *rdev, - struct ww_acquire_ctx *ticket, + struct drm_exec *exec, struct list_head *head, int ring); extern int radeon_bo_set_tiling_flags(struct radeon_bo *bo, u32 tiling_flags, u32 pitch); diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c index 987cabbf1318..647c4a07d92a 100644 --- a/drivers/gpu/drm/radeon/radeon_vm.c +++ b/drivers/gpu/drm/radeon/radeon_vm.c @@ -142,10 +142,9 @@ struct radeon_bo_list *radeon_vm_get_bos(struct radeon_device *rdev, list[0].robj = vm->page_directory; list[0].preferred_domains = RADEON_GEM_DOMAIN_VRAM; list[0].allowed_domains = RADEON_GEM_DOMAIN_VRAM; - list[0].tv.bo = &vm->page_directory->tbo; - list[0].tv.num_shared = 1; + list[0].shared = true; list[0].tiling_flags = 0; - list_add(&list[0].tv.head, head); + list_add(&list[0].list, head); for (i = 0, idx = 1; i <= vm->max_pde_used; i++) { if (!vm->page_tables[i].bo) continue; list[idx].robj = vm->page_tables[i].bo; list[idx].preferred_domains = RADEON_GEM_DOMAIN_VRAM; list[idx].allowed_domains = RADEON_GEM_DOMAIN_VRAM; - list[idx].tv.bo = &list[idx].robj->tbo; - list[idx].tv.num_shared = 1; + list[idx].shared = true; list[idx].tiling_flags = 0; - list_add(&list[idx++].tv.head, head); + list_add(&list[idx++].list, head); } return list;
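The radeon patch shows the nested-loop variant of the pattern: drm_exec_break_on_contention() exits an inner list walk, the outer drm_exec_while_not_all_locked() pass then restarts from scratch, and with duplicate tracking gone a buffer listed twice surfaces as -EALREADY, which radeon_bo_list_validate() above deliberately tolerates. A sketch of that shape, with an illustrative entry type that is not taken from the patches:

#include <linux/errno.h>
#include <linux/list.h>
#include <drm/drm_exec.h>
#include <drm/drm_gem.h>

struct example_entry {
	struct drm_gem_object *obj;
	struct list_head head;
};

/* Sketch: lock a whole list of objects, tolerating duplicates. */
static int example_lock_list(struct drm_exec *exec, struct list_head *bos)
{
	struct example_entry *e;
	int r = 0;

	drm_exec_while_not_all_locked(exec) {
		list_for_each_entry(e, bos, head) {
			r = drm_exec_prepare_obj(exec, e->obj, 1);
			/* leave the inner walk on contention ... */
			drm_exec_break_on_contention(exec);
			if (unlikely(r && r != -EALREADY))
				return r;
		}
		/* ... and let the outer loop restart the pass */
		drm_exec_continue_on_contention(exec);
	}
	return 0;
}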
From patchwork Tue Feb 28 08:34:05 2023 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 13154524
From: Christian König To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com Subject: [PATCH 8/9] drm/qxl: switch to using drm_exec Date: Tue, 28 Feb 2023 09:34:05 +0100 Message-Id: <20230228083406.1720795-9-christian.koenig@amd.com> In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com> References: <20230228083406.1720795-1-christian.koenig@amd.com>

Just a straightforward conversion without any optimization. Only compile tested for now.

Signed-off-by: Christian König --- drivers/gpu/drm/qxl/qxl_drv.h | 7 ++-- drivers/gpu/drm/qxl/qxl_release.c | 67 ++++++++++++++++--------------- 2 files changed, 38 insertions(+), 36 deletions(-) diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index ea993d7162e8..3e732648b332 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -38,12 +38,12 @@ #include #include +#include #include #include #include #include #include -#include #include #include "qxl_dev.h" @@ -101,7 +101,8 @@ struct qxl_gem { }; struct qxl_bo_list { - struct ttm_validate_buffer tv; + struct qxl_bo *bo; + struct list_head list; }; struct qxl_crtc { @@ -151,7 +152,7 @@ struct qxl_release { struct qxl_bo *release_bo; uint32_t release_offset; uint32_t surface_release_id; - struct ww_acquire_ctx ticket; + struct drm_exec exec; struct list_head bos; }; diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c index 368d26da0d6a..da7cd9cd58f9 100644 --- a/drivers/gpu/drm/qxl/qxl_release.c +++ b/drivers/gpu/drm/qxl/qxl_release.c @@ -121,13 +121,11 @@ qxl_release_free_list(struct qxl_release *release) { while (!list_empty(&release->bos)) { struct qxl_bo_list *entry; - struct qxl_bo *bo; entry = container_of(release->bos.next, - struct qxl_bo_list, tv.head); - bo = to_qxl_bo(entry->tv.bo); - qxl_bo_unref(&bo); - list_del(&entry->tv.head); + struct qxl_bo_list, list); + qxl_bo_unref(&entry->bo); + list_del(&entry->list); kfree(entry); } release->release_bo = NULL; @@ -172,8 +170,8 @@ int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo) { struct qxl_bo_list *entry; - list_for_each_entry(entry, &release->bos, tv.head) { - if (entry->tv.bo == &bo->tbo) + list_for_each_entry(entry, &release->bos, list) { + if (entry->bo == bo) return 0; } @@ -182,9 +180,8 @@ int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo) return -ENOMEM; qxl_bo_ref(bo); - entry->tv.bo = &bo->tbo; - entry->tv.num_shared = 0; - list_add_tail(&entry->tv.head, &release->bos); + entry->bo = bo; + list_add_tail(&entry->list, &release->bos); return 0; } @@ -221,21 +218,27 @@ int qxl_release_reserve_list(struct qxl_release *release, bool no_intr) if (list_is_singular(&release->bos)) return 0; - ret = ttm_eu_reserve_buffers(&release->ticket, &release->bos, - !no_intr, NULL); - if (ret) - return ret; - -
list_for_each_entry(entry, &release->bos, tv.head) { - struct qxl_bo *bo = to_qxl_bo(entry->tv.bo); - - ret = qxl_release_validate_bo(bo); - if (ret) { - ttm_eu_backoff_reservation(&release->ticket, &release->bos); - return ret; + drm_exec_init(&release->exec, !no_intr); + drm_exec_while_not_all_locked(&release->exec) { + list_for_each_entry(entry, &release->bos, list) { + ret = drm_exec_prepare_obj(&release->exec, + &entry->bo->tbo.base, + 1); + drm_exec_break_on_contention(&release->exec); + if (ret) + goto error; } } + + list_for_each_entry(entry, &release->bos, list) { + ret = qxl_release_validate_bo(entry->bo); + if (ret) + goto error; + } return 0; +error: + drm_exec_fini(&release->exec); + return ret; } void qxl_release_backoff_reserve_list(struct qxl_release *release) @@ -245,7 +248,7 @@ void qxl_release_backoff_reserve_list(struct qxl_release *release) if (list_is_singular(&release->bos)) return; - ttm_eu_backoff_reservation(&release->ticket, &release->bos); + drm_exec_fini(&release->exec); } int qxl_alloc_surface_release_reserved(struct qxl_device *qdev, @@ -404,18 +407,18 @@ void qxl_release_unmap(struct qxl_device *qdev, void qxl_release_fence_buffer_objects(struct qxl_release *release) { - struct ttm_buffer_object *bo; struct ttm_device *bdev; - struct ttm_validate_buffer *entry; + struct qxl_bo_list *entry; struct qxl_device *qdev; + struct qxl_bo *bo; /* if only one object on the release its the release itself since these objects are pinned no need to reserve */ if (list_is_singular(&release->bos) || list_empty(&release->bos)) return; - bo = list_first_entry(&release->bos, struct ttm_validate_buffer, head)->bo; - bdev = bo->bdev; + bo = list_first_entry(&release->bos, struct qxl_bo_list, list)->bo; + bdev = bo->tbo.bdev; qdev = container_of(bdev, struct qxl_device, mman.bdev); /* @@ -426,14 +429,12 @@ void qxl_release_fence_buffer_objects(struct qxl_release *release) release->id | 0xf0000000, release->base.seqno); trace_dma_fence_emit(&release->base); - list_for_each_entry(entry, &release->bos, head) { + list_for_each_entry(entry, &release->bos, list) { bo = entry->bo; - dma_resv_add_fence(bo->base.resv, &release->base, + dma_resv_add_fence(bo->tbo.base.resv, &release->base, DMA_RESV_USAGE_READ); - ttm_bo_move_to_lru_tail_unlocked(bo); - dma_resv_unlock(bo->base.resv); + ttm_bo_move_to_lru_tail_unlocked(&bo->tbo); } - ww_acquire_fini(&release->ticket); + drm_exec_fini(&release->exec); } -
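The qxl fencing path above is a good picture of the end of an exec context's life: attach the fence to every held reservation, then a single drm_exec_fini() drops all the locks and the ww ticket, replacing the per-BO dma_resv_unlock() plus ww_acquire_fini() of the old code. A condensed sketch, using the locked-object iterator from the CS patch and the READ usage visible in the diffs:

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>
#include <drm/drm_exec.h>
#include <drm/drm_gem.h>

/* Sketch: publish a fence on everything locked, then release it all. */
static void example_publish(struct drm_exec *exec, struct dma_fence *fence)
{
	struct drm_gem_object *obj;
	unsigned long index;

	drm_exec_for_each_locked_object(exec, index, obj)
		dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);

	drm_exec_fini(exec); /* unlocks all objects, ends the ww ticket */
}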
From patchwork Tue Feb 28 08:34:06 2023 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 13154521
From: Christian König To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: dakr@redhat.com, arunpravin.paneerselvam@amd.com Subject: [PATCH 9/9] drm: move ttm_execbuf_util into vmwgfx Date: Tue, 28 Feb 2023 09:34:06 +0100 Message-Id: <20230228083406.1720795-10-christian.koenig@amd.com> In-Reply-To: <20230228083406.1720795-1-christian.koenig@amd.com> References: <20230228083406.1720795-1-christian.koenig@amd.com>

VMWGFX is the only remaining user of this and should probably be moved over to drm_exec when it starts using GEM as well.
Signed-off-by: Christian König --- drivers/gpu/drm/ttm/Makefile | 4 ++-- drivers/gpu/drm/vmwgfx/Makefile | 2 +- drivers/gpu/drm/{ttm => vmwgfx}/ttm_execbuf_util.c | 7 ++++++- .../drm/ttm => drivers/gpu/drm/vmwgfx}/ttm_execbuf_util.h | 0 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_validation.h | 2 +- 6 files changed, 11 insertions(+), 6 deletions(-) rename drivers/gpu/drm/{ttm => vmwgfx}/ttm_execbuf_util.c (97%) rename {include/drm/ttm => drivers/gpu/drm/vmwgfx}/ttm_execbuf_util.h (100%) diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile index f906b22959cf..b05a8477d0d0 100644 --- a/drivers/gpu/drm/ttm/Makefile +++ b/drivers/gpu/drm/ttm/Makefile @@ -3,8 +3,8 @@ # Makefile for the drm device driver. This driver provides support for the ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ - ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \ - ttm_device.o ttm_sys_manager.o + ttm_range_manager.o ttm_resource.o ttm_pool.o ttm_device.o \ + ttm_sys_manager.o ttm-$(CONFIG_AGP) += ttm_agp_backend.o obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile index e94479d9cd5b..e30e10e25c53 100644 --- a/drivers/gpu/drm/vmwgfx/Makefile +++ b/drivers/gpu/drm/vmwgfx/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \ - vmwgfx_ioctl.o vmwgfx_resource.o vmwgfx_ttm_buffer.o \ + vmwgfx_ioctl.o vmwgfx_resource.o vmwgfx_ttm_buffer.o ttm_execbuf_util.o \ vmwgfx_cmd.o vmwgfx_irq.o vmwgfx_ldu.o \ vmwgfx_overlay.o vmwgfx_gmrid_manager.o vmwgfx_fence.o \ vmwgfx_bo.o vmwgfx_scrn.o vmwgfx_context.o \ diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/vmwgfx/ttm_execbuf_util.c similarity index 97% rename from drivers/gpu/drm/ttm/ttm_execbuf_util.c rename to drivers/gpu/drm/vmwgfx/ttm_execbuf_util.c index f1c60fa80c2d..5e4e28899acd 100644 --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c +++ b/drivers/gpu/drm/vmwgfx/ttm_execbuf_util.c @@ -26,8 +26,13 @@ * **************************************************************************/ -#include #include +#include +#include +#include +#include + +#include "ttm_execbuf_util.h" static void ttm_eu_backoff_reservation_reverse(struct list_head *list, struct ttm_validate_buffer *entry) diff --git a/include/drm/ttm/ttm_execbuf_util.h b/drivers/gpu/drm/vmwgfx/ttm_execbuf_util.h similarity index 100% rename from include/drm/ttm/ttm_execbuf_util.h rename to drivers/gpu/drm/vmwgfx/ttm_execbuf_util.h diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index fb8f0c0642c0..49e3dd8c04ec 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -37,11 +37,11 @@ #include #include -#include #include #include #include +#include "ttm_execbuf_util.h" #include "ttm_object.h" #include "vmwgfx_fence.h" diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h index 240ee0c4ebfd..927fc8afdbfe 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h @@ -32,7 +32,7 @@ #include #include -#include +#include "ttm_execbuf_util.h" #define VMW_RES_DIRTY_NONE 0 #define VMW_RES_DIRTY_SET BIT(0)
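One closing observation that applies across the series: struct drm_exec embeds the ww_acquire_ctx, which the CS patch reaches as p->exec.ticket for tracing and for double checking reservations, so code that used to compare against a standalone ticket keeps working unchanged. A final sketch of that ownership check, assuming only what the diffs above show about the ticket field:

#include <linux/dma-resv.h>
#include <drm/drm_exec.h>
#include <drm/drm_gem.h>

/* Sketch: check that this exec context is the one holding the BO. */
static bool example_is_reserved_by(struct drm_exec *exec,
				   struct drm_gem_object *obj)
{
	return dma_resv_locking_ctx(obj->resv) == &exec->ticket;
}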