From patchwork Wed Nov 1 23:30:53 2023
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com,
    thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com,
    donald.robson@imgtec.com, boris.brezillon@collabora.com,
    christian.koenig@amd.com, faith@gfxstrand.net
Cc: nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next v8 01/12] drm/gpuvm: convert WARN() to drm_WARN() variants
Date: Thu, 2 Nov 2023 00:30:53 +0100
Message-ID: <20231101233113.8059-2-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Use the drm_WARN() and drm_WARN_ON() variants to indicate to drivers the
context, i.e. the &drm_device, the failing VM resides in.

Reviewed-by: Boris Brezillon
Reviewed-by: Thomas Hellström
Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_gpuvm.c            | 32 ++++++++++++++------------
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  3 ++-
 include/drm/drm_gpuvm.h                |  7 ++++++
 3 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 08c088319652..d7367a202fee 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -614,12 +614,12 @@ static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
 static void __drm_gpuva_remove(struct drm_gpuva *va);
 
 static bool
-drm_gpuvm_check_overflow(u64 addr, u64 range)
+drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
 {
 	u64 end;
 
-	return WARN(check_add_overflow(addr, range, &end),
-		    "GPUVA address limited to %zu bytes.\n", sizeof(end));
+	return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, &end),
+			"GPUVA address limited to %zu bytes.\n", sizeof(end));
 }
 
 static bool
@@ -647,7 +647,7 @@ static bool
 drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
 		      u64 addr, u64 range)
 {
-	return !drm_gpuvm_check_overflow(addr, range) &&
+	return !drm_gpuvm_check_overflow(gpuvm, addr, range) &&
 		drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
 		!drm_gpuvm_in_kernel_node(gpuvm, addr, range);
 }
@@ -656,6 +656,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
  * drm_gpuvm_init() - initialize a &drm_gpuvm
  * @gpuvm: pointer to the &drm_gpuvm to initialize
  * @name: the name of the GPU VA space
+ * @drm: the &drm_device this VM resides in
  * @start_offset: the start offset of the GPU VA space
  * @range: the size of the GPU VA space
  * @reserve_offset: the start of the kernel reserved GPU VA area
@@ -668,8 +669,8 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
  * &name is expected to be managed by the surrounding driver structures.
  */
 void
-drm_gpuvm_init(struct drm_gpuvm *gpuvm,
-	       const char *name,
+drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+	       struct drm_device *drm,
 	       u64 start_offset, u64 range,
 	       u64 reserve_offset, u64 reserve_range,
 	       const struct drm_gpuvm_ops *ops)
@@ -677,20 +678,20 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
 	gpuvm->rb.tree = RB_ROOT_CACHED;
 	INIT_LIST_HEAD(&gpuvm->rb.list);
 
-	drm_gpuvm_check_overflow(start_offset, range);
-	gpuvm->mm_start = start_offset;
-	gpuvm->mm_range = range;
-
 	gpuvm->name = name ? name : "unknown";
 	gpuvm->ops = ops;
+	gpuvm->drm = drm;
 
-	memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
+	drm_gpuvm_check_overflow(gpuvm, start_offset, range);
+	gpuvm->mm_start = start_offset;
+	gpuvm->mm_range = range;
 
+	memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
 	if (reserve_range) {
 		gpuvm->kernel_alloc_node.va.addr = reserve_offset;
 		gpuvm->kernel_alloc_node.va.range = reserve_range;
 
-		if (likely(!drm_gpuvm_check_overflow(reserve_offset,
+		if (likely(!drm_gpuvm_check_overflow(gpuvm, reserve_offset,
 						     reserve_range)))
 			__drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
 	}
@@ -712,8 +713,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
 	if (gpuvm->kernel_alloc_node.va.range)
 		__drm_gpuva_remove(&gpuvm->kernel_alloc_node);
 
-	WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
-	     "GPUVA tree is not empty, potentially leaking memory.");
+	drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
+		 "GPUVA tree is not empty, potentially leaking memory.\n");
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
 
@@ -795,7 +796,8 @@ drm_gpuva_remove(struct drm_gpuva *va)
 	struct drm_gpuvm *gpuvm = va->vm;
 
 	if (unlikely(va == &gpuvm->kernel_alloc_node)) {
-		WARN(1, "Can't destroy kernel reserved node.\n");
+		drm_WARN(gpuvm->drm, 1,
+			 "Can't destroy kernel reserved node.\n");
 		return;
 	}
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 5cf892c50f43..aaf5d28bd587 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1808,6 +1808,7 @@ int
 nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		  u64 kernel_managed_addr, u64 kernel_managed_size)
 {
+	struct drm_device *drm = cli->drm->dev;
 	int ret;
 	u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size;
 
@@ -1836,7 +1837,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 	uvmm->kernel_managed_addr = kernel_managed_addr;
 	uvmm->kernel_managed_size = kernel_managed_size;
 
-	drm_gpuvm_init(&uvmm->base, cli->name,
+	drm_gpuvm_init(&uvmm->base, cli->name, drm,
 		       NOUVEAU_VA_SPACE_START,
 		       NOUVEAU_VA_SPACE_END,
 		       kernel_managed_addr, kernel_managed_size,
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index bdfafc4a7705..687fd5893624 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -29,6 +29,7 @@
 #include <linux/list.h>
 #include <linux/rbtree.h>
 #include <linux/types.h>
 
+#include <drm/drm_device.h>
 #include <drm/drm_gem.h>
 
 struct drm_gpuvm;
@@ -201,6 +202,11 @@ struct drm_gpuvm {
 	 */
 	const char *name;
 
+	/**
+	 * @drm: the &drm_device this VM lives in
+	 */
+	struct drm_device *drm;
+
 	/**
 	 * @mm_start: start of the VA space
 	 */
@@ -241,6 +247,7 @@ struct drm_gpuvm {
 };
 
 void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+		    struct drm_device *drm,
 		    u64 start_offset, u64 range,
 		    u64 reserve_offset, u64 reserve_range,
 		    const struct drm_gpuvm_ops *ops);
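What the conversion buys: drm_WARN() takes a &drm_device and prefixes the
warning with the device name, so on a multi-GPU system the splat identifies
which device's VM failed. A minimal sketch of the resulting call pattern,
assuming a made-up helper name (not part of the series):

	/* example_check() is hypothetical; the pattern matches the patch:
	 * plain WARN() logs with no device context, drm_WARN() additionally
	 * names the device the VM belongs to.
	 */
	static bool
	example_check(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
	{
		u64 end;

		return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, &end),
				"GPUVA address limited to %zu bytes.\n", sizeof(end));
	}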
From patchwork Wed Nov 1 23:30:54 2023
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v8 02/12] drm/gpuvm: don't always WARN in drm_gpuvm_check_overflow()
Date: Thu, 2 Nov 2023 00:30:54 +0100
Message-ID: <20231101233113.8059-3-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Don't always WARN in drm_gpuvm_check_overflow() and separate it into a
drm_gpuvm_check_overflow() and a dedicated
drm_gpuvm_warn_check_overflow() variant.

This avoids printing warnings due to invalid userspace requests.

Signed-off-by: Danilo Krummrich
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/drm_gpuvm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index d7367a202fee..445767f8fbc4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -614,12 +614,18 @@ static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
 static void __drm_gpuva_remove(struct drm_gpuva *va);
 
 static bool
-drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
+drm_gpuvm_check_overflow(u64 addr, u64 range)
 {
 	u64 end;
 
-	return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, &end),
-			"GPUVA address limited to %zu bytes.\n", sizeof(end));
+	return check_add_overflow(addr, range, &end);
+}
+
+static bool
+drm_gpuvm_warn_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
+{
+	return drm_WARN(gpuvm->drm, drm_gpuvm_check_overflow(addr, range),
+			"GPUVA address limited to %zu bytes.\n", sizeof(addr));
 }
 
 static bool
@@ -647,7 +653,7 @@ static bool
 drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
 		      u64 addr, u64 range)
 {
-	return !drm_gpuvm_check_overflow(gpuvm, addr, range) &&
+	return !drm_gpuvm_check_overflow(addr, range) &&
 		drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
 		!drm_gpuvm_in_kernel_node(gpuvm, addr, range);
 }
@@ -682,7 +688,7 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 	gpuvm->ops = ops;
 	gpuvm->drm = drm;
 
-	drm_gpuvm_check_overflow(gpuvm, start_offset, range);
+	drm_gpuvm_warn_check_overflow(gpuvm, start_offset, range);
 	gpuvm->mm_start = start_offset;
 	gpuvm->mm_range = range;
 
@@ -691,8 +697,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 		gpuvm->kernel_alloc_node.va.addr = reserve_offset;
 		gpuvm->kernel_alloc_node.va.range = reserve_range;
 
-		if (likely(!drm_gpuvm_check_overflow(gpuvm, reserve_offset,
-						     reserve_range)))
+		if (likely(!drm_gpuvm_warn_check_overflow(gpuvm, reserve_offset,
+							  reserve_range)))
 			__drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
 	}
 }
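The point of the split: an overflowing addr/range supplied by userspace is
expected input, not a kernel bug, so it should fail silently with -EINVAL;
only driver-internal setup paths (like the kernel reserved node set up in
drm_gpuvm_init()) should warn. A conceptual sketch with made-up caller
names (both helpers are still static here; drivers get a silent check via
drm_gpuvm_range_valid() once the next patch exports it):

	static int
	example_handle_user_request(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
	{
		/* Silent variant: bad user input must not trigger a splat. */
		if (drm_gpuvm_check_overflow(addr, range))
			return -EINVAL;

		return 0;
	}

	static void
	example_driver_setup(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
	{
		/* Warning variant: reaching this with a bad range is a bug. */
		drm_gpuvm_warn_check_overflow(gpuvm, addr, range);
	}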
From patchwork Wed Nov 1 23:30:55 2023
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v8 03/12] drm/gpuvm: export drm_gpuvm_range_valid()
Date: Thu, 2 Nov 2023 00:30:55 +0100
Message-ID: <20231101233113.8059-4-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Drivers may use this function to validate userspace requests in advance,
hence export it.

Signed-off-by: Danilo Krummrich
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/drm_gpuvm.c | 14 +++++++++++++-
 include/drm/drm_gpuvm.h     |  1 +
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 445767f8fbc4..2669f9bbc377 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -649,7 +649,18 @@ drm_gpuvm_in_kernel_node(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
 	return krange && addr < kend && kstart < end;
 }
 
-static bool
+/**
+ * drm_gpuvm_range_valid() - checks whether the given range is valid for the
+ * given &drm_gpuvm
+ * @gpuvm: the GPUVM to check the range for
+ * @addr: the base address
+ * @range: the range starting from the base address
+ *
+ * Checks whether the range is within the GPUVM's managed boundaries.
+ *
+ * Returns: true for a valid range, false otherwise
+ */
+bool
 drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
 		      u64 addr, u64 range)
 {
@@ -657,6 +668,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
 		drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
 		!drm_gpuvm_in_kernel_node(gpuvm, addr, range);
 }
+EXPORT_SYMBOL_GPL(drm_gpuvm_range_valid);
 
 /**
  * drm_gpuvm_init() - initialize a &drm_gpuvm
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 687fd5893624..13eac6f70061 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -253,6 +253,7 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 		    const struct drm_gpuvm_ops *ops);
 
 void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
+bool drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
 bool drm_gpuvm_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
 
 static inline struct drm_gpuva *
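With drm_gpuvm_range_valid() exported, a driver's ioctl path can reject an
invalid VA range before building any map/unmap operations. A sketch with a
made-up helper name (the next patch applies exactly this pattern to nouveau):

	static int
	example_validate_va(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
	{
		/* Driver-specific constraints, e.g. page alignment ... */
		if ((addr | range) & ~PAGE_MASK)
			return -EINVAL;

		/* ... plus the generic checks: no u64 overflow, inside the
		 * managed VA space, no overlap with the kernel node.
		 */
		if (!drm_gpuvm_range_valid(gpuvm, addr, range))
			return -EINVAL;

		return 0;
	}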
From patchwork Wed Nov 1 23:30:56 2023
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v8 04/12] drm/nouveau: make use of drm_gpuvm_range_valid()
Date: Thu, 2 Nov 2023 00:30:56 +0100
Message-ID: <20231101233113.8059-5-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Use drm_gpuvm_range_valid() in order to validate userspace requests.

Signed-off-by: Danilo Krummrich
Reviewed-by: Dave Airlie
---
 drivers/gpu/drm/nouveau/nouveau_uvmm.c | 17 +----------------
 drivers/gpu/drm/nouveau/nouveau_uvmm.h |  3 ---
 2 files changed, 1 insertion(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index aaf5d28bd587..641a911528db 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -929,25 +929,13 @@ nouveau_uvmm_sm_unmap_cleanup(struct nouveau_uvmm *uvmm,
 static int
 nouveau_uvmm_validate_range(struct nouveau_uvmm *uvmm, u64 addr, u64 range)
 {
-	u64 end = addr + range;
-	u64 kernel_managed_end = uvmm->kernel_managed_addr +
-				 uvmm->kernel_managed_size;
-
 	if (addr & ~PAGE_MASK)
 		return -EINVAL;
 
 	if (range & ~PAGE_MASK)
 		return -EINVAL;
 
-	if (end <= addr)
-		return -EINVAL;
-
-	if (addr < NOUVEAU_VA_SPACE_START ||
-	    end > NOUVEAU_VA_SPACE_END)
-		return -EINVAL;
-
-	if (addr < kernel_managed_end &&
-	    end > uvmm->kernel_managed_addr)
+	if (!drm_gpuvm_range_valid(&uvmm->base, addr, range))
 		return -EINVAL;
 
 	return 0;
@@ -1834,9 +1822,6 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		goto out_unlock;
 	}
 
-	uvmm->kernel_managed_addr = kernel_managed_addr;
-	uvmm->kernel_managed_size = kernel_managed_size;
-
 	drm_gpuvm_init(&uvmm->base, cli->name, drm,
 		       NOUVEAU_VA_SPACE_START,
 		       NOUVEAU_VA_SPACE_END,
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
index a308c59760a5..06a0c36de392 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
@@ -14,9 +14,6 @@ struct nouveau_uvmm {
 	struct mutex mutex;
 	struct dma_resv resv;
 
-	u64 kernel_managed_addr;
-	u64 kernel_managed_size;
-
 	bool disabled;
 };
From patchwork Wed Nov 1 23:30:57 2023
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v8 05/12] drm/gpuvm: add common dma-resv per struct drm_gpuvm
Date: Thu, 2 Nov 2023 00:30:57 +0100
Message-ID: <20231101233113.8059-6-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.

Reviewed-by: Boris Brezillon
Reviewed-by: Thomas Hellström
Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_gpuvm.c            | 53 ++++++++++++++++++++++++++
 drivers/gpu/drm/nouveau/nouveau_uvmm.c | 13 ++++++-
 include/drm/drm_gpuvm.h                | 33 ++++++++++++++++
 3 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 2669f9bbc377..af5805e4d7c9 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -61,6 +61,15 @@
  * contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva
  * entries from within dma-fence signalling critical sections it is enough to
  * pre-allocate the &drm_gpuva structures.
+ *
+ * &drm_gem_objects which are private to a single VM can share a common
+ * &dma_resv in order to improve locking efficiency (e.g. with &drm_exec).
+ * For this purpose drivers must pass a &drm_gem_object to drm_gpuvm_init(), in
+ * the following called 'resv object', which serves as the container of the
+ * GPUVM's shared &dma_resv. This resv object can be a driver specific
+ * &drm_gem_object, such as the &drm_gem_object containing the root page table,
+ * but it can also be a 'dummy' object, which can be allocated with
+ * drm_gpuvm_resv_object_alloc().
  */
 
 /**
@@ -670,11 +679,49 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_range_valid);
 
+static void
+drm_gpuvm_gem_object_free(struct drm_gem_object *obj)
+{
+	drm_gem_object_release(obj);
+	kfree(obj);
+}
+
+static const struct drm_gem_object_funcs drm_gpuvm_object_funcs = {
+	.free = drm_gpuvm_gem_object_free,
+};
+
+/**
+ * drm_gpuvm_resv_object_alloc() - allocate a dummy &drm_gem_object
+ * @drm: the drivers &drm_device
+ *
+ * Allocates a dummy &drm_gem_object which can be passed to drm_gpuvm_init() in
+ * order to serve as root GEM object providing the &drm_resv shared across
+ * &drm_gem_objects local to a single GPUVM.
+ *
+ * Returns: the &drm_gem_object on success, NULL on failure
+ */
+struct drm_gem_object *
+drm_gpuvm_resv_object_alloc(struct drm_device *drm)
+{
+	struct drm_gem_object *obj;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return NULL;
+
+	obj->funcs = &drm_gpuvm_object_funcs;
+	drm_gem_private_object_init(drm, obj, 0);
+
+	return obj;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_resv_object_alloc);
+
 /**
  * drm_gpuvm_init() - initialize a &drm_gpuvm
  * @gpuvm: pointer to the &drm_gpuvm to initialize
  * @name: the name of the GPU VA space
  * @drm: the &drm_device this VM resides in
+ * @r_obj: the resv &drm_gem_object providing the GPUVM's common &dma_resv
  * @start_offset: the start offset of the GPU VA space
  * @range: the size of the GPU VA space
  * @reserve_offset: the start of the kernel reserved GPU VA area
@@ -689,6 +736,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_range_valid);
 void
 drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 	       struct drm_device *drm,
+	       struct drm_gem_object *r_obj,
 	       u64 start_offset, u64 range,
 	       u64 reserve_offset, u64 reserve_range,
 	       const struct drm_gpuvm_ops *ops)
@@ -699,6 +747,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 	gpuvm->name = name ? name : "unknown";
 	gpuvm->ops = ops;
 	gpuvm->drm = drm;
+	gpuvm->r_obj = r_obj;
+
+	drm_gem_object_get(r_obj);
 
 	drm_gpuvm_warn_check_overflow(gpuvm, start_offset, range);
 	gpuvm->mm_start = start_offset;
@@ -733,6 +784,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
 
 	drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
 		 "GPUVA tree is not empty, potentially leaking memory.\n");
+
+	drm_gem_object_put(gpuvm->r_obj);
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 641a911528db..f74bf30bc683 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1797,8 +1797,9 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		  u64 kernel_managed_addr, u64 kernel_managed_size)
 {
 	struct drm_device *drm = cli->drm->dev;
-	int ret;
+	struct drm_gem_object *r_obj;
 	u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size;
+	int ret;
 
 	mutex_init(&uvmm->mutex);
 	dma_resv_init(&uvmm->resv);
@@ -1822,11 +1823,19 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		goto out_unlock;
 	}
 
-	drm_gpuvm_init(&uvmm->base, cli->name, drm,
+	r_obj = drm_gpuvm_resv_object_alloc(drm);
+	if (!r_obj) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	drm_gpuvm_init(&uvmm->base, cli->name, drm, r_obj,
 		       NOUVEAU_VA_SPACE_START,
 		       NOUVEAU_VA_SPACE_END,
 		       kernel_managed_addr, kernel_managed_size,
 		       NULL);
+	/* GPUVM takes care from here on. */
+	drm_gem_object_put(r_obj);
 
 	ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
 			    cli->vmm.vmm.object.oclass, RAW,
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 13eac6f70061..ff3377cbfe52 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -244,10 +244,16 @@ struct drm_gpuvm {
 	 * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
 	 */
 	const struct drm_gpuvm_ops *ops;
+
+	/**
+	 * @r_obj: Resv GEM object; representing the GPUVM's common &dma_resv.
+	 */
+	struct drm_gem_object *r_obj;
 };
 
 void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 		    struct drm_device *drm,
+		    struct drm_gem_object *r_obj,
 		    u64 start_offset, u64 range,
 		    u64 reserve_offset, u64 reserve_range,
 		    const struct drm_gpuvm_ops *ops);
@@ -256,6 +262,33 @@ void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
 bool drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
 bool drm_gpuvm_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
 
+struct drm_gem_object *
+drm_gpuvm_resv_object_alloc(struct drm_device *drm);
+
+/**
+ * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
+ * @gpuvm__: the &drm_gpuvm
+ *
+ * Returns: a pointer to the &drm_gpuvm's shared &dma_resv
+ */
+#define drm_gpuvm_resv(gpuvm__) ((gpuvm__)->r_obj->resv)
+
+/**
+ * drm_gpuvm_resv_obj() - returns the &drm_gem_object holding the &drm_gpuvm's
+ * &dma_resv
+ * @gpuvm__: the &drm_gpuvm
+ *
+ * Returns: a pointer to the &drm_gem_object holding the &drm_gpuvm's shared
+ * &dma_resv
+ */
+#define drm_gpuvm_resv_obj(gpuvm__) ((gpuvm__)->r_obj)
+
+#define drm_gpuvm_resv_held(gpuvm__) \
+	dma_resv_held(drm_gpuvm_resv(gpuvm__))
+
+#define drm_gpuvm_resv_assert_held(gpuvm__) \
+	dma_resv_assert_held(drm_gpuvm_resv(gpuvm__))
+
 static inline struct drm_gpuva *
 __drm_gpuva_next(struct drm_gpuva *va)
 {
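Typical driver-side init with a dummy resv object, sketched with made-up
variable and ops names (this mirrors the nouveau hunk above):
drm_gpuvm_init() takes its own reference on the resv object, so the driver
drops its allocation reference right after init and lets
drm_gpuvm_destroy() release the last one. Afterwards all VM-private GEM
objects can be locked through the single shared resv via drm_gpuvm_resv().

	struct drm_gem_object *r_obj;

	r_obj = drm_gpuvm_resv_object_alloc(drm);
	if (!r_obj)
		return -ENOMEM;

	drm_gpuvm_init(&vm->base, "example", drm, r_obj,
		       va_start, va_range,
		       kernel_reserved_addr, kernel_reserved_range,
		       &example_vm_ops);

	/* GPUVM holds its own reference now; drop the allocation one. */
	drm_gem_object_put(r_obj);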
From patchwork Wed Nov 1 23:30:58 2023
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v8 06/12] drm/nouveau: make use of the GPUVM's shared dma-resv
Date: Thu, 2 Nov 2023 00:30:58 +0100
Message-ID: <20231101233113.8059-7-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

DRM GEM objects private to a single GPUVM can use a shared dma-resv.
Make use of the shared dma-resv of GPUVM rather than a driver specific
one.

The shared dma-resv originates from a "root" GEM object serving as
container for the dma-resv to make it compatible with drm_exec.

In order to make sure the object providing the shared dma-resv can't be
freed up before the objects making use of it, let every such GEM object
take a reference on it.

Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/nouveau/nouveau_bo.c   | 11 +++++++++--
 drivers/gpu/drm/nouveau/nouveau_bo.h   |  5 +++++
 drivers/gpu/drm/nouveau/nouveau_gem.c  | 10 ++++++++--
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  7 ++-----
 drivers/gpu/drm/nouveau/nouveau_uvmm.h |  1 -
 5 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 0f3bd187ede6..7afad86da64b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -148,10 +148,17 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
 	 * If nouveau_bo_new() allocated this buffer, the GEM object was never
 	 * initialized, so don't attempt to release it.
 	 */
-	if (bo->base.dev)
+	if (bo->base.dev) {
+		/* Gem objects not being shared with other VMs get their
+		 * dma_resv from a root GEM object.
+		 */
+		if (nvbo->no_share)
+			drm_gem_object_put(nvbo->r_obj);
+
 		drm_gem_object_release(&bo->base);
-	else
+	} else {
 		dma_resv_fini(&bo->base._resv);
+	}
 
 	kfree(nvbo);
 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 07f671cf895e..70c551921a9e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -26,6 +26,11 @@ struct nouveau_bo {
 	struct list_head entry;
 	int pbbo_index;
 	bool validate_mapped;
+
+	/* Root GEM object we derive the dma_resv of in case this BO is not
+	 * shared between VMs.
+	 */
+	struct drm_gem_object *r_obj;
 	bool no_share;
 
 	/* GPU address space is independent of CPU word size */
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index a0d303e5ce3d..49c2bcbef129 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -111,7 +111,8 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv)
 	if (vmm->vmm.object.oclass < NVIF_CLASS_VMM_NV50)
 		return 0;
 
-	if (nvbo->no_share && uvmm && &uvmm->resv != nvbo->bo.base.resv)
+	if (nvbo->no_share && uvmm &&
+	    drm_gpuvm_resv(&uvmm->base) != nvbo->bo.base.resv)
 		return -EPERM;
 
 	ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL);
@@ -245,7 +246,7 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
 		if (unlikely(!uvmm))
 			return -EINVAL;
 
-		resv = &uvmm->resv;
+		resv = drm_gpuvm_resv(&uvmm->base);
 	}
 
 	if (!(domain & (NOUVEAU_GEM_DOMAIN_VRAM | NOUVEAU_GEM_DOMAIN_GART)))
@@ -288,6 +289,11 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
 	if (drm->client.device.info.family >= NV_DEVICE_INFO_V0_TESLA)
 		nvbo->valid_domains &= domain;
 
+	if (nvbo->no_share) {
+		nvbo->r_obj = drm_gpuvm_resv_obj(&uvmm->base);
+		drm_gem_object_get(nvbo->r_obj);
+	}
+
 	*pnvbo = nvbo;
 	return 0;
 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index f74bf30bc683..8977a518de96 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1802,7 +1802,6 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 	int ret;
 
 	mutex_init(&uvmm->mutex);
-	dma_resv_init(&uvmm->resv);
 	mt_init_flags(&uvmm->region_mt, MT_FLAGS_LOCK_EXTERN);
 	mt_set_external_lock(&uvmm->region_mt, &uvmm->mutex);
 
@@ -1842,14 +1841,14 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 			    kernel_managed_addr, kernel_managed_size,
 			    NULL, 0, &cli->uvmm.vmm.vmm);
 	if (ret)
-		goto out_free_gpuva_mgr;
+		goto out_gpuvm_fini;
 
 	cli->uvmm.vmm.cli = cli;
 	mutex_unlock(&cli->mutex);
 
 	return 0;
 
-out_free_gpuva_mgr:
+out_gpuvm_fini:
 	drm_gpuvm_destroy(&uvmm->base);
 out_unlock:
 	mutex_unlock(&cli->mutex);
@@ -1907,6 +1906,4 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
 	nouveau_vmm_fini(&uvmm->vmm);
 	drm_gpuvm_destroy(&uvmm->base);
 	mutex_unlock(&cli->mutex);
-
-	dma_resv_fini(&uvmm->resv);
 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
index 06a0c36de392..22607270fae0 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
@@ -12,7 +12,6 @@ struct nouveau_uvmm {
 	struct nouveau_vmm vmm;
 	struct maple_tree region_mt;
 	struct mutex mutex;
-	struct dma_resv resv;
 
 	bool disabled;
 };
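The lifetime rule these hunks implement, as a simplified sketch with
made-up names (struct example_bo is hypothetical; real drivers, like the
nouveau_gem_new() hunk above, pass the shared resv in at BO creation time
rather than assigning it afterwards): a BO that borrows the VM's dma_resv
must pin the GEM object embedding that resv, because the core keeps
dereferencing bo->base.resv until the BO itself is destroyed.

	static void
	example_bo_adopt_vm_resv(struct example_bo *bo, struct drm_gpuvm *gpuvm)
	{
		/* Share the VM's reservation object ... */
		bo->base.resv = drm_gpuvm_resv(gpuvm);

		/* ... and make sure its container outlives this BO. */
		bo->r_obj = drm_gpuvm_resv_obj(gpuvm);
		drm_gem_object_get(bo->r_obj);
	}

The counterpart, drm_gem_object_put(bo->r_obj), then belongs in the BO
destructor, as the nouveau_bo_del_ttm() hunk shows.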
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13443128 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BE6ADC4332F for ; Wed, 1 Nov 2023 23:32:02 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3245910E7ED; Wed, 1 Nov 2023 23:31:57 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1D68F10E7E1 for ; Wed, 1 Nov 2023 23:31:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1698881511; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RXgqRC5/5tCXBERwm7aEPSo0YVJhEAXXDD4zWhliPJ8=; b=dOvcuX5cAntE4iMcEee6QOhn1zn7mP9YI4dhs6iB6nBvx39wpBO1/7P0K4+0oYIe1NuSop 3dR1MmGoBqI5HSHpV9nWWa4YxFjoqZkq2GPL9ogru1pIF8exWLxQg+OwaCVzAap4WRloC1 I0b93k70LTWvJQXQNeRAnRMELCpYF5Q= Received: from mail-lf1-f69.google.com (mail-lf1-f69.google.com [209.85.167.69]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-317-jOuNeynAOzWxTxokKwFQPA-1; Wed, 01 Nov 2023 19:31:46 -0400 X-MC-Unique: jOuNeynAOzWxTxokKwFQPA-1 Received: by mail-lf1-f69.google.com with SMTP id 2adb3069b0e04-507cafb69e8so249570e87.1 for ; Wed, 01 Nov 2023 16:31:46 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698881505; x=1699486305; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RXgqRC5/5tCXBERwm7aEPSo0YVJhEAXXDD4zWhliPJ8=; b=k6Y3+k2tALmrF37froRrRLbasLz0RerxZKhib7jc7NS95ZIkIAjXCo00eOahhuGWRa CYPjMdvdcsaFkozQc0k7kUwgrBvCIdu8i9Ug0gkZo1tA6NVZDwcld7pnjo92t6z82l/0 gpHOheZJJp7oFKWx7ob3ploOrA6YCidYrPEYlst0IH7mxyGVT8pmQrCEkm/oEJ6bgc+r YIJjUFiLfam52uP/Qnv7iytMcXoKEYqq69a04DGptsqHTGedqCBdww91iwTARRx6JDag /JEYiYBOapdVy3LJp0+xy7vgyO5BQrKpL77xrdeoBx7r+afrE7XfYa+dFatQTRRBcujD eQ0g== X-Gm-Message-State: AOJu0YxqKJ0MqG9isOVB82aTKYft0wm3zjM4LVcsFmLBi3ZVWNpAINsU XNV0vw0UhVJICR5DKZQ/0jUhNtFebBeEjYYpeInTjGrQVEPbe9Zy6CyGkPb9oycsGeabUhT+LiD xG+K+kDy0Wr5m1zsI04HGu6Gb+TRH X-Received: by 2002:ac2:5e85:0:b0:509:451f:72f with SMTP id b5-20020ac25e85000000b00509451f072fmr1051019lfq.48.1698881505346; Wed, 01 Nov 2023 16:31:45 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEZQnFzK4yiK/j0ulyfKXEQuNYco5ChZGRIZtqU1f5JUsGCSpUAFPknC105jHZDcS5M1tvK/w== X-Received: by 2002:ac2:5e85:0:b0:509:451f:72f with SMTP id b5-20020ac25e85000000b00509451f072fmr1051000lfq.48.1698881505032; Wed, 01 Nov 2023 16:31:45 -0700 (PDT) Received: from cassiopeiae.. 
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v8 07/12] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
Date: Thu, 2 Nov 2023 00:30:59 +0100
Message-ID: <20231101233113.8059-8-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Introduce flags for struct drm_gpuvm; this is required by subsequent
commits.

Reviewed-by: Boris Brezillon
Reviewed-by: Thomas Hellström
Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_gpuvm.c            |  3 +++
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  2 +-
 include/drm/drm_gpuvm.h                | 16 ++++++++++++++++
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index af5805e4d7c9..53e2c406fb04 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -720,6 +720,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_resv_object_alloc);
  * drm_gpuvm_init() - initialize a &drm_gpuvm
  * @gpuvm: pointer to the &drm_gpuvm to initialize
  * @name: the name of the GPU VA space
+ * @flags: the &drm_gpuvm_flags for this GPUVM
  * @drm: the &drm_device this VM resides in
  * @r_obj: the resv &drm_gem_object providing the GPUVM's common &dma_resv
  * @start_offset: the start offset of the GPU VA space
@@ -735,6 +736,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_resv_object_alloc);
  */
 void
 drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+	       enum drm_gpuvm_flags flags,
 	       struct drm_device *drm,
 	       struct drm_gem_object *r_obj,
 	       u64 start_offset, u64 range,
@@ -745,6 +747,7 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 	INIT_LIST_HEAD(&gpuvm->rb.list);
 
 	gpuvm->name = name ? name : "unknown";
+	gpuvm->flags = flags;
 	gpuvm->ops = ops;
 	gpuvm->drm = drm;
 	gpuvm->r_obj = r_obj;
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 8977a518de96..f765e3835306 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1828,7 +1828,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		goto out_unlock;
 	}
 
-	drm_gpuvm_init(&uvmm->base, cli->name, drm, r_obj,
+	drm_gpuvm_init(&uvmm->base, cli->name, 0, drm, r_obj,
 		       NOUVEAU_VA_SPACE_START,
 		       NOUVEAU_VA_SPACE_END,
 		       kernel_managed_addr, kernel_managed_size,
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index ff3377cbfe52..0c2e24155a93 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -184,6 +184,16 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
 	return va->flags & DRM_GPUVA_INVALIDATED;
 }
 
+/**
+ * enum drm_gpuvm_flags - flags for struct drm_gpuvm
+ */
+enum drm_gpuvm_flags {
+	/**
+	 * @DRM_GPUVM_USERBITS: user defined bits
+	 */
+	DRM_GPUVM_USERBITS = BIT(0),
+};
+
 /**
  * struct drm_gpuvm - DRM GPU VA Manager
  *
@@ -202,6 +212,11 @@ struct drm_gpuvm {
 	 */
 	const char *name;
 
+	/**
+	 * @flags: the &drm_gpuvm_flags of this GPUVM
+	 */
+	enum drm_gpuvm_flags flags;
+
 	/**
 	 * @drm: the &drm_device this VM lives in
 	 */
@@ -252,6 +267,7 @@ struct drm_gpuvm {
 };
 
 void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+		    enum drm_gpuvm_flags flags,
 		    struct drm_device *drm,
 		    struct drm_gem_object *r_obj,
 		    u64 start_offset, u64 range,
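Nothing consumes the flags yet (nouveau passes 0); DRM_GPUVM_USERBITS
marks the first bit available for driver-private use on top of whatever
core flags later patches add below it. A sketch of how a driver could
stack its own flags, with made-up names:

	enum example_vm_flags {
		/* First bit the core leaves to drivers. */
		EXAMPLE_VM_SPARSE   = DRM_GPUVM_USERBITS,
		EXAMPLE_VM_LOW_PRIO = DRM_GPUVM_USERBITS << 1,
	};

	drm_gpuvm_init(&vm->base, "example", EXAMPLE_VM_SPARSE, drm, r_obj,
		       va_start, va_range, 0, 0, &example_vm_ops);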
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith@gfxstrand.net
Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next v8 08/12] drm/nouveau: separately allocate struct nouveau_uvmm
Date: Thu, 2 Nov 2023 00:31:00 +0100
Message-ID: <20231101233113.8059-9-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Allocate struct nouveau_uvmm separately in preparation for subsequent commits introducing reference counting for struct drm_gpuvm. While at it, get rid of nouveau_uvmm_init() as an indirection of nouveau_uvmm_ioctl_vm_init() and perform some minor cleanups.
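For orientation, the resulting lifetime is roughly the following (a hedged sketch distilled from the diff below; locking details of the VA-space setup and all error handling are trimmed):

    /* VM_INIT ioctl: the uvmm object is now allocated on demand and
     * published under cli->mutex, instead of being embedded in
     * struct nouveau_cli. */
    struct nouveau_uvmm *uvmm = kzalloc(sizeof(*uvmm), GFP_KERNEL);

    mutex_lock(&cli->mutex);
    /* ... initialize the VA space ... */
    cli->uvmm.ptr = uvmm;
    mutex_unlock(&cli->mutex);

    /* Lookup reduces to a pointer dereference; NULL means VM_INIT was
     * never called for this client. */
    struct nouveau_uvmm *lookup = cli ? cli->uvmm.ptr : NULL;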
Signed-off-by: Danilo Krummrich Reviewed-by: Dave Airlie --- drivers/gpu/drm/nouveau/nouveau_drm.c | 5 +- drivers/gpu/drm/nouveau/nouveau_drv.h | 10 ++-- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 63 +++++++++++++------------- drivers/gpu/drm/nouveau/nouveau_uvmm.h | 4 -- 4 files changed, 40 insertions(+), 42 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index 50589f982d1a..f603eaef1560 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -190,6 +190,8 @@ nouveau_cli_work_queue(struct nouveau_cli *cli, struct dma_fence *fence, static void nouveau_cli_fini(struct nouveau_cli *cli) { + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm_locked(cli); + /* All our channels are dead now, which means all the fences they * own are signalled, and all callback functions have been called. * @@ -199,7 +201,8 @@ nouveau_cli_fini(struct nouveau_cli *cli) WARN_ON(!list_empty(&cli->worker)); usif_client_fini(cli); - nouveau_uvmm_fini(&cli->uvmm); + if (uvmm) + nouveau_uvmm_fini(uvmm); nouveau_sched_entity_fini(&cli->sched_entity); nouveau_vmm_fini(&cli->svm); nouveau_vmm_fini(&cli->vmm); diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h index 3666a7403e47..e514110bf391 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drv.h +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h @@ -93,7 +93,10 @@ struct nouveau_cli { struct nvif_mmu mmu; struct nouveau_vmm vmm; struct nouveau_vmm svm; - struct nouveau_uvmm uvmm; + struct { + struct nouveau_uvmm *ptr; + bool disabled; + } uvmm; struct nouveau_sched_entity sched_entity; @@ -121,10 +124,7 @@ struct nouveau_cli_work { static inline struct nouveau_uvmm * nouveau_cli_uvmm(struct nouveau_cli *cli) { - if (!cli || !cli->uvmm.vmm.cli) - return NULL; - - return &cli->uvmm; + return cli ? 
cli->uvmm.ptr : NULL; } static inline struct nouveau_uvmm * diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index f765e3835306..54be12c1272f 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -1636,18 +1636,6 @@ nouveau_uvmm_bind_job_init(struct nouveau_uvmm_bind_job **pjob, return ret; } -int -nouveau_uvmm_ioctl_vm_init(struct drm_device *dev, - void *data, - struct drm_file *file_priv) -{ - struct nouveau_cli *cli = nouveau_cli(file_priv); - struct drm_nouveau_vm_init *init = data; - - return nouveau_uvmm_init(&cli->uvmm, cli, init->kernel_managed_addr, - init->kernel_managed_size); -} - static int nouveau_uvmm_vm_bind(struct nouveau_uvmm_bind_job_args *args) { @@ -1793,17 +1781,25 @@ nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo) } int -nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, - u64 kernel_managed_addr, u64 kernel_managed_size) +nouveau_uvmm_ioctl_vm_init(struct drm_device *dev, + void *data, + struct drm_file *file_priv) { + struct nouveau_uvmm *uvmm; + struct nouveau_cli *cli = nouveau_cli(file_priv); struct drm_device *drm = cli->drm->dev; struct drm_gem_object *r_obj; - u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size; + struct drm_nouveau_vm_init *init = data; + u64 kernel_managed_end; int ret; - mutex_init(&uvmm->mutex); - mt_init_flags(&uvmm->region_mt, MT_FLAGS_LOCK_EXTERN); - mt_set_external_lock(&uvmm->region_mt, &uvmm->mutex); + if (check_add_overflow(init->kernel_managed_addr, + init->kernel_managed_size, + &kernel_managed_end)) + return -EINVAL; + + if (kernel_managed_end > NOUVEAU_VA_SPACE_END) + return -EINVAL; mutex_lock(&cli->mutex); @@ -1812,44 +1808,49 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, goto out_unlock; } - if (kernel_managed_end <= kernel_managed_addr) { - ret = -EINVAL; - goto out_unlock; - } - - if (kernel_managed_end > NOUVEAU_VA_SPACE_END) { - ret = -EINVAL; + uvmm = kzalloc(sizeof(*uvmm), GFP_KERNEL); + if (!uvmm) { + ret = -ENOMEM; goto out_unlock; } r_obj = drm_gpuvm_resv_object_alloc(drm); if (!r_obj) { + kfree(uvmm); ret = -ENOMEM; goto out_unlock; } + mutex_init(&uvmm->mutex); + mt_init_flags(&uvmm->region_mt, MT_FLAGS_LOCK_EXTERN); + mt_set_external_lock(&uvmm->region_mt, &uvmm->mutex); + drm_gpuvm_init(&uvmm->base, cli->name, 0, drm, r_obj, NOUVEAU_VA_SPACE_START, NOUVEAU_VA_SPACE_END, - kernel_managed_addr, kernel_managed_size, + init->kernel_managed_addr, + init->kernel_managed_size, NULL); /* GPUVM takes care from here on. 
*/ drm_gem_object_put(r_obj); ret = nvif_vmm_ctor(&cli->mmu, "uvmm", cli->vmm.vmm.object.oclass, RAW, - kernel_managed_addr, kernel_managed_size, - NULL, 0, &cli->uvmm.vmm.vmm); + init->kernel_managed_addr, + init->kernel_managed_size, + NULL, 0, &uvmm->vmm.vmm); if (ret) goto out_gpuvm_fini; - cli->uvmm.vmm.cli = cli; + uvmm->vmm.cli = cli; + cli->uvmm.ptr = uvmm; mutex_unlock(&cli->mutex); return 0; out_gpuvm_fini: drm_gpuvm_destroy(&uvmm->base); + kfree(uvmm); out_unlock: mutex_unlock(&cli->mutex); return ret; @@ -1864,9 +1865,6 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) struct nouveau_sched_entity *entity = &cli->sched_entity; struct drm_gpuva *va, *next; - if (!cli) - return; - rmb(); /* for list_empty to work without lock */ wait_event(entity->job.wq, list_empty(&entity->job.list.head)); @@ -1905,5 +1903,6 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) mutex_lock(&cli->mutex); nouveau_vmm_fini(&uvmm->vmm); drm_gpuvm_destroy(&uvmm->base); + kfree(uvmm); mutex_unlock(&cli->mutex); } diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h index 22607270fae0..f0a6d98ace4f 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h @@ -12,8 +12,6 @@ struct nouveau_uvmm { struct nouveau_vmm vmm; struct maple_tree region_mt; struct mutex mutex; - - bool disabled; }; struct nouveau_uvma_region { @@ -78,8 +76,6 @@ struct nouveau_uvmm_bind_job_args { #define to_uvmm_bind_job(job) container_of((job), struct nouveau_uvmm_bind_job, base) -int nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, - u64 kernel_managed_addr, u64 kernel_managed_size); void nouveau_uvmm_fini(struct nouveau_uvmm *uvmm); void nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbov, struct nouveau_mem *mem);

From patchwork Wed Nov 1 23:31:01 2023
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith@gfxstrand.net
Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next v8 09/12] drm/gpuvm: reference count drm_gpuvm structures
Date: Thu, 2 Nov 2023 00:31:01 +0100
Message-ID: <20231101233113.8059-10-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Implement reference counting for struct drm_gpuvm.
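To make the new contract concrete, here is a hedged sketch (the driver_vm wrapper is hypothetical, mirroring the nouveau changes below): a driver embeds struct drm_gpuvm in its own VM structure, provides the now mandatory vm_free callback through &drm_gpuvm_ops, and balances drm_gpuvm_get() with drm_gpuvm_put() instead of calling drm_gpuvm_destroy() directly.

    /* Hypothetical wrapper type, for illustration only. */
    struct driver_vm {
            struct drm_gpuvm base;
            /* driver specific state */
    };

    /* Called once the last reference is dropped; mandatory, since DRM
     * GPUVM cannot know how the memory backing the VM was allocated. */
    static void driver_vm_free(struct drm_gpuvm *gpuvm)
    {
            struct driver_vm *vm = container_of(gpuvm, struct driver_vm, base);

            kfree(vm);
    }

    static const struct drm_gpuvm_ops driver_gpuvm_ops = {
            .vm_free = driver_vm_free,
    };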
name : "unknown"; gpuvm->flags = flags; gpuvm->ops = ops; @@ -770,15 +772,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, } EXPORT_SYMBOL_GPL(drm_gpuvm_init); -/** - * drm_gpuvm_destroy() - cleanup a &drm_gpuvm - * @gpuvm: pointer to the &drm_gpuvm to clean up - * - * Note that it is a bug to call this function on a manager that still - * holds GPU VA mappings. - */ -void -drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) +static void +drm_gpuvm_fini(struct drm_gpuvm *gpuvm) { gpuvm->name = NULL; @@ -790,7 +785,33 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) drm_gem_object_put(gpuvm->r_obj); } -EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); + +static void +drm_gpuvm_free(struct kref *kref) +{ + struct drm_gpuvm *gpuvm = container_of(kref, struct drm_gpuvm, kref); + + if (drm_WARN_ON(gpuvm->drm, !gpuvm->ops->vm_free)) + return; + + drm_gpuvm_fini(gpuvm); + + gpuvm->ops->vm_free(gpuvm); +} + +/** + * drm_gpuvm_bo_put() - drop a struct drm_gpuvm reference + * @gpuvm: the &drm_gpuvm to release the reference of + * + * This releases a reference to @gpuvm. + */ +void +drm_gpuvm_put(struct drm_gpuvm *gpuvm) +{ + if (gpuvm) + kref_put(&gpuvm->kref, drm_gpuvm_free); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_put); static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, @@ -843,7 +864,7 @@ drm_gpuva_insert(struct drm_gpuvm *gpuvm, if (unlikely(!drm_gpuvm_range_valid(gpuvm, addr, range))) return -EINVAL; - return __drm_gpuva_insert(gpuvm, va); + return __drm_gpuva_insert(drm_gpuvm_get(gpuvm), va); } EXPORT_SYMBOL_GPL(drm_gpuva_insert); @@ -876,6 +897,7 @@ drm_gpuva_remove(struct drm_gpuva *va) } __drm_gpuva_remove(va); + drm_gpuvm_put(va->vm); } EXPORT_SYMBOL_GPL(drm_gpuva_remove); diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index 54be12c1272f..cb2f06565c46 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -1780,6 +1780,18 @@ nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo) } } +static void +nouveau_uvmm_free(struct drm_gpuvm *gpuvm) +{ + struct nouveau_uvmm *uvmm = uvmm_from_gpuvm(gpuvm); + + kfree(uvmm); +} + +static const struct drm_gpuvm_ops gpuvm_ops = { + .vm_free = nouveau_uvmm_free, +}; + int nouveau_uvmm_ioctl_vm_init(struct drm_device *dev, void *data, @@ -1830,7 +1842,7 @@ nouveau_uvmm_ioctl_vm_init(struct drm_device *dev, NOUVEAU_VA_SPACE_END, init->kernel_managed_addr, init->kernel_managed_size, - NULL); + &gpuvm_ops); /* GPUVM takes care from here on. 
*/ drm_gem_object_put(r_obj); @@ -1849,8 +1861,7 @@ nouveau_uvmm_ioctl_vm_init(struct drm_device *dev, return 0; out_gpuvm_fini: - drm_gpuvm_destroy(&uvmm->base); - kfree(uvmm); + drm_gpuvm_put(&uvmm->base); out_unlock: mutex_unlock(&cli->mutex); return ret; @@ -1902,7 +1913,6 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm) mutex_lock(&cli->mutex); nouveau_vmm_fini(&uvmm->vmm); - drm_gpuvm_destroy(&uvmm->base); - kfree(uvmm); + drm_gpuvm_put(&uvmm->base); mutex_unlock(&cli->mutex); } diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h index 0c2e24155a93..4e6e1fd3485a 100644 --- a/include/drm/drm_gpuvm.h +++ b/include/drm/drm_gpuvm.h @@ -247,6 +247,11 @@ struct drm_gpuvm { struct list_head list; } rb; + /** + * @kref: reference count of this object + */ + struct kref kref; + /** * @kernel_alloc_node: * @@ -273,7 +278,23 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, u64 start_offset, u64 range, u64 reserve_offset, u64 reserve_range, const struct drm_gpuvm_ops *ops); -void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm); + +/** + * drm_gpuvm_get() - acquire a struct drm_gpuvm reference + * @gpuvm: the &drm_gpuvm to acquire the reference of + * + * This function acquires an additional reference to @gpuvm. It is illegal to + * call this without already holding a reference. No locks required. + */ +static inline struct drm_gpuvm * +drm_gpuvm_get(struct drm_gpuvm *gpuvm) +{ + kref_get(&gpuvm->kref); + + return gpuvm; +} + +void drm_gpuvm_put(struct drm_gpuvm *gpuvm); bool drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, u64 addr, u64 range); bool drm_gpuvm_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range); @@ -673,6 +694,14 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va, * operations to drivers. */ struct drm_gpuvm_ops { + /** + * @vm_free: called when the last reference of a struct drm_gpuvm is + * dropped + * + * This callback is mandatory. 
+ */ + void (*vm_free)(struct drm_gpuvm *gpuvm); + /** * @op_alloc: called when the &drm_gpuvm allocates * a struct drm_gpuva_op

From patchwork Wed Nov 1 23:31:02 2023
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith@gfxstrand.net
Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next v8 10/12] drm/gpuvm: add an abstraction for a VM / BO combination
Date: Thu, 2 Nov 2023 00:31:02 +0100
Message-ID: <20231101233113.8059-11-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>

Add an abstraction layer between the drm_gpuva mappings of a particular drm_gem_object and this GEM object itself. The abstraction represents a combination of a drm_gem_object and drm_gpuvm. The drm_gem_object holds a list of drm_gpuvm_bo structures (the structure representing this abstraction), while each drm_gpuvm_bo contains a list of mappings of this GEM object.

This has multiple advantages:

1) We can use the drm_gpuvm_bo structure to attach it to various lists of the drm_gpuvm. This is useful for tracking external and evicted objects per VM, which is introduced in subsequent patches.

2) Finding mappings of a certain drm_gem_object mapped in a certain drm_gpuvm becomes much cheaper.

3) Drivers can derive and extend the structure to easily represent driver specific states of a BO for a certain GPUVM.

The idea of this abstraction was taken from amdgpu, hence the credit for this idea goes to the developers of amdgpu.
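In short, the lookup-or-create pattern works as sketched below (hedged; the names follow the helpers added by this patch, and the caller is assumed to hold the GEM's gpuva lock):

    struct drm_gpuvm_bo *vm_bo;

    /* Returns the existing vm_bo for this (gpuvm, obj) pair or creates
     * and links a new one; either way a reference is returned. */
    vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj);
    if (IS_ERR(vm_bo))
            return PTR_ERR(vm_bo);

    /* Linking a mapping takes its own vm_bo reference ... */
    drm_gpuva_link(va, vm_bo);

    /* ... so the reference from drm_gpuvm_bo_obtain() can be dropped. */
    drm_gpuvm_bo_put(vm_bo);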
Cc: Christian König Reviewed-by: Boris Brezillon Signed-off-by: Danilo Krummrich Reviewed-by: Thomas Hellström --- drivers/gpu/drm/drm_gpuvm.c | 336 +++++++++++++++++++---- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 63 +++-- include/drm/drm_gem.h | 32 +-- include/drm/drm_gpuvm.h | 185 +++++++++++++- 4 files changed, 530 insertions(+), 86 deletions(-) diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c index 6a88eafc5229..2c8fdefb19f0 100644 --- a/drivers/gpu/drm/drm_gpuvm.c +++ b/drivers/gpu/drm/drm_gpuvm.c @@ -70,6 +70,18 @@ * &drm_gem_object, such as the &drm_gem_object containing the root page table, * but it can also be a 'dummy' object, which can be allocated with * drm_gpuvm_resv_object_alloc(). + * + * In order to connect a struct drm_gpuva to its backing &drm_gem_object each + * &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each + * &drm_gpuvm_bo contains a list of &drm_gpuva structures. + * + * A &drm_gpuvm_bo is an abstraction that represents a combination of a + * &drm_gpuvm and a &drm_gem_object. Every such combination should be unique. + * This is ensured by the API through drm_gpuvm_bo_obtain() and + * drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding + * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this + * particular combination. If no such instance exists, a new one is created and + * linked to the &drm_gem_object. */ /** @@ -395,21 +407,28 @@ /** * DOC: Locking * - * Generally, the GPU VA manager does not take care of locking itself, it is - * the drivers responsibility to take care about locking. Drivers might want to - * protect the following operations: inserting, removing and iterating - * &drm_gpuva objects as well as generating all kinds of operations, such as - * split / merge or prefetch. - * - * The GPU VA manager also does not take care of the locking of the backing - * &drm_gem_object buffers GPU VA lists by itself; drivers are responsible to - * enforce mutual exclusion using either the GEMs dma_resv lock or alternatively - * a driver specific external lock. For the latter see also - * drm_gem_gpuva_set_lock(). - * - * However, the GPU VA manager contains lockdep checks to ensure callers of its - * API hold the corresponding lock whenever the &drm_gem_objects GPU VA list is - * accessed by functions such as drm_gpuva_link() or drm_gpuva_unlink(). + * In terms of managing &drm_gpuva entries DRM GPUVM does not take care of + * locking itself; it is the driver's responsibility to take care of locking. + * Drivers might want to protect the following operations: inserting, removing + * and iterating &drm_gpuva objects as well as generating all kinds of + * operations, such as split / merge or prefetch. + * + * DRM GPUVM also does not take care of the locking of the backing + * &drm_gem_object buffers GPU VA lists and &drm_gpuvm_bo abstractions by + * itself; drivers are responsible for enforcing mutual exclusion using either + * the GEMs dma_resv lock or alternatively a driver specific external lock. For + * the latter see also drm_gem_gpuva_set_lock(). + * + * However, DRM GPUVM contains lockdep checks to ensure callers of its API hold + * the corresponding lock whenever the &drm_gem_objects GPU VA list is accessed + * by functions such as drm_gpuva_link() or drm_gpuva_unlink(), but also + * drm_gpuvm_bo_obtain() and drm_gpuvm_bo_put(). + * + * The latter is required since on creation and destruction of a &drm_gpuvm_bo + * the &drm_gpuvm_bo is attached / removed from the &drm_gem_objects gpuva list. + * Subsequent calls to drm_gpuvm_bo_obtain() for the same &drm_gpuvm and + * &drm_gem_object must be able to observe previous creations and destructions + * of &drm_gpuvm_bos in order to keep instances unique.
*/ /** @@ -439,6 +458,7 @@ * { * struct drm_gpuva_ops *ops; * struct drm_gpuva_op *op; + * struct drm_gpuvm_bo *vm_bo; * * driver_lock_va_space(); * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range, @@ -446,6 +466,10 @@ * if (IS_ERR(ops)) * return PTR_ERR(ops); * + * vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj); + * if (IS_ERR(vm_bo)) + * return PTR_ERR(vm_bo); + * * drm_gpuva_for_each_op(op, ops) { * struct drm_gpuva *va; * @@ -458,7 +482,7 @@ * * driver_vm_map(); * drm_gpuva_map(gpuvm, va, &op->map); - * drm_gpuva_link(va); + * drm_gpuva_link(va, vm_bo); * * break; * case DRM_GPUVA_OP_REMAP: { @@ -485,11 +509,11 @@ * driver_vm_remap(); * drm_gpuva_remap(prev, next, &op->remap); * - * drm_gpuva_unlink(va); * if (prev) - * drm_gpuva_link(prev); + * drm_gpuva_link(prev, va->vm_bo); * if (next) - * drm_gpuva_link(next); + * drm_gpuva_link(next, va->vm_bo); + * drm_gpuva_unlink(va); * * break; * } @@ -505,6 +529,7 @@ * break; * } * } + * drm_gpuvm_bo_put(vm_bo); * driver_unlock_va_space(); * * return 0; * @@ -514,6 +539,7 @@ * * struct driver_context { * struct drm_gpuvm *gpuvm; + * struct drm_gpuvm_bo *vm_bo; * struct drm_gpuva *new_va; * struct drm_gpuva *prev_va; * struct drm_gpuva *next_va; @@ -534,6 +560,7 @@ * struct drm_gem_object *obj, u64 offset) * { * struct driver_context ctx; * struct drm_gpuva_ops *ops; * struct drm_gpuva_op *op; * int ret = 0; @@ -543,16 +570,23 @@ * ctx.new_va = kzalloc(sizeof(*ctx.new_va), GFP_KERNEL); * ctx.prev_va = kzalloc(sizeof(*ctx.prev_va), GFP_KERNEL); * ctx.next_va = kzalloc(sizeof(*ctx.next_va), GFP_KERNEL); - * if (!ctx.new_va || !ctx.prev_va || !ctx.next_va) { + * ctx.vm_bo = drm_gpuvm_bo_create(gpuvm, obj); + * if (!ctx.new_va || !ctx.prev_va || !ctx.next_va || !ctx.vm_bo) { * ret = -ENOMEM; * goto out; * } * + * // Typically protected with a driver specific GEM gpuva lock + * // used in the fence signaling path for drm_gpuva_link() and + * // drm_gpuva_unlink(), hence pre-allocate. + * ctx.vm_bo = drm_gpuvm_bo_obtain_prealloc(ctx.vm_bo); + * * driver_lock_va_space(); * ret = drm_gpuvm_sm_map(gpuvm, &ctx, addr, range, obj, offset); * driver_unlock_va_space(); * * out: + * drm_gpuvm_bo_put(ctx.vm_bo); * kfree(ctx.new_va); * kfree(ctx.prev_va); * kfree(ctx.next_va); @@ -565,7 +599,7 @@ * * drm_gpuva_map(ctx->vm, ctx->new_va, &op->map); * - * drm_gpuva_link(ctx->new_va); + * drm_gpuva_link(ctx->new_va, ctx->vm_bo); * * // prevent the new GPUVA from being freed in * // driver_mapping_create() @@ -577,22 +611,23 @@ * int driver_gpuva_remap(struct drm_gpuva_op *op, void *__ctx) * { * struct driver_context *ctx = __ctx; + * struct drm_gpuva *va = op->remap.unmap->va; * * drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap); * - * drm_gpuva_unlink(op->remap.unmap->va); - * kfree(op->remap.unmap->va); - * * if (op->remap.prev) { - * drm_gpuva_link(ctx->prev_va); + * drm_gpuva_link(ctx->prev_va, va->vm_bo); * ctx->prev_va = NULL; * } * * if (op->remap.next) { - * drm_gpuva_link(ctx->next_va); + * drm_gpuva_link(ctx->next_va, va->vm_bo); * ctx->next_va = NULL; * } * + * drm_gpuva_unlink(va); + * kfree(va); + * * return 0; * } * @@ -813,6 +848,195 @@ drm_gpuvm_put(struct drm_gpuvm *gpuvm) } EXPORT_SYMBOL_GPL(drm_gpuvm_put); +/** + * drm_gpuvm_bo_create() - create a new instance of struct drm_gpuvm_bo + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * + * If provided by the driver, this function uses the &drm_gpuvm_ops + * vm_bo_alloc() callback to allocate.
+ * + * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on failure + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + const struct drm_gpuvm_ops *ops = gpuvm->ops; + struct drm_gpuvm_bo *vm_bo; + + if (ops && ops->vm_bo_alloc) + vm_bo = ops->vm_bo_alloc(); + else + vm_bo = kzalloc(sizeof(*vm_bo), GFP_KERNEL); + + if (unlikely(!vm_bo)) + return NULL; + + vm_bo->vm = drm_gpuvm_get(gpuvm); + vm_bo->obj = obj; + drm_gem_object_get(obj); + + kref_init(&vm_bo->kref); + INIT_LIST_HEAD(&vm_bo->list.gpuva); + INIT_LIST_HEAD(&vm_bo->list.entry.gem); + + return vm_bo; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create); + +static void +drm_gpuvm_bo_destroy(struct kref *kref) +{ + struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo, + kref); + struct drm_gpuvm *gpuvm = vm_bo->vm; + const struct drm_gpuvm_ops *ops = gpuvm->ops; + struct drm_gem_object *obj = vm_bo->obj; + bool lock = !drm_gpuvm_resv_protected(gpuvm); + + if (!lock) + drm_gpuvm_resv_assert_held(gpuvm); + + drm_gem_gpuva_assert_lock_held(obj); + list_del(&vm_bo->list.entry.gem); + + if (ops && ops->vm_bo_free) + ops->vm_bo_free(vm_bo); + else + kfree(vm_bo); + + drm_gpuvm_put(gpuvm); + drm_gem_object_put(obj); +} + +/** + * drm_gpuvm_bo_put() - drop a struct drm_gpuvm_bo reference + * @vm_bo: the &drm_gpuvm_bo to release the reference of + * + * This releases a reference to @vm_bo. + * + * If the reference count drops to zero, the &gpuvm_bo is destroyed, which + * includes removing it from the GEMs gpuva list. Hence, if a call to this + * function can potentially let the reference count drop to zero, the caller + * must hold the dma-resv or driver specific GEM gpuva lock. + */ +void +drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo) +{ + if (vm_bo) + kref_put(&vm_bo->kref, drm_gpuvm_bo_destroy); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put); + +static struct drm_gpuvm_bo * +__drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + struct drm_gpuvm_bo *vm_bo; + + drm_gem_gpuva_assert_lock_held(obj); + drm_gem_for_each_gpuvm_bo(vm_bo, obj) + if (vm_bo->vm == gpuvm) + return vm_bo; + + return NULL; +} + +/** + * drm_gpuvm_bo_find() - find the &drm_gpuvm_bo for the given + * &drm_gpuvm and &drm_gem_object + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * + * Find the &drm_gpuvm_bo representing the combination of the given + * &drm_gpuvm and &drm_gem_object. If found, increases the reference + * count of the &drm_gpuvm_bo accordingly. + * + * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on failure + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + struct drm_gpuvm_bo *vm_bo = __drm_gpuvm_bo_find(gpuvm, obj); + + return vm_bo ? drm_gpuvm_bo_get(vm_bo) : NULL; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find); + +/** + * drm_gpuvm_bo_obtain() - obtains an instance of the &drm_gpuvm_bo for the + * given &drm_gpuvm and &drm_gem_object + * @gpuvm: The &drm_gpuvm the @obj is mapped in. + * @obj: The &drm_gem_object being mapped in the @gpuvm. + * + * Find the &drm_gpuvm_bo representing the combination of the given + * &drm_gpuvm and &drm_gem_object. If found, increases the reference + * count of the &drm_gpuvm_bo accordingly. If not found, allocates a new + * &drm_gpuvm_bo. + * + * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
+ * + * Returns: a pointer to the &drm_gpuvm_bo on success, an ERR_PTR on failure + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + struct drm_gpuvm_bo *vm_bo; + + vm_bo = drm_gpuvm_bo_find(gpuvm, obj); + if (vm_bo) + return vm_bo; + + vm_bo = drm_gpuvm_bo_create(gpuvm, obj); + if (!vm_bo) + return ERR_PTR(-ENOMEM); + + drm_gem_gpuva_assert_lock_held(obj); + list_add_tail(&vm_bo->list.entry.gem, &obj->gpuva.list); + + return vm_bo; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain); + +/** + * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo + * for the given &drm_gpuvm and &drm_gem_object + * @__vm_bo: A pre-allocated struct drm_gpuvm_bo. + * + * Find the &drm_gpuvm_bo representing the combination of the given + * &drm_gpuvm and &drm_gem_object. If found, increases the reference + * count of the found &drm_gpuvm_bo accordingly, while the @__vm_bo reference + * count is decreased. If not found, @__vm_bo is returned without further + * increase of the reference count. + * + * A new &drm_gpuvm_bo is added to the GEMs gpuva list. + * + * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing + * &drm_gpuvm_bo was found + */ +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo) +{ + struct drm_gpuvm *gpuvm = __vm_bo->vm; + struct drm_gem_object *obj = __vm_bo->obj; + struct drm_gpuvm_bo *vm_bo; + + vm_bo = drm_gpuvm_bo_find(gpuvm, obj); + if (vm_bo) { + drm_gpuvm_bo_put(__vm_bo); + return vm_bo; + } + + drm_gem_gpuva_assert_lock_held(obj); + list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list); + + return __vm_bo; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_prealloc); + static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va) @@ -904,24 +1128,33 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remove); /** * drm_gpuva_link() - link a &drm_gpuva * @va: the &drm_gpuva to link + * @vm_bo: the &drm_gpuvm_bo to add the &drm_gpuva to * - * This adds the given &va to the GPU VA list of the &drm_gem_object it is - * associated with. + * This adds the given &va to the GPU VA list of the &drm_gpuvm_bo and the + * &drm_gpuvm_bo to the &drm_gem_object it is associated with. + * + * For every &drm_gpuva entry added to the &drm_gpuvm_bo an additional + * reference of the latter is taken. * * This function expects the caller to protect the GEM's GPUVA list against - * concurrent access using the GEMs dma_resv lock. + * concurrent access using either the GEMs dma_resv lock or a driver specific + * lock set through drm_gem_gpuva_set_lock(). */ void -drm_gpuva_link(struct drm_gpuva *va) +drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo) { struct drm_gem_object *obj = va->gem.obj; + struct drm_gpuvm *gpuvm = va->vm; if (unlikely(!obj)) return; - drm_gem_gpuva_assert_lock_held(obj); + drm_WARN_ON(gpuvm->drm, obj != vm_bo->obj); - list_add_tail(&va->gem.entry, &obj->gpuva.list); + va->vm_bo = drm_gpuvm_bo_get(vm_bo); + + drm_gem_gpuva_assert_lock_held(obj); + list_add_tail(&va->gem.entry, &vm_bo->list.gpuva); } EXPORT_SYMBOL_GPL(drm_gpuva_link); @@ -932,20 +1165,31 @@ EXPORT_SYMBOL_GPL(drm_gpuva_link); * This removes the given &va from the GPU VA list of the &drm_gem_object it is * associated with. * + * This removes the given &va from the GPU VA list of the &drm_gpuvm_bo and + * the &drm_gpuvm_bo from the &drm_gem_object it is associated with in case + * this call unlinks the last &drm_gpuva from the &drm_gpuvm_bo.
+ * + * For every &drm_gpuva entry removed from the &drm_gpuvm_bo a reference of + * the latter is dropped. + * * This function expects the caller to protect the GEM's GPUVA list against - * concurrent access using the GEMs dma_resv lock. + * concurrent access using either the GEMs dma_resv lock or a driver specific + * lock set through drm_gem_gpuva_set_lock(). */ void drm_gpuva_unlink(struct drm_gpuva *va) { struct drm_gem_object *obj = va->gem.obj; + struct drm_gpuvm_bo *vm_bo = va->vm_bo; if (unlikely(!obj)) return; drm_gem_gpuva_assert_lock_held(obj); - list_del_init(&va->gem.entry); + + va->vm_bo = NULL; + drm_gpuvm_bo_put(vm_bo); } EXPORT_SYMBOL_GPL(drm_gpuva_unlink); @@ -1090,10 +1334,10 @@ drm_gpuva_remap(struct drm_gpuva *prev, struct drm_gpuva *next, struct drm_gpuva_op_remap *op) { - struct drm_gpuva *curr = op->unmap->va; - struct drm_gpuvm *gpuvm = curr->vm; + struct drm_gpuva *va = op->unmap->va; + struct drm_gpuvm *gpuvm = va->vm; - drm_gpuva_remove(curr); + drm_gpuva_remove(va); if (op->prev) { drm_gpuva_init_from_op(prev, op->prev); @@ -1735,9 +1979,8 @@ drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm, EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create); /** - * drm_gpuvm_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM - * @gpuvm: the &drm_gpuvm representing the GPU VA space - * @obj: the &drm_gem_object to unmap + * drm_gpuvm_bo_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM + * @vm_bo: the &drm_gpuvm_bo abstraction * * This function creates a list of operations to perform unmapping for every * GPUVA attached to a GEM. @@ -1754,15 +1997,14 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create); * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure */ struct drm_gpuva_ops * -drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, - struct drm_gem_object *obj) +drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm_bo *vm_bo) { struct drm_gpuva_ops *ops; struct drm_gpuva_op *op; struct drm_gpuva *va; int ret; - drm_gem_gpuva_assert_lock_held(obj); + drm_gem_gpuva_assert_lock_held(vm_bo->obj); ops = kzalloc(sizeof(*ops), GFP_KERNEL); if (!ops) @@ -1770,8 +2012,8 @@ drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, INIT_LIST_HEAD(&ops->list); - drm_gem_for_each_gpuva(va, obj) { - op = gpuva_op_alloc(gpuvm); + drm_gpuvm_bo_for_each_va(va, vm_bo) { + op = gpuva_op_alloc(vm_bo->vm); if (!op) { ret = -ENOMEM; goto err_free_ops; @@ -1785,10 +2027,10 @@ drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, return ops; err_free_ops: - drm_gpuva_ops_free(gpuvm, ops); + drm_gpuva_ops_free(vm_bo->vm, ops); return ERR_PTR(ret); } -EXPORT_SYMBOL_GPL(drm_gpuvm_gem_unmap_ops_create); +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_unmap_ops_create); /** * drm_gpuva_ops_free() - free the given &drm_gpuva_ops diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index cb2f06565c46..eda7bb8624f1 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -62,6 +62,8 @@ struct bind_job_op { enum vm_bind_op op; u32 flags; + struct drm_gpuvm_bo *vm_bo; + struct { u64 addr; u64 range; @@ -1101,22 +1103,28 @@ bind_validate_region(struct nouveau_job *job) } static void -bind_link_gpuvas(struct drm_gpuva_ops *ops, struct nouveau_uvma_prealloc *new) +bind_link_gpuvas(struct bind_job_op *bop) { + struct nouveau_uvma_prealloc *new = &bop->new; + struct drm_gpuvm_bo *vm_bo = bop->vm_bo; + struct drm_gpuva_ops *ops = bop->ops; struct drm_gpuva_op *op; drm_gpuva_for_each_op(op, ops) { 
switch (op->op) { case DRM_GPUVA_OP_MAP: - drm_gpuva_link(&new->map->va); + drm_gpuva_link(&new->map->va, vm_bo); break; - case DRM_GPUVA_OP_REMAP: + case DRM_GPUVA_OP_REMAP: { + struct drm_gpuva *va = op->remap.unmap->va; + if (op->remap.prev) - drm_gpuva_link(&new->prev->va); + drm_gpuva_link(&new->prev->va, va->vm_bo); if (op->remap.next) - drm_gpuva_link(&new->next->va); - drm_gpuva_unlink(op->remap.unmap->va); + drm_gpuva_link(&new->next->va, va->vm_bo); + drm_gpuva_unlink(va); break; + } case DRM_GPUVA_OP_UNMAP: drm_gpuva_unlink(op->unmap.va); break; @@ -1138,10 +1146,17 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) list_for_each_op(op, &bind_job->ops) { if (op->op == OP_MAP) { - op->gem.obj = drm_gem_object_lookup(job->file_priv, - op->gem.handle); - if (!op->gem.obj) + struct drm_gem_object *obj = op->gem.obj = + drm_gem_object_lookup(job->file_priv, + op->gem.handle); + if (!obj) return -ENOENT; + + dma_resv_lock(obj->resv, NULL); + op->vm_bo = drm_gpuvm_bo_obtain(&uvmm->base, obj); + dma_resv_unlock(obj->resv); + if (IS_ERR(op->vm_bo)) + return PTR_ERR(op->vm_bo); } ret = bind_validate_op(job, op); @@ -1352,7 +1367,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) case OP_UNMAP_SPARSE: case OP_MAP: case OP_UNMAP: - bind_link_gpuvas(op->ops, &op->new); + bind_link_gpuvas(op); break; default: break; @@ -1499,6 +1514,12 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work) if (!IS_ERR_OR_NULL(op->ops)) drm_gpuva_ops_free(&uvmm->base, op->ops); + if (!IS_ERR_OR_NULL(op->vm_bo)) { + dma_resv_lock(obj->resv, NULL); + drm_gpuvm_bo_put(op->vm_bo); + dma_resv_unlock(obj->resv); + } + if (obj) drm_gem_object_put(obj); } @@ -1752,15 +1773,18 @@ void nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbo, struct nouveau_mem *mem) { struct drm_gem_object *obj = &nvbo->bo.base; + struct drm_gpuvm_bo *vm_bo; struct drm_gpuva *va; dma_resv_assert_held(obj->resv); - drm_gem_for_each_gpuva(va, obj) { - struct nouveau_uvma *uvma = uvma_from_va(va); + drm_gem_for_each_gpuvm_bo(vm_bo, obj) { + drm_gpuvm_bo_for_each_va(va, vm_bo) { + struct nouveau_uvma *uvma = uvma_from_va(va); - nouveau_uvma_map(uvma, mem); - drm_gpuva_invalidate(va, false); + nouveau_uvma_map(uvma, mem); + drm_gpuva_invalidate(va, false); + } } } @@ -1768,15 +1792,18 @@ void nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo) { struct drm_gem_object *obj = &nvbo->bo.base; + struct drm_gpuvm_bo *vm_bo; struct drm_gpuva *va; dma_resv_assert_held(obj->resv); - drm_gem_for_each_gpuva(va, obj) { - struct nouveau_uvma *uvma = uvma_from_va(va); + drm_gem_for_each_gpuvm_bo(vm_bo, obj) { + drm_gpuvm_bo_for_each_va(va, vm_bo) { + struct nouveau_uvma *uvma = uvma_from_va(va); - nouveau_uvma_unmap(uvma); - drm_gpuva_invalidate(va, true); + nouveau_uvma_unmap(uvma); + drm_gpuva_invalidate(va, true); + } } } diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 16364487fde9..369505447acd 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -580,7 +580,7 @@ int drm_gem_evict(struct drm_gem_object *obj); * drm_gem_gpuva_init() - initialize the gpuva list of a GEM object * @obj: the &drm_gem_object * - * This initializes the &drm_gem_object's &drm_gpuva list. + * This initializes the &drm_gem_object's &drm_gpuvm_bo list. * * Calling this function is only necessary for drivers intending to support the * &drm_driver_feature DRIVER_GEM_GPUVA. 
@@ -593,28 +593,28 @@ static inline void drm_gem_gpuva_init(struct drm_gem_object *obj) } /** - * drm_gem_for_each_gpuva() - iternator to walk over a list of gpuvas - * @entry__: &drm_gpuva structure to assign to in each iteration step - * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * drm_gem_for_each_gpuvm_bo() - iterator to walk over a list of &drm_gpuvm_bo + * @entry__: &drm_gpuvm_bo structure to assign to in each iteration step + * @obj__: the &drm_gem_object the &drm_gpuvm_bo to walk are associated with * - * This iterator walks over all &drm_gpuva structures associated with the - * &drm_gpuva_manager. + * This iterator walks over all &drm_gpuvm_bo structures associated with the + * &drm_gem_object. */ -#define drm_gem_for_each_gpuva(entry__, obj__) \ - list_for_each_entry(entry__, &(obj__)->gpuva.list, gem.entry) +#define drm_gem_for_each_gpuvm_bo(entry__, obj__) \ + list_for_each_entry(entry__, &(obj__)->gpuva.list, list.entry.gem) /** - * drm_gem_for_each_gpuva_safe() - iternator to safely walk over a list of - * gpuvas - * @entry__: &drm_gpuva structure to assign to in each iteration step - * @next__: &next &drm_gpuva to store the next step - * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * drm_gem_for_each_gpuvm_bo_safe() - iterator to safely walk over a list of + * &drm_gpuvm_bo + * @entry__: &drm_gpuvm_bo structure to assign to in each iteration step + * @next__: &next &drm_gpuvm_bo to store the next step + * @obj__: the &drm_gem_object the &drm_gpuvm_bo to walk are associated with * - * This iterator walks over all &drm_gpuva structures associated with the + * This iterator walks over all &drm_gpuvm_bo structures associated with the * &drm_gem_object. It is implemented with list_for_each_entry_safe(), hence * it is save against removal of elements. */ -#define drm_gem_for_each_gpuva_safe(entry__, next__, obj__) \ - list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, gem.entry) +#define drm_gem_for_each_gpuvm_bo_safe(entry__, next__, obj__) \ + list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, list.entry.gem) #endif /* __DRM_GEM_H__ */ diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h index 4e6e1fd3485a..b12fb22b0e22 100644 --- a/include/drm/drm_gpuvm.h +++ b/include/drm/drm_gpuvm.h @@ -25,6 +25,7 @@ * OTHER DEALINGS IN THE SOFTWARE.
*/ +#include #include #include #include @@ -33,6 +34,7 @@ #include struct drm_gpuvm; +struct drm_gpuvm_bo; struct drm_gpuvm_ops; /** @@ -73,6 +75,12 @@ struct drm_gpuva { */ struct drm_gpuvm *vm; + /** + * @vm_bo: the &drm_gpuvm_bo abstraction for the mapped + * &drm_gem_object + */ + struct drm_gpuvm_bo *vm_bo; + /** * @flags: the &drm_gpuva_flags for this mapping */ @@ -108,7 +116,7 @@ struct drm_gpuva { struct drm_gem_object *obj; /** - * @entry: the &list_head to attach this object to a &drm_gem_object + * @entry: the &list_head to attach this object to a &drm_gpuvm_bo */ struct list_head entry; } gem; @@ -141,7 +149,7 @@ struct drm_gpuva { int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va); void drm_gpuva_remove(struct drm_gpuva *va); -void drm_gpuva_link(struct drm_gpuva *va); +void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo); void drm_gpuva_unlink(struct drm_gpuva *va); struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm, @@ -188,10 +196,16 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va) * enum drm_gpuvm_flags - flags for struct drm_gpuvm */ enum drm_gpuvm_flags { + /** + * @DRM_GPUVM_RESV_PROTECTED: GPUVM is protected externally by the + * GPUVM's &dma_resv lock + */ + DRM_GPUVM_RESV_PROTECTED = BIT(0), + /** * @DRM_GPUVM_USERBITS: user defined bits */ - DRM_GPUVM_USERBITS = BIT(0), + DRM_GPUVM_USERBITS = BIT(1), }; /** @@ -302,6 +316,19 @@ bool drm_gpuvm_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range); struct drm_gem_object * drm_gpuvm_resv_object_alloc(struct drm_device *drm); +/** + * drm_gpuvm_resv_protected() - indicates whether &DRM_GPUVM_RESV_PROTECTED is + * set + * @gpuvm: the &drm_gpuvm + * + * Returns: true if &DRM_GPUVM_RESV_PROTECTED is set, false otherwise. + */ +static inline bool +drm_gpuvm_resv_protected(struct drm_gpuvm *gpuvm) +{ + return gpuvm->flags & DRM_GPUVM_RESV_PROTECTED; +} + /** * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv * @gpuvm__: the &drm_gpuvm @@ -320,6 +347,12 @@ drm_gpuvm_resv_object_alloc(struct drm_device *drm); */ #define drm_gpuvm_resv_obj(gpuvm__) ((gpuvm__)->r_obj) +#define drm_gpuvm_resv_held(gpuvm__) \ + dma_resv_held(drm_gpuvm_resv(gpuvm__)) + +#define drm_gpuvm_resv_assert_held(gpuvm__) \ + dma_resv_assert_held(drm_gpuvm_resv(gpuvm__)) + @@ -404,6 +437,125 @@ __drm_gpuva_next(struct drm_gpuva *va) #define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \ list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry) +/** + * struct drm_gpuvm_bo - structure representing a &drm_gpuvm and + * &drm_gem_object combination + * + * This structure is an abstraction representing a &drm_gpuvm and + * &drm_gem_object combination. It serves as an indirection to accelerate + * iterating all &drm_gpuvas within a &drm_gpuvm backed by the same + * &drm_gem_object. + * + * Furthermore it is used to cache evicted GEM objects for a certain GPU-VM to + * accelerate validation. + * + * Typically, drivers want to create an instance of a struct drm_gpuvm_bo once + * a GEM object is first mapped in a GPU-VM and release the instance once the + * last mapping of the GEM object in this GPU-VM is unmapped. + */ +struct drm_gpuvm_bo { + /** + * @vm: The &drm_gpuvm the @obj is mapped in. This is a reference + * counted pointer. + */ + struct drm_gpuvm *vm; + + /** + * @obj: The &drm_gem_object being mapped in @vm. This is a reference + * counted pointer.
+ */ + struct drm_gem_object *obj; + + /** + * @kref: The reference count for this &drm_gpuvm_bo. + */ + struct kref kref; + + /** + * @list: Structure containing all &list_heads. + */ + struct { + /** + * @gpuva: The list of linked &drm_gpuvas. + * + * It is safe to access entries from this list as long as the + * GEM's gpuva lock is held. See also struct drm_gem_object. + */ + struct list_head gpuva; + + /** + * @entry: Structure containing all &list_heads serving as + * entry. + */ + struct { + /** + * @gem: List entry to attach to the &drm_gem_objects + * gpuva list. + */ + struct list_head gem; + } entry; + } list; +}; + +struct drm_gpuvm_bo * +drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); + +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); +struct drm_gpuvm_bo * +drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *vm_bo); + +/** + * drm_gpuvm_bo_get() - acquire a struct drm_gpuvm_bo reference + * @vm_bo: the &drm_gpuvm_bo to acquire the reference of + * + * This function acquires an additional reference to @vm_bo. It is illegal to + * call this without already holding a reference. No locks required. + */ +static inline struct drm_gpuvm_bo * +drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo) +{ + kref_get(&vm_bo->kref); + return vm_bo; +} + +void drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo); + +struct drm_gpuvm_bo * +drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj); + +/** + * drm_gpuvm_bo_for_each_va() - iterator to walk over a list of &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @vm_bo__: the &drm_gpuvm_bo the &drm_gpuva to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuvm_bo. + * + * The caller must hold the GEM's gpuva lock. + */ +#define drm_gpuvm_bo_for_each_va(va__, vm_bo__) \ + list_for_each_entry(va__, &(vm_bo__)->list.gpuva, gem.entry) + +/** + * drm_gpuvm_bo_for_each_va_safe() - iterator to safely walk over a list of + * &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @next__: &next &drm_gpuva to store the next step + * @vm_bo__: the &drm_gpuvm_bo the &drm_gpuva to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuvm_bo. It is implemented with list_for_each_entry_safe(), hence + * it is safe against removal of elements. + * + * The caller must hold the GEM's gpuva lock. + */ +#define drm_gpuvm_bo_for_each_va_safe(va__, next__, vm_bo__) \ + list_for_each_entry_safe(va__, next__, &(vm_bo__)->list.gpuva, gem.entry) + /** * enum drm_gpuva_op_type - GPU VA operation type * @@ -673,8 +825,7 @@ drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range); struct drm_gpuva_ops * -drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm, - struct drm_gem_object *obj); +drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm_bo *vm_bo); void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm, struct drm_gpuva_ops *ops); @@ -726,6 +877,30 @@ struct drm_gpuvm_ops { */ void (*op_free)(struct drm_gpuva_op *op); + /** + * @vm_bo_alloc: called when the &drm_gpuvm allocates + * a struct drm_gpuvm_bo + * + * Some drivers may want to embed struct drm_gpuvm_bo into driver + * specific structures. By implementing this callback drivers can + * allocate memory accordingly. + * + * This callback is optional.
+ */ + struct drm_gpuvm_bo *(*vm_bo_alloc)(void); + + /** + * @vm_bo_free: called when the &drm_gpuvm frees a + * struct drm_gpuvm_bo + * + * Some drivers may want to embed struct drm_gpuvm_bo into driver + * specific structures. By implementing this callback drivers can + * free the previously allocated memory accordingly. + * + * This callback is optional. + */ + void (*vm_bo_free)(struct drm_gpuvm_bo *vm_bo); + /** * @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the * mapping once all previous steps were completed

From patchwork Wed Nov 1 23:31:03 2023
AGHT+IE8pthdMTZA3nuWDf+VylPBPHmQLbHKNWNskEIGa7SXSxbXosdLE6n7hsuOonWTdXuSaIn+Bg== X-Received: by 2002:a17:907:987:b0:9bf:270d:8c37 with SMTP id bf7-20020a170907098700b009bf270d8c37mr3142387ejc.58.1698881520490; Wed, 01 Nov 2023 16:32:00 -0700 (PDT) Received: from cassiopeiae.. ([2a02:810d:4b3f:de9c:642:1aff:fe31:a19f]) by smtp.gmail.com with ESMTPSA id re20-20020a170906d8d400b009cc1227f443sm456905ejb.104.2023.11.01.16.31.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Nov 2023 16:32:00 -0700 (PDT) From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith@gfxstrand.net Subject: [PATCH drm-misc-next v8 11/12] drm/gpuvm: track/lock/validate external/evicted objects Date: Thu, 2 Nov 2023 00:31:03 +0100 Message-ID: <20231101233113.8059-12-dakr@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231101233113.8059-1-dakr@redhat.com> References: <20231101233113.8059-1-dakr@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: nouveau@lists.freedesktop.org, Danilo Krummrich , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Currently the DRM GPUVM offers common infrastructure to track GPU VA allocations and mappings, generically connect GPU VA mappings to their backing buffers and perform more complex mapping operations on the GPU VA space. However, there are more design patterns commonly used by drivers, which can potentially be generalized in order to make the DRM GPUVM represent a basis for GPU-VM implementations. In this context, this patch aims at generalizing the following elements. 1) Provide a common dma-resv for GEM objects not being used outside of this GPU-VM. 2) Provide tracking of external GEM objects (GEM objects which are shared with other GPU-VMs). 3) Provide functions to efficiently lock all GEM objects dma-resv the GPU-VM contains mappings of. 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings of, such that validation of evicted GEM objects is accelerated. 5) Provide some convinience functions for common patterns. Big thanks to Boris Brezillon for his help to figure out locking for drivers updating the GPU VA space within the fence signalling path. Reviewed-by: Boris Brezillon Reviewed-by: Thomas Hellström Suggested-by: Matthew Brost Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/drm_gpuvm.c | 633 ++++++++++++++++++++++++++++++++++++ include/drm/drm_gpuvm.h | 250 ++++++++++++++ 2 files changed, 883 insertions(+) diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c index 2c8fdefb19f0..d289282693ec 100644 --- a/drivers/gpu/drm/drm_gpuvm.c +++ b/drivers/gpu/drm/drm_gpuvm.c @@ -82,6 +82,21 @@ * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this * particular combination. If not existent a new instance is created and linked * to the &drm_gem_object. + * + * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used + * as entry for the &drm_gpuvm's lists of external and evicted objects. 
Those + * lists are maintained in order to accelerate locking of dma-resv locks and + * validation of evicted objects bound in a &drm_gpuvm. For instance, all + * &drm_gem_object's &dma_resv of a given &drm_gpuvm can be locked by calling + * drm_gpuvm_exec_lock(). Once locked drivers can call drm_gpuvm_validate() in + * order to validate all evicted &drm_gem_objects. It is also possible to lock + * additional &drm_gem_objects by providing the corresponding parameters to + * drm_gpuvm_exec_lock() as well as open code the &drm_exec loop while making + * use of helper functions such as drm_gpuvm_prepare_range() or + * drm_gpuvm_prepare_objects(). + * + * Every bound &drm_gem_object is treated as external object when its &dma_resv + * structure is different than the &drm_gpuvm's common &dma_resv structure. */ /** @@ -429,6 +444,20 @@ * Subsequent calls to drm_gpuvm_bo_obtain() for the same &drm_gpuvm and * &drm_gem_object must be able to observe previous creations and destructions * of &drm_gpuvm_bos in order to keep instances unique. + * + * The &drm_gpuvm's lists for keeping track of external and evicted objects are + * protected against concurrent insertion / removal and iteration internally. + * + * However, drivers still need ensure to protect concurrent calls to functions + * iterating those lists, namely drm_gpuvm_prepare_objects() and + * drm_gpuvm_validate(). + * + * Alternatively, drivers can set the &DRM_GPUVM_RESV_PROTECTED flag to indicate + * that the corresponding &dma_resv locks are held in order to protect the + * lists. If &DRM_GPUVM_RESV_PROTECTED is set, internal locking is disabled and + * the corresponding lockdep checks are enabled. This is an optimization for + * drivers which are capable of taking the corresponding &dma_resv locks and + * hence do not require internal locking. */ /** @@ -641,6 +670,201 @@ * } */ +/** + * get_next_vm_bo_from_list() - get the next vm_bo element + * @__gpuvm: the &drm_gpuvm + * @__list_name: the name of the list we're iterating on + * @__local_list: a pointer to the local list used to store already iterated items + * @__prev_vm_bo: the previous element we got from get_next_vm_bo_from_list() + * + * This helper is here to provide lockless list iteration. Lockless as in, the + * iterator releases the lock immediately after picking the first element from + * the list, so list insertion deletion can happen concurrently. + * + * Elements popped from the original list are kept in a local list, so removal + * and is_empty checks can still happen while we're iterating the list. 
+ */ +#define get_next_vm_bo_from_list(__gpuvm, __list_name, __local_list, __prev_vm_bo) \ + ({ \ + struct drm_gpuvm_bo *__vm_bo = NULL; \ + \ + drm_gpuvm_bo_put(__prev_vm_bo); \ + \ + spin_lock(&(__gpuvm)->__list_name.lock); \ + if (!(__gpuvm)->__list_name.local_list) \ + (__gpuvm)->__list_name.local_list = __local_list; \ + else \ + drm_WARN_ON((__gpuvm)->drm, \ + (__gpuvm)->__list_name.local_list != __local_list); \ + \ + while (!list_empty(&(__gpuvm)->__list_name.list)) { \ + __vm_bo = list_first_entry(&(__gpuvm)->__list_name.list, \ + struct drm_gpuvm_bo, \ + list.entry.__list_name); \ + if (kref_get_unless_zero(&__vm_bo->kref)) { \ + list_move_tail(&(__vm_bo)->list.entry.__list_name, \ + __local_list); \ + break; \ + } else { \ + list_del_init(&(__vm_bo)->list.entry.__list_name); \ + __vm_bo = NULL; \ + } \ + } \ + spin_unlock(&(__gpuvm)->__list_name.lock); \ + \ + __vm_bo; \ + }) + +/** + * for_each_vm_bo_in_list() - internal vm_bo list iterator + * @__gpuvm: the &drm_gpuvm + * @__list_name: the name of the list we're iterating on + * @__local_list: a pointer to the local list used to store already iterated items + * @__vm_bo: the struct drm_gpuvm_bo to assign in each iteration step + * + * This helper is here to provide lockless list iteration. Lockless as in, the + * iterator releases the lock immediately after picking the first element from the + * list, hence list insertion and deletion can happen concurrently. + * + * It is not allowed to re-assign the vm_bo pointer from inside this loop. + * + * Typical use: + * + * struct drm_gpuvm_bo *vm_bo; + * LIST_HEAD(my_local_list); + * + * ret = 0; + * for_each_vm_bo_in_list(gpuvm, <list_name>, &my_local_list, vm_bo) { + * ret = do_something_with_vm_bo(..., vm_bo); + * if (ret) + * break; + * } + * // Drop ref in case we break out of the loop. + * drm_gpuvm_bo_put(vm_bo); + * restore_vm_bo_list(gpuvm, <list_name>); + * + * + * Only used for internal list iterations, not meant to be exposed to the outside + * world. + */ +#define for_each_vm_bo_in_list(__gpuvm, __list_name, __local_list, __vm_bo) \ + for (__vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name, \ + __local_list, NULL); \ + __vm_bo; \ + __vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name, \ + __local_list, __vm_bo)) + +static void +__restore_vm_bo_list(struct drm_gpuvm *gpuvm, spinlock_t *lock, + struct list_head *list, struct list_head **local_list) +{ + /* Merge back the two lists, moving local list elements to the + * head to preserve previous ordering, in case it matters. + */ + spin_lock(lock); + if (*local_list) { + list_splice(*local_list, list); + *local_list = NULL; + } + spin_unlock(lock); +} + +/** + * restore_vm_bo_list() - move vm_bo elements back to their original list + * @__gpuvm: the &drm_gpuvm + * @__list_name: the name of the list we're iterating on + * + * When we're done iterating a vm_bo list, we should call restore_vm_bo_list() + * to restore the original state and let new iterations take place.
+ */ +#define restore_vm_bo_list(__gpuvm, __list_name) \ + __restore_vm_bo_list((__gpuvm), &(__gpuvm)->__list_name.lock, \ + &(__gpuvm)->__list_name.list, \ + &(__gpuvm)->__list_name.local_list) + +static void +cond_spin_lock(spinlock_t *lock, bool cond) +{ + if (cond) + spin_lock(lock); +} + +static void +cond_spin_unlock(spinlock_t *lock, bool cond) +{ + if (cond) + spin_unlock(lock); +} + +static void +__drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock, + struct list_head *entry, struct list_head *list) +{ + cond_spin_lock(lock, !!lock); + if (list_empty(entry)) + list_add_tail(entry, list); + cond_spin_unlock(lock, !!lock); +} + +/** + * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list + * @__vm_bo: the &drm_gpuvm_bo + * @__list_name: the name of the list to insert into + * @__lock: whether to lock with the internal spinlock + * + * Inserts the given @__vm_bo into the list specified by @__list_name. + */ +#define drm_gpuvm_bo_list_add(__vm_bo, __list_name, __lock) \ + __drm_gpuvm_bo_list_add((__vm_bo)->vm, \ + __lock ? &(__vm_bo)->vm->__list_name.lock : \ + NULL, \ + &(__vm_bo)->list.entry.__list_name, \ + &(__vm_bo)->vm->__list_name.list) + +static void +__drm_gpuvm_bo_list_del(struct drm_gpuvm *gpuvm, spinlock_t *lock, + struct list_head *entry, bool init) +{ + cond_spin_lock(lock, !!lock); + if (init) { + if (!list_empty(entry)) + list_del_init(entry); + } else { + list_del(entry); + } + cond_spin_unlock(lock, !!lock); +} + +/** + * drm_gpuvm_bo_list_del_init() - remove a vm_bo from the given list + * @__vm_bo: the &drm_gpuvm_bo + * @__list_name: the name of the list to remove from + * @__lock: whether to lock with the internal spinlock + * + * Removes the given @__vm_bo from the list specified by @__list_name. + */ +#define drm_gpuvm_bo_list_del_init(__vm_bo, __list_name, __lock) \ + __drm_gpuvm_bo_list_del((__vm_bo)->vm, \ + __lock ? &(__vm_bo)->vm->__list_name.lock : \ + NULL, \ + &(__vm_bo)->list.entry.__list_name, \ + true) + +/** + * drm_gpuvm_bo_list_del() - remove a vm_bo from the given list + * @__vm_bo: the &drm_gpuvm_bo + * @__list_name: the name of the list to remove from + * @__lock: whether to lock with the internal spinlock + * + * Removes the given @__vm_bo from the list specified by @__list_name. + */ +#define drm_gpuvm_bo_list_del(__vm_bo, __list_name, __lock) \ + __drm_gpuvm_bo_list_del((__vm_bo)->vm, \ + __lock ? &(__vm_bo)->vm->__list_name.lock : \ + NULL, \ + &(__vm_bo)->list.entry.__list_name, \ + false) + #define to_drm_gpuva(__node) container_of((__node), struct drm_gpuva, rb.node) #define GPUVA_START(node) ((node)->va.addr) @@ -781,6 +1005,12 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, gpuvm->rb.tree = RB_ROOT_CACHED; INIT_LIST_HEAD(&gpuvm->rb.list); + INIT_LIST_HEAD(&gpuvm->extobj.list); + spin_lock_init(&gpuvm->extobj.lock); + + INIT_LIST_HEAD(&gpuvm->evict.list); + spin_lock_init(&gpuvm->evict.lock); + kref_init(&gpuvm->kref); gpuvm->name = name ?
name : "unknown"; @@ -818,6 +1048,11 @@ drm_gpuvm_fini(struct drm_gpuvm *gpuvm) drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), "GPUVA tree is not empty, potentially leaking memory.\n"); + drm_WARN(gpuvm->drm, !list_empty(&gpuvm->extobj.list), + "Extobj list should be empty.\n"); + drm_WARN(gpuvm->drm, !list_empty(&gpuvm->evict.list), + "Evict list should be empty.\n"); + drm_gem_object_put(gpuvm->r_obj); } @@ -848,6 +1083,343 @@ drm_gpuvm_put(struct drm_gpuvm *gpuvm) } EXPORT_SYMBOL_GPL(drm_gpuvm_put); +static int +__drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences) +{ + struct drm_gpuvm_bo *vm_bo; + LIST_HEAD(extobjs); + int ret = 0; + + for_each_vm_bo_in_list(gpuvm, extobj, &extobjs, vm_bo) { + ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences); + if (ret) + break; + } + /* Drop ref in case we break out of the loop. */ + drm_gpuvm_bo_put(vm_bo); + restore_vm_bo_list(gpuvm, extobj); + + return ret; +} + +static int +drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences) +{ + struct drm_gpuvm_bo *vm_bo; + int ret = 0; + + drm_gpuvm_resv_assert_held(gpuvm); + list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) { + ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences); + if (ret) + break; + + if (vm_bo->evicted) + drm_gpuvm_bo_list_add(vm_bo, evict, false); + } + + return ret; +} + +/** + * drm_gpuvm_prepare_objects() - prepare all associated BOs + * @gpuvm: the &drm_gpuvm + * @exec: the &drm_exec locking context + * @num_fences: the number of &dma_fences to reserve + * + * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given + * &drm_gpuvm contains mappings of. + * + * Using this function directly, it is the driver's responsibility to call + * drm_exec_init() and drm_exec_fini() accordingly. + * + * Note: This function is safe against concurrent insertion and removal of + * external objects; however, it is not safe against concurrent usage itself. + * + * Drivers need to protect this case either with an outer VM lock or by + * calling drm_gpuvm_prepare_vm() before this function within the + * drm_exec_until_all_locked() loop, such that the GPUVM's dma-resv lock ensures + * mutual exclusion. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences) +{ + if (drm_gpuvm_resv_protected(gpuvm)) + return drm_gpuvm_prepare_objects_locked(gpuvm, exec, + num_fences); + else + return __drm_gpuvm_prepare_objects(gpuvm, exec, num_fences); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_objects); + +/** + * drm_gpuvm_prepare_range() - prepare all BOs mapped within a given range + * @gpuvm: the &drm_gpuvm + * @exec: the &drm_exec locking context + * @addr: the start address within the VA space + * @range: the range to iterate within the VA space + * @num_fences: the number of &dma_fences to reserve + * + * Calls drm_exec_prepare_obj() for all &drm_gem_objects mapped between @addr + * and @addr + @range. + * + * Returns: 0 on success, negative error code on failure.
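+ *
+ * A minimal sketch of open coding the surrounding &drm_exec loop, assuming a
+ * caller-provided exec context and fence count (this mirrors what
+ * drm_gpuvm_exec_lock_range() does internally):
+ *
+ *	drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
+ *	drm_exec_until_all_locked(exec) {
+ *		ret = drm_gpuvm_prepare_range(gpuvm, exec, addr, range,
+ *					      num_fences);
+ *		drm_exec_retry_on_contention(exec);
+ *		if (ret)
+ *			goto err;
+ *	}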
+ */ +int +drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec, + u64 addr, u64 range, unsigned int num_fences) +{ + struct drm_gpuva *va; + u64 end = addr + range; + int ret; + + drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) { + struct drm_gem_object *obj = va->gem.obj; + + ret = drm_exec_prepare_obj(exec, obj, num_fences); + if (ret) + return ret; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range); + +/** + * drm_gpuvm_exec_lock() - lock all dma-resv of all associated BOs + * @vm_exec: the &drm_gpuvm_exec wrapper + * + * Acquires all dma-resv locks of all &drm_gem_objects the given + * &drm_gpuvm contains mappings of. + * + * Additionally, when calling this function with struct drm_gpuvm_exec::extra + * set, the driver receives the given @fn callback to lock additional + * dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers + * would call drm_exec_prepare_obj() from within this callback. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec) +{ + struct drm_gpuvm *gpuvm = vm_exec->vm; + struct drm_exec *exec = &vm_exec->exec; + unsigned int num_fences = vm_exec->num_fences; + int ret; + + drm_exec_init(exec, vm_exec->flags); + + drm_exec_until_all_locked(exec) { + ret = drm_gpuvm_prepare_vm(gpuvm, exec, num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + + ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + + if (vm_exec->extra.fn) { + ret = vm_exec->extra.fn(vm_exec); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + } + } + + return 0; + +err: + drm_exec_fini(exec); + return ret; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock); + +static int +fn_lock_array(struct drm_gpuvm_exec *vm_exec) +{ + struct { + struct drm_gem_object **objs; + unsigned int num_objs; + } *args = vm_exec->extra.priv; + + return drm_exec_prepare_array(&vm_exec->exec, args->objs, + args->num_objs, vm_exec->num_fences); +} + +/** + * drm_gpuvm_exec_lock_array() - lock all dma-resv of all associated BOs + * @vm_exec: the &drm_gpuvm_exec wrapper + * @objs: additional &drm_gem_objects to lock + * @num_objs: the number of additional &drm_gem_objects to lock + * + * Acquires all dma-resv locks of all &drm_gem_objects the given &drm_gpuvm + * contains mappings of, plus the ones given through @objs. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec, + struct drm_gem_object **objs, + unsigned int num_objs) +{ + struct { + struct drm_gem_object **objs; + unsigned int num_objs; + } args; + + args.objs = objs; + args.num_objs = num_objs; + + vm_exec->extra.fn = fn_lock_array; + vm_exec->extra.priv = &args; + + return drm_gpuvm_exec_lock(vm_exec); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock_array); + +/** + * drm_gpuvm_exec_lock_range() - prepare all BOs mapped within a given range + * @vm_exec: the &drm_gpuvm_exec wrapper + * @addr: the start address within the VA space + * @range: the range to iterate within the VA space + * + * Acquires all dma-resv locks of all &drm_gem_objects mapped between @addr and + * @addr + @range. + * + * Returns: 0 on success, negative error code on failure.
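+ *
+ * A minimal sketch, assuming the &drm_gpuvm_exec wrapper is set up on the
+ * stack by the caller:
+ *
+ *	struct drm_gpuvm_exec vm_exec = {
+ *		.vm = gpuvm,
+ *		.flags = DRM_EXEC_INTERRUPTIBLE_WAIT,
+ *		.num_fences = 1,
+ *	};
+ *
+ *	ret = drm_gpuvm_exec_lock_range(&vm_exec, addr, range);
+ *	if (ret)
+ *		return ret;
+ *
+ *	...
+ *
+ *	drm_gpuvm_exec_unlock(&vm_exec);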
+ */ +int +drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec, + u64 addr, u64 range) +{ + struct drm_gpuvm *gpuvm = vm_exec->vm; + struct drm_exec *exec = &vm_exec->exec; + int ret; + + drm_exec_init(exec, vm_exec->flags); + + drm_exec_until_all_locked(exec) { + ret = drm_gpuvm_prepare_range(gpuvm, exec, addr, range, + vm_exec->num_fences); + drm_exec_retry_on_contention(exec); + if (ret) + goto err; + } + + return ret; + +err: + drm_exec_fini(exec); + return ret; +} +EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock_range); + +static int +__drm_gpuvm_validate(struct drm_gpuvm *gpuvm, struct drm_exec *exec) +{ + const struct drm_gpuvm_ops *ops = gpuvm->ops; + struct drm_gpuvm_bo *vm_bo; + LIST_HEAD(evict); + int ret = 0; + + for_each_vm_bo_in_list(gpuvm, evict, &evict, vm_bo) { + ret = ops->vm_bo_validate(vm_bo, exec); + if (ret) + break; + } + /* Drop ref in case we break out of the loop. */ + drm_gpuvm_bo_put(vm_bo); + restore_vm_bo_list(gpuvm, evict); + + return ret; +} + +static int +drm_gpuvm_validate_locked(struct drm_gpuvm *gpuvm, struct drm_exec *exec) +{ + const struct drm_gpuvm_ops *ops = gpuvm->ops; + struct drm_gpuvm_bo *vm_bo, *next; + int ret = 0; + + drm_gpuvm_resv_assert_held(gpuvm); + + list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list, + list.entry.evict) { + ret = ops->vm_bo_validate(vm_bo, exec); + if (ret) + break; + + dma_resv_assert_held(vm_bo->obj->resv); + if (!vm_bo->evicted) + drm_gpuvm_bo_list_del_init(vm_bo, evict, false); + } + + return ret; +} + +/** + * drm_gpuvm_validate() - validate all BOs marked as evicted + * @gpuvm: the &drm_gpuvm to validate evicted BOs + * @exec: the &drm_exec instance used for locking the GPUVM + * + * Calls the &drm_gpuvm_ops::vm_bo_validate callback for all evicted buffer + * objects being mapped in the given &drm_gpuvm. + * + * Returns: 0 on success, negative error code on failure. + */ +int +drm_gpuvm_validate(struct drm_gpuvm *gpuvm, struct drm_exec *exec) +{ + const struct drm_gpuvm_ops *ops = gpuvm->ops; + + if (unlikely(!ops || !ops->vm_bo_validate)) + return -EOPNOTSUPP; + + if (drm_gpuvm_resv_protected(gpuvm)) + return drm_gpuvm_validate_locked(gpuvm, exec); + else + return __drm_gpuvm_validate(gpuvm, exec); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_validate); + +/** + * drm_gpuvm_resv_add_fence - add fence to private and all extobj + * dma-resv + * @gpuvm: the &drm_gpuvm to add a fence to + * @exec: the &drm_exec locking context + * @fence: fence to add + * @private_usage: private dma-resv usage + * @extobj_usage: extobj dma-resv usage + */ +void +drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage) +{ + struct drm_gem_object *obj; + unsigned long index; + + drm_exec_for_each_locked_object(exec, index, obj) { + dma_resv_assert_held(obj->resv); + dma_resv_add_fence(obj->resv, fence, + drm_gpuvm_is_extobj(gpuvm, obj) ? + extobj_usage : private_usage); + } +} +EXPORT_SYMBOL_GPL(drm_gpuvm_resv_add_fence); + /** * drm_gpuvm_bo_create() - create a new instance of struct drm_gpuvm_bo * @gpuvm: The &drm_gpuvm the @obj is mapped in. 
@@ -881,6 +1453,9 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm, INIT_LIST_HEAD(&vm_bo->list.gpuva); INIT_LIST_HEAD(&vm_bo->list.entry.gem); + INIT_LIST_HEAD(&vm_bo->list.entry.extobj); + INIT_LIST_HEAD(&vm_bo->list.entry.evict); + return vm_bo; } EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create); @@ -898,6 +1473,9 @@ drm_gpuvm_bo_destroy(struct kref *kref) if (!lock) drm_gpuvm_resv_assert_held(gpuvm); + drm_gpuvm_bo_list_del(vm_bo, extobj, lock); + drm_gpuvm_bo_list_del(vm_bo, evict, lock); + drm_gem_gpuva_assert_lock_held(obj); list_del(&vm_bo->list.entry.gem); @@ -1037,6 +1615,61 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo) } EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_prealloc); +/** + * drm_gpuvm_bo_extobj_add() - adds the &drm_gpuvm_bo to its &drm_gpuvm's + * extobj list + * @vm_bo: The &drm_gpuvm_bo to add to its &drm_gpuvm's extobj list. + * + * Adds the given @vm_bo to its &drm_gpuvm's extobj list if not on the list + * already, and only if the corresponding &drm_gem_object actually is an + * external object. + */ +void +drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo) +{ + struct drm_gpuvm *gpuvm = vm_bo->vm; + bool lock = !drm_gpuvm_resv_protected(gpuvm); + + if (!lock) + drm_gpuvm_resv_assert_held(gpuvm); + + if (drm_gpuvm_is_extobj(gpuvm, vm_bo->obj)) + drm_gpuvm_bo_list_add(vm_bo, extobj, lock); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_extobj_add); + +/** + * drm_gpuvm_bo_evict() - add / remove a &drm_gpuvm_bo to / from the + * &drm_gpuvm's evicted list + * @vm_bo: the &drm_gpuvm_bo to add or remove + * @evict: indicates whether the object is evicted + * + * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvm's evicted list. + */ +void +drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict) +{ + struct drm_gpuvm *gpuvm = vm_bo->vm; + struct drm_gem_object *obj = vm_bo->obj; + bool lock = !drm_gpuvm_resv_protected(gpuvm); + + dma_resv_assert_held(obj->resv); + vm_bo->evicted = evict; + + /* Can't add external objects to the evicted list directly if not using + * internal spinlocks, since in this case the evicted list is protected + * with the VM's common dma-resv lock. + */ + if (drm_gpuvm_is_extobj(gpuvm, obj) && !lock) + return; + + if (evict) + drm_gpuvm_bo_list_add(vm_bo, evict, lock); + else + drm_gpuvm_bo_list_del_init(vm_bo, evict, lock); +} +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict); + static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va) diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h index b12fb22b0e22..8ca10461d8ac 100644 --- a/include/drm/drm_gpuvm.h +++ b/include/drm/drm_gpuvm.h @@ -32,6 +32,7 @@ #include #include +#include struct drm_gpuvm; struct drm_gpuvm_bo; @@ -283,6 +284,50 @@ struct drm_gpuvm { * @r_obj: Resv GEM object; representing the GPUVM's common &dma_resv.
*/ struct drm_gem_object *r_obj; + + /** + * @extobj: structure holding the extobj list + */ + struct { + /** + * @list: &list_head storing &drm_gpuvm_bos serving as + * external object + */ + struct list_head list; + + /** + * @local_list: pointer to the local list temporarily storing + * entries from the external object list + */ + struct list_head *local_list; + + /** + * @lock: spinlock to protect the extobj list + */ + spinlock_t lock; + } extobj; + + /** + * @evict: structure holding the evict list and evict list lock + */ + struct { + /** + * @list: &list_head storing &drm_gpuvm_bos currently being + * evicted + */ + struct list_head list; + + /** + * @local_list: pointer to the local list temporarily storing + * entries from the evicted object list + */ + struct list_head *local_list; + + /** + * @lock: spinlock to protect the evict list + */ + spinlock_t lock; + } evict; }; void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, @@ -359,6 +404,22 @@ drm_gpuvm_resv_protected(struct drm_gpuvm *gpuvm) #define drm_gpuvm_resv_assert_held(gpuvm__) \ dma_resv_assert_held(drm_gpuvm_resv(gpuvm__)) +/** + * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object is an + * external object + * @gpuvm: the &drm_gpuvm to check + * @obj: the &drm_gem_object to check + * + * Returns: true if the &drm_gem_object's &dma_resv differs from the + * &drm_gpuvm's &dma_resv, false otherwise + */ +static inline bool +drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm, + struct drm_gem_object *obj) +{ + return obj && obj->resv != drm_gpuvm_resv(gpuvm); +} + static inline struct drm_gpuva * __drm_gpuva_next(struct drm_gpuva *va) { @@ -437,6 +498,144 @@ __drm_gpuva_next(struct drm_gpuva *va) #define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \ list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry) +/** + * struct drm_gpuvm_exec - &drm_gpuvm abstraction of &drm_exec + * + * This structure should be created on the stack as &drm_exec should be. + * + * Optionally, @extra can be set in order to lock additional &drm_gem_objects. + */ +struct drm_gpuvm_exec { + /** + * @exec: the &drm_exec structure + */ + struct drm_exec exec; + + /** + * @flags: the flags for the struct drm_exec + */ + uint32_t flags; + + /** + * @vm: the &drm_gpuvm whose DMA reservations to lock + */ + struct drm_gpuvm *vm; + + /** + * @num_fences: the number of fences to reserve for the &dma_resv of the + * locked &drm_gem_objects + */ + unsigned int num_fences; + + /** + * @extra: Callback and corresponding private data for the driver to + * lock arbitrary additional &drm_gem_objects. + */ + struct { + /** + * @fn: The driver callback to lock additional &drm_gem_objects. + */ + int (*fn)(struct drm_gpuvm_exec *vm_exec); + + /** + * @priv: driver private data for the @fn callback + */ + void *priv; + } extra; +}; + +/** + * drm_gpuvm_prepare_vm() - prepare the GPUVM's common dma-resv + * @gpuvm: the &drm_gpuvm + * @exec: the &drm_exec context + * @num_fences: the number of &dma_fences to reserve + * + * Calls drm_exec_prepare_obj() for the GPUVM's dummy &drm_gem_object. + * + * Using this function directly, it is the driver's responsibility to call + * drm_exec_init() and drm_exec_fini() accordingly. + * + * Returns: 0 on success, negative error code on failure.
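+ *
+ * A minimal sketch of combining this with drm_gpuvm_prepare_objects() in an
+ * open coded &drm_exec loop, such that the VM's dma-resv lock serializes the
+ * list iteration (this mirrors what drm_gpuvm_exec_lock() does internally):
+ *
+ *	drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
+ *	drm_exec_until_all_locked(exec) {
+ *		ret = drm_gpuvm_prepare_vm(gpuvm, exec, num_fences);
+ *		drm_exec_retry_on_contention(exec);
+ *		if (ret)
+ *			goto err;
+ *
+ *		ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences);
+ *		drm_exec_retry_on_contention(exec);
+ *		if (ret)
+ *			goto err;
+ *	}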
+ */ +static inline int +drm_gpuvm_prepare_vm(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences) +{ + return drm_exec_prepare_obj(exec, gpuvm->r_obj, num_fences); +} + +int drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + unsigned int num_fences); + +int drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + u64 addr, u64 range, + unsigned int num_fences); + +int drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec); + +int drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec, + struct drm_gem_object **objs, + unsigned int num_objs); + +int drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec, + u64 addr, u64 range); + +/** + * drm_gpuvm_exec_unlock() - unlock all dma-resv of all associated BOs + * @vm_exec: the &drm_gpuvm_exec wrapper + * + * Releases all dma-resv locks of all &drm_gem_objects previously acquired + * through drm_gpuvm_exec_lock() or its variants. + */ +static inline void +drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec) +{ + drm_exec_fini(&vm_exec->exec); +} + +int drm_gpuvm_validate(struct drm_gpuvm *gpuvm, struct drm_exec *exec); +void drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm, + struct drm_exec *exec, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage); + +/** + * drm_gpuvm_exec_resv_add_fence() + * @vm_exec: the &drm_gpuvm_exec wrapper + * @fence: fence to add + * @private_usage: private dma-resv usage + * @extobj_usage: extobj dma-resv usage + * + * See drm_gpuvm_resv_add_fence(). + */ +static inline void +drm_gpuvm_exec_resv_add_fence(struct drm_gpuvm_exec *vm_exec, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage) +{ + drm_gpuvm_resv_add_fence(vm_exec->vm, &vm_exec->exec, fence, + private_usage, extobj_usage); +} + +/** + * drm_gpuvm_exec_validate() + * @vm_exec: the &drm_gpuvm_exec wrapper + * + * See drm_gpuvm_validate(). + */ +static inline int +drm_gpuvm_exec_validate(struct drm_gpuvm_exec *vm_exec) +{ + return drm_gpuvm_validate(vm_exec->vm, &vm_exec->exec); +} + /** * struct drm_gpuvm_bo - structure representing a &drm_gpuvm and * &drm_gem_object combination @@ -466,6 +665,12 @@ struct drm_gpuvm_bo { */ struct drm_gem_object *obj; + /** + * @evicted: Indicates whether the &drm_gem_object is evicted; field + * protected by the &drm_gem_object's dma-resv lock. + */ + bool evicted; + /** * @kref: The reference count for this &drm_gpuvm_bo. */ @@ -493,6 +698,18 @@ struct drm_gpuvm_bo { * gpuva list. */ struct list_head gem; + + /** + * @extobj: List entry to attach to the &drm_gpuvm's + * extobj list. + */ + struct list_head extobj; + + /** + * @evict: List entry to attach to the &drm_gpuvm's evict + * list. + */ + struct list_head evict; } entry; } list; }; @@ -527,6 +744,27 @@ struct drm_gpuvm_bo * drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj); +void drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict); + +/** + * drm_gpuvm_bo_gem_evict() + * @obj: the &drm_gem_object + * @evict: indicates whether @obj is evicted + * + * See drm_gpuvm_bo_evict().
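+ *
+ * A minimal sketch of the intended use from a driver's TTM move path,
+ * assuming a hypothetical my_bo_move_notify() that is called with the GEM's
+ * gpuva lock (e.g. the object's dma-resv) held:
+ *
+ *	static void my_bo_move_notify(struct ttm_buffer_object *bo, bool evict)
+ *	{
+ *		drm_gpuvm_bo_gem_evict(&bo->base, evict);
+ *	}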
+ */ +static inline void +drm_gpuvm_bo_gem_evict(struct drm_gem_object *obj, bool evict) +{ + struct drm_gpuvm_bo *vm_bo; + + drm_gem_gpuva_assert_lock_held(obj); + drm_gem_for_each_gpuvm_bo(vm_bo, obj) + drm_gpuvm_bo_evict(vm_bo, evict); +} + +void drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo); + /** * drm_gpuvm_bo_for_each_va() - iterator to walk over a list of &drm_gpuva * @va__: &drm_gpuva structure to assign to in each iteration step @@ -901,6 +1139,18 @@ struct drm_gpuvm_ops { */ void (*vm_bo_free)(struct drm_gpuvm_bo *vm_bo); + /** + * @vm_bo_validate: called from drm_gpuvm_validate() + * + * Drivers receive this callback for every evicted &drm_gem_object being + * mapped in the corresponding &drm_gpuvm. + * + * Typically, drivers would call their driver-specific variant of + * ttm_bo_validate() from within this callback. + */ + int (*vm_bo_validate)(struct drm_gpuvm_bo *vm_bo, + struct drm_exec *exec); + /** * @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the * mapping once all previous steps were completed

From patchwork Wed Nov 1 23:31:04 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 13443132
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith@gfxstrand.net
Subject: [PATCH drm-misc-next v8 12/12] drm/nouveau: use GPUVM common infrastructure
Date: Thu, 2 Nov 2023 00:31:04 +0100
Message-ID: <20231101233113.8059-13-dakr@redhat.com>
In-Reply-To: <20231101233113.8059-1-dakr@redhat.com>
References: <20231101233113.8059-1-dakr@redhat.com>
Cc: nouveau@lists.freedesktop.org, Danilo Krummrich, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org

GPUVM provides common infrastructure to track external and evicted GEM objects as well as locking and validation helpers. Especially external and evicted object tracking is a huge improvement compared to the current brute-force approach of iterating all mappings in order to lock and validate the GPUVM's GEM objects. Hence, make use of it.
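For illustration, a minimal submit path on top of these helpers could look like the following sketch; gpuvm and fence stand in for the driver's own VM and job fence, and the nouveau changes below show the real integration:

	struct drm_gpuvm_exec vm_exec = {
		.vm = gpuvm,
		.flags = DRM_EXEC_INTERRUPTIBLE_WAIT,
		.num_fences = 1,
	};
	int ret;

	/* Lock the VM's common dma-resv and all external objects. */
	ret = drm_gpuvm_exec_lock(&vm_exec);
	if (ret)
		return ret;

	/* Validate (only) the objects currently marked as evicted. */
	ret = drm_gpuvm_exec_validate(&vm_exec);
	if (ret) {
		drm_gpuvm_exec_unlock(&vm_exec);
		return ret;
	}

	/* Run the job, then attach its done fence to all locked objects. */
	drm_gpuvm_exec_resv_add_fence(&vm_exec, fence,
				      DMA_RESV_USAGE_BOOKKEEP,
				      DMA_RESV_USAGE_BOOKKEEP);
	drm_gpuvm_exec_unlock(&vm_exec);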
Signed-off-by: Danilo Krummrich Reviewed-by: Dave Airlie --- drivers/gpu/drm/nouveau/nouveau_bo.c | 4 +- drivers/gpu/drm/nouveau/nouveau_exec.c | 57 ++++---------- drivers/gpu/drm/nouveau/nouveau_exec.h | 4 - drivers/gpu/drm/nouveau/nouveau_sched.c | 9 ++- drivers/gpu/drm/nouveau/nouveau_sched.h | 7 +- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 99 ++++++++++++++++--------- 6 files changed, 90 insertions(+), 90 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 7afad86da64b..b7dda486a7ea 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1061,17 +1061,18 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, { struct nouveau_drm *drm = nouveau_bdev(bo->bdev); struct nouveau_bo *nvbo = nouveau_bo(bo); + struct drm_gem_object *obj = &bo->base; struct ttm_resource *old_reg = bo->resource; struct nouveau_drm_tile *new_tile = NULL; int ret = 0; - if (new_reg->mem_type == TTM_PL_TT) { ret = nouveau_ttm_tt_bind(bo->bdev, bo->ttm, new_reg); if (ret) return ret; } + drm_gpuvm_bo_gem_evict(obj, evict); nouveau_bo_move_ntfy(bo, new_reg); ret = ttm_bo_wait_ctx(bo, ctx); if (ret) @@ -1136,6 +1137,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, out_ntfy: if (ret) { nouveau_bo_move_ntfy(bo, bo->resource); + drm_gpuvm_bo_gem_evict(obj, !evict); } return ret; } diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c index bf6c12f4342a..9d9835fb5970 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.c +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c @@ -1,7 +1,5 @@ // SPDX-License-Identifier: MIT -#include - #include "nouveau_drv.h" #include "nouveau_gem.h" #include "nouveau_mem.h" @@ -86,14 +84,12 @@ */ static int -nouveau_exec_job_submit(struct nouveau_job *job) +nouveau_exec_job_submit(struct nouveau_job *job, + struct drm_gpuvm_exec *vme) { struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); struct nouveau_cli *cli = job->cli; struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli); - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; int ret; /* Create a new fence, but do not emit yet. 
*/ @@ -102,52 +98,29 @@ nouveau_exec_job_submit(struct nouveau_job *job) return ret; nouveau_uvmm_lock(uvmm); - drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT | - DRM_EXEC_IGNORE_DUPLICATES); - drm_exec_until_all_locked(exec) { - struct drm_gpuva *va; - - drm_gpuvm_for_each_va(va, &uvmm->base) { - if (unlikely(va == &uvmm->base.kernel_alloc_node)) - continue; - - ret = drm_exec_prepare_obj(exec, va->gem.obj, 1); - drm_exec_retry_on_contention(exec); - if (ret) - goto err_uvmm_unlock; - } + ret = drm_gpuvm_exec_lock(vme); + if (ret) { + nouveau_uvmm_unlock(uvmm); + return ret; } nouveau_uvmm_unlock(uvmm); - drm_exec_for_each_locked_object(exec, index, obj) { - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - - ret = nouveau_bo_validate(nvbo, true, false); - if (ret) - goto err_exec_fini; + ret = drm_gpuvm_exec_validate(vme); + if (ret) { + drm_gpuvm_exec_unlock(vme); + return ret; } return 0; - -err_uvmm_unlock: - nouveau_uvmm_unlock(uvmm); -err_exec_fini: - drm_exec_fini(exec); - return ret; - } static void -nouveau_exec_job_armed_submit(struct nouveau_job *job) +nouveau_exec_job_armed_submit(struct nouveau_job *job, + struct drm_gpuvm_exec *vme) { - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; - - drm_exec_for_each_locked_object(exec, index, obj) - dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage); - - drm_exec_fini(exec); + drm_gpuvm_exec_resv_add_fence(vme, job->done_fence, + job->resv_usage, job->resv_usage); + drm_gpuvm_exec_unlock(vme); } static struct dma_fence * diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.h b/drivers/gpu/drm/nouveau/nouveau_exec.h index 778cacd90f65..b815de2428f3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.h +++ b/drivers/gpu/drm/nouveau/nouveau_exec.h @@ -3,16 +3,12 @@ #ifndef __NOUVEAU_EXEC_H__ #define __NOUVEAU_EXEC_H__ -#include - #include "nouveau_drv.h" #include "nouveau_sched.h" struct nouveau_exec_job_args { struct drm_file *file_priv; struct nouveau_sched_entity *sched_entity; - - struct drm_exec exec; struct nouveau_channel *chan; struct { diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c index 7e64b5ef90fb..0416fd6b6a40 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.c +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c @@ -263,6 +263,11 @@ nouveau_job_submit(struct nouveau_job *job) { struct nouveau_sched_entity *entity = to_nouveau_sched_entity(job->base.entity); struct dma_fence *done_fence = NULL; + struct drm_gpuvm_exec vm_exec = { + .vm = &nouveau_cli_uvmm(job->cli)->base, + .flags = DRM_EXEC_IGNORE_DUPLICATES, + .num_fences = 1, + }; int ret; ret = nouveau_job_add_deps(job); @@ -282,7 +287,7 @@ nouveau_job_submit(struct nouveau_job *job) * successfully. 
*/ if (job->ops->submit) { - ret = job->ops->submit(job); + ret = job->ops->submit(job, &vm_exec); if (ret) goto err_cleanup; } @@ -315,7 +320,7 @@ nouveau_job_submit(struct nouveau_job *job) set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &job->done_fence->flags); if (job->ops->armed_submit) - job->ops->armed_submit(job); + job->ops->armed_submit(job, &vm_exec); nouveau_job_fence_attach(job); diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.h b/drivers/gpu/drm/nouveau/nouveau_sched.h index 27ac19792597..0f87697dbc9e 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.h +++ b/drivers/gpu/drm/nouveau/nouveau_sched.h @@ -5,7 +5,7 @@ #include -#include +#include #include #include "nouveau_drv.h" @@ -54,7 +54,6 @@ struct nouveau_job { struct drm_file *file_priv; struct nouveau_cli *cli; - struct drm_exec exec; enum dma_resv_usage resv_usage; struct dma_fence *done_fence; @@ -76,8 +75,8 @@ struct nouveau_job { /* If .submit() returns without any error, it is guaranteed that * armed_submit() is called. */ - int (*submit)(struct nouveau_job *); - void (*armed_submit)(struct nouveau_job *); + int (*submit)(struct nouveau_job *, struct drm_gpuvm_exec *); + void (*armed_submit)(struct nouveau_job *, struct drm_gpuvm_exec *); struct dma_fence *(*run)(struct nouveau_job *); void (*free)(struct nouveau_job *); enum drm_gpu_sched_stat (*timeout)(struct nouveau_job *); diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index eda7bb8624f1..2bb72fff06e0 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -438,8 +438,9 @@ nouveau_uvma_region_complete(struct nouveau_uvma_region *reg) static void op_map_prepare_unwind(struct nouveau_uvma *uvma) { + struct drm_gpuva *va = &uvma->va; nouveau_uvma_gem_put(uvma); - drm_gpuva_remove(&uvma->va); + drm_gpuva_remove(va); nouveau_uvma_free(uvma); } @@ -468,6 +469,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm, break; case DRM_GPUVA_OP_REMAP: { struct drm_gpuva_op_remap *r = &op->remap; + struct drm_gpuva *va = r->unmap->va; if (r->next) op_map_prepare_unwind(new->next); @@ -475,7 +477,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm, if (r->prev) op_map_prepare_unwind(new->prev); - op_unmap_prepare_unwind(r->unmap->va); + op_unmap_prepare_unwind(va); break; } case DRM_GPUVA_OP_UNMAP: @@ -634,6 +636,7 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm, goto unwind; } } + break; } case DRM_GPUVA_OP_REMAP: { @@ -1135,12 +1138,44 @@ bind_link_gpuvas(struct bind_job_op *bop) } static int -nouveau_uvmm_bind_job_submit(struct nouveau_job *job) +bind_lock_extra(struct drm_gpuvm_exec *vme) +{ + struct nouveau_uvmm_bind_job *bind_job = vme->extra.priv; + struct drm_exec *exec = &vme->exec; + struct bind_job_op *op; + int ret; + + list_for_each_op(op, &bind_job->ops) { + struct drm_gpuva_op *va_op; + + if (IS_ERR_OR_NULL(op->ops)) + continue; + + drm_gpuva_for_each_op(va_op, op->ops) { + struct drm_gem_object *obj = op_gem_obj(va_op); + + if (unlikely(!obj)) + continue; + + if (va_op->op != DRM_GPUVA_OP_UNMAP) + continue; + + ret = drm_exec_prepare_obj(exec, obj, vme->num_fences); + if (ret) + return ret; + } + } + + return 0; +} + +static int +nouveau_uvmm_bind_job_submit(struct nouveau_job *job, + struct drm_gpuvm_exec *vme) { struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job); struct nouveau_sched_entity *entity = job->entity; - struct drm_exec *exec = &job->exec; struct bind_job_op *op; int ret; @@ 
-1157,6 +1192,8 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) dma_resv_unlock(obj->resv); if (IS_ERR(op->vm_bo)) return PTR_ERR(op->vm_bo); + + drm_gpuvm_bo_extobj_add(op->vm_bo); } ret = bind_validate_op(job, op); @@ -1179,6 +1216,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) * unwind all GPU VA space changes on failure. */ nouveau_uvmm_lock(uvmm); + list_for_each_op(op, &bind_job->ops) { switch (op->op) { case OP_MAP_SPARSE: @@ -1290,30 +1328,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) } } - drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT | - DRM_EXEC_IGNORE_DUPLICATES); - drm_exec_until_all_locked(exec) { - list_for_each_op(op, &bind_job->ops) { - struct drm_gpuva_op *va_op; + vme->extra.fn = bind_lock_extra; + vme->extra.priv = bind_job; - if (IS_ERR_OR_NULL(op->ops)) - continue; - - drm_gpuva_for_each_op(va_op, op->ops) { - struct drm_gem_object *obj = op_gem_obj(va_op); - - if (unlikely(!obj)) - continue; - - ret = drm_exec_prepare_obj(exec, obj, 1); - drm_exec_retry_on_contention(exec); - if (ret) { - op = list_last_op(&bind_job->ops); - goto unwind; - } - } - } - } + ret = drm_gpuvm_exec_lock(vme); + if (ret) + goto unwind_continue; list_for_each_op(op, &bind_job->ops) { struct drm_gpuva_op *va_op; @@ -1413,21 +1433,17 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job) } nouveau_uvmm_unlock(uvmm); - drm_exec_fini(exec); + drm_gpuvm_exec_unlock(vme); return ret; } static void -nouveau_uvmm_bind_job_armed_submit(struct nouveau_job *job) +nouveau_uvmm_bind_job_armed_submit(struct nouveau_job *job, + struct drm_gpuvm_exec *vme) { - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; - - drm_exec_for_each_locked_object(exec, index, obj) - dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage); - - drm_exec_fini(exec); + drm_gpuvm_exec_resv_add_fence(vme, job->done_fence, + job->resv_usage, job->resv_usage); + drm_gpuvm_exec_unlock(vme); } static struct dma_fence * @@ -1815,8 +1831,17 @@ nouveau_uvmm_free(struct drm_gpuvm *gpuvm) kfree(uvmm); } +static int +nouveau_uvmm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec) +{ + struct nouveau_bo *nvbo = nouveau_gem_object(vm_bo->obj); + + return nouveau_bo_validate(nvbo, true, false); +} + static const struct drm_gpuvm_ops gpuvm_ops = { .vm_free = nouveau_uvmm_free, + .vm_bo_validate = nouveau_uvmm_bo_validate, }; int