From patchwork Thu Nov 14 15:30:14 2024
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13875265
bits=256/256); Thu, 14 Nov 2024 07:30:23 -0800 (PST) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com, zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com, virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org Subject: [PATCH 1/7] drm/radeon: switch over to drm_exec v2 Date: Thu, 14 Nov 2024 16:30:14 +0100 Message-Id: <20241114153020.6209-2-christian.koenig@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com> References: <20241114153020.6209-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Just a straightforward conversion without any optimization. Smoke tested on actual hardware. v2: rebase Signed-off-by: Christian König Acked-by: Alex Deucher --- drivers/gpu/drm/radeon/Kconfig | 1 + drivers/gpu/drm/radeon/radeon.h | 7 ++-- drivers/gpu/drm/radeon/radeon_cs.c | 45 +++++++++++++------------- drivers/gpu/drm/radeon/radeon_gem.c | 39 ++++++++++++---------- drivers/gpu/drm/radeon/radeon_object.c | 25 +++++++------- drivers/gpu/drm/radeon/radeon_object.h | 2 +- drivers/gpu/drm/radeon/radeon_vm.c | 10 +++--- 7 files changed, 66 insertions(+), 63 deletions(-) diff --git a/drivers/gpu/drm/radeon/Kconfig b/drivers/gpu/drm/radeon/Kconfig index 9c6c74a75778..f51bace9555d 100644 --- a/drivers/gpu/drm/radeon/Kconfig +++ b/drivers/gpu/drm/radeon/Kconfig @@ -13,6 +13,7 @@ config DRM_RADEON select DRM_TTM select DRM_TTM_HELPER select FB_IOMEM_HELPERS if DRM_FBDEV_EMULATION + select DRM_EXEC select SND_HDA_COMPONENT if SND_HDA_CORE select POWER_SUPPLY select HWMON diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index fd8a4513025f..8605c074d9f7 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -75,8 +75,8 @@ #include #include -#include +#include #include #include #include @@ -457,7 +457,8 @@ struct radeon_mman { struct radeon_bo_list { struct radeon_bo *robj; - struct ttm_validate_buffer tv; + struct list_head list; + bool shared; uint64_t gpu_offset; unsigned preferred_domains; unsigned allowed_domains; @@ -1030,6 +1031,7 @@ struct radeon_cs_parser { struct radeon_bo_list *vm_bos; struct list_head validated; unsigned dma_reloc_idx; + struct drm_exec exec; /* indices of various chunks */ struct radeon_cs_chunk *chunk_ib; struct radeon_cs_chunk *chunk_relocs; @@ -1043,7 +1045,6 @@ struct radeon_cs_parser { u32 cs_flags; u32 ring; s32 priority; - struct ww_acquire_ctx ticket; }; static inline u32 radeon_get_ib_value(struct radeon_cs_parser *p, int idx) diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c index a6700d7278bf..64b26bfeafc9 100644 --- a/drivers/gpu/drm/radeon/radeon_cs.c +++ b/drivers/gpu/drm/radeon/radeon_cs.c @@ -182,11 +182,8 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p) } } - p->relocs[i].tv.bo = &p->relocs[i].robj->tbo; - p->relocs[i].tv.num_shared = !r->write_domain; - - radeon_cs_buckets_add(&buckets, &p->relocs[i].tv.head, - priority); + p->relocs[i].shared = !r->write_domain; + radeon_cs_buckets_add(&buckets, &p->relocs[i].list, priority); } 
radeon_cs_buckets_get_list(&buckets, &p->validated); @@ -197,7 +194,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p) if (need_mmap_lock) mmap_read_lock(current->mm); - r = radeon_bo_list_validate(p->rdev, &p->ticket, &p->validated, p->ring); + r = radeon_bo_list_validate(p->rdev, &p->exec, &p->validated, p->ring); if (need_mmap_lock) mmap_read_unlock(current->mm); @@ -253,12 +250,11 @@ static int radeon_cs_sync_rings(struct radeon_cs_parser *p) struct radeon_bo_list *reloc; int r; - list_for_each_entry(reloc, &p->validated, tv.head) { + list_for_each_entry(reloc, &p->validated, list) { struct dma_resv *resv; resv = reloc->robj->tbo.base.resv; - r = radeon_sync_resv(p->rdev, &p->ib.sync, resv, - reloc->tv.num_shared); + r = radeon_sync_resv(p->rdev, &p->ib.sync, resv, reloc->shared); if (r) return r; } @@ -276,6 +272,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) s32 priority = 0; INIT_LIST_HEAD(&p->validated); + drm_exec_init(&p->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); if (!cs->num_chunks) { return 0; @@ -397,8 +394,8 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) static int cmp_size_smaller_first(void *priv, const struct list_head *a, const struct list_head *b) { - struct radeon_bo_list *la = list_entry(a, struct radeon_bo_list, tv.head); - struct radeon_bo_list *lb = list_entry(b, struct radeon_bo_list, tv.head); + struct radeon_bo_list *la = list_entry(a, struct radeon_bo_list, list); + struct radeon_bo_list *lb = list_entry(b, struct radeon_bo_list, list); /* Sort A before B if A is smaller. */ if (la->robj->tbo.base.size > lb->robj->tbo.base.size) @@ -417,11 +414,13 @@ static int cmp_size_smaller_first(void *priv, const struct list_head *a, * If error is set than unvalidate buffer, otherwise just free memory * used by parsing context. **/ -static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bool backoff) +static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error) { unsigned i; if (!error) { + struct radeon_bo_list *reloc; + /* Sort the buffer list from the smallest to largest buffer, * which affects the order of buffers in the LRU list. * This assures that the smallest buffers are added first @@ -433,15 +432,17 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo * per frame under memory pressure. */ list_sort(NULL, &parser->validated, cmp_size_smaller_first); - - ttm_eu_fence_buffer_objects(&parser->ticket, - &parser->validated, - &parser->ib.fence->base); - } else if (backoff) { - ttm_eu_backoff_reservation(&parser->ticket, - &parser->validated); + list_for_each_entry(reloc, &parser->validated, list) { + dma_resv_add_fence(reloc->robj->tbo.base.resv, + &parser->ib.fence->base, + reloc->shared ? 
+ DMA_RESV_USAGE_READ : + DMA_RESV_USAGE_WRITE); + } } + drm_exec_fini(&parser->exec); + if (parser->relocs != NULL) { for (i = 0; i < parser->nrelocs; i++) { struct radeon_bo *bo = parser->relocs[i].robj; @@ -693,7 +694,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) r = radeon_cs_parser_init(&parser, data); if (r) { DRM_ERROR("Failed to initialize parser !\n"); - radeon_cs_parser_fini(&parser, r, false); + radeon_cs_parser_fini(&parser, r); up_read(&rdev->exclusive_lock); r = radeon_cs_handle_lockup(rdev, r); return r; @@ -707,7 +708,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) } if (r) { - radeon_cs_parser_fini(&parser, r, false); + radeon_cs_parser_fini(&parser, r); up_read(&rdev->exclusive_lock); r = radeon_cs_handle_lockup(rdev, r); return r; @@ -724,7 +725,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) goto out; } out: - radeon_cs_parser_fini(&parser, r, true); + radeon_cs_parser_fini(&parser, r); up_read(&rdev->exclusive_lock); r = radeon_cs_handle_lockup(rdev, r); return r; diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index bf2d4b16dc2a..f86773f3db20 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -605,33 +605,40 @@ int radeon_gem_get_tiling_ioctl(struct drm_device *dev, void *data, static void radeon_gem_va_update_vm(struct radeon_device *rdev, struct radeon_bo_va *bo_va) { - struct ttm_validate_buffer tv, *entry; - struct radeon_bo_list *vm_bos; - struct ww_acquire_ctx ticket; + struct radeon_bo_list *vm_bos, *entry; struct list_head list; + struct drm_exec exec; unsigned domain; int r; INIT_LIST_HEAD(&list); - tv.bo = &bo_va->bo->tbo; - tv.num_shared = 1; - list_add(&tv.head, &list); - vm_bos = radeon_vm_get_bos(rdev, bo_va->vm, &list); if (!vm_bos) return; - r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL); - if (r) - goto error_free; + drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); + drm_exec_until_all_locked(&exec) { + list_for_each_entry(entry, &list, list) { + r = drm_exec_prepare_obj(&exec, &entry->robj->tbo.base, + 1); + drm_exec_retry_on_contention(&exec); + if (unlikely(r)) + goto error_cleanup; + } - list_for_each_entry(entry, &list, head) { - domain = radeon_mem_type_to_domain(entry->bo->resource->mem_type); + r = drm_exec_prepare_obj(&exec, &bo_va->bo->tbo.base, 1); + drm_exec_retry_on_contention(&exec); + if (unlikely(r)) + goto error_cleanup; + } + + list_for_each_entry(entry, &list, list) { + domain = radeon_mem_type_to_domain(entry->robj->tbo.resource->mem_type); /* if anything is swapped out don't swap it in here, just abort and wait for the next CS */ if (domain == RADEON_GEM_DOMAIN_CPU) - goto error_unreserve; + goto error_cleanup; } mutex_lock(&bo_va->vm->mutex); @@ -645,10 +652,8 @@ static void radeon_gem_va_update_vm(struct radeon_device *rdev, error_unlock: mutex_unlock(&bo_va->vm->mutex); -error_unreserve: - ttm_eu_backoff_reservation(&ticket, &list); - -error_free: +error_cleanup: + drm_exec_fini(&exec); kvfree(vm_bos); if (r && r != -ERESTARTSYS) diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 7672404fdb29..a0fc0801abb0 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -464,23 +464,26 @@ static u64 radeon_bo_get_threshold_for_moves(struct radeon_device *rdev) } int radeon_bo_list_validate(struct radeon_device *rdev, - struct ww_acquire_ctx 
*ticket, + struct drm_exec *exec, struct list_head *head, int ring) { struct ttm_operation_ctx ctx = { true, false }; struct radeon_bo_list *lobj; - struct list_head duplicates; - int r; u64 bytes_moved = 0, initial_bytes_moved; u64 bytes_moved_threshold = radeon_bo_get_threshold_for_moves(rdev); + int r; - INIT_LIST_HEAD(&duplicates); - r = ttm_eu_reserve_buffers(ticket, head, true, &duplicates); - if (unlikely(r != 0)) { - return r; + drm_exec_until_all_locked(exec) { + list_for_each_entry(lobj, head, list) { + r = drm_exec_prepare_obj(exec, &lobj->robj->tbo.base, + 1); + drm_exec_retry_on_contention(exec); + if (unlikely(r && r != -EALREADY)) + return r; + } } - list_for_each_entry(lobj, head, tv.head) { + list_for_each_entry(lobj, head, list) { struct radeon_bo *bo = lobj->robj; if (!bo->tbo.pin_count) { u32 domain = lobj->preferred_domains; @@ -519,7 +522,6 @@ int radeon_bo_list_validate(struct radeon_device *rdev, domain = lobj->allowed_domains; goto retry; } - ttm_eu_backoff_reservation(ticket, head); return r; } } @@ -527,11 +529,6 @@ int radeon_bo_list_validate(struct radeon_device *rdev, lobj->tiling_flags = bo->tiling_flags; } - list_for_each_entry(lobj, &duplicates, tv.head) { - lobj->gpu_offset = radeon_bo_gpu_offset(lobj->robj); - lobj->tiling_flags = lobj->robj->tiling_flags; - } - return 0; } diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h index 39cc87a59a9a..d7bbb52db546 100644 --- a/drivers/gpu/drm/radeon/radeon_object.h +++ b/drivers/gpu/drm/radeon/radeon_object.h @@ -152,7 +152,7 @@ extern void radeon_bo_force_delete(struct radeon_device *rdev); extern int radeon_bo_init(struct radeon_device *rdev); extern void radeon_bo_fini(struct radeon_device *rdev); extern int radeon_bo_list_validate(struct radeon_device *rdev, - struct ww_acquire_ctx *ticket, + struct drm_exec *exec, struct list_head *head, int ring); extern int radeon_bo_set_tiling_flags(struct radeon_bo *bo, u32 tiling_flags, u32 pitch); diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c index c38b4d5d6a14..21a5340aefdf 100644 --- a/drivers/gpu/drm/radeon/radeon_vm.c +++ b/drivers/gpu/drm/radeon/radeon_vm.c @@ -142,10 +142,9 @@ struct radeon_bo_list *radeon_vm_get_bos(struct radeon_device *rdev, list[0].robj = vm->page_directory; list[0].preferred_domains = RADEON_GEM_DOMAIN_VRAM; list[0].allowed_domains = RADEON_GEM_DOMAIN_VRAM; - list[0].tv.bo = &vm->page_directory->tbo; - list[0].tv.num_shared = 1; + list[0].shared = true; list[0].tiling_flags = 0; - list_add(&list[0].tv.head, head); + list_add(&list[0].list, head); for (i = 0, idx = 1; i <= vm->max_pde_used; i++) { if (!vm->page_tables[i].bo) @@ -154,10 +153,9 @@ struct radeon_bo_list *radeon_vm_get_bos(struct radeon_device *rdev, list[idx].robj = vm->page_tables[i].bo; list[idx].preferred_domains = RADEON_GEM_DOMAIN_VRAM; list[idx].allowed_domains = RADEON_GEM_DOMAIN_VRAM; - list[idx].tv.bo = &list[idx].robj->tbo; - list[idx].tv.num_shared = 1; + list[idx].shared = true; list[idx].tiling_flags = 0; - list_add(&list[idx++].tv.head, head); + list_add(&list[idx++].list, head); } return list; From patchwork Thu Nov 14 15:30:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 13875264 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org 
From: Christian König
To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com, zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com, virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
intel-xe@lists.freedesktop.org Subject: [PATCH 2/7] drm/qxl: switch to using drm_exec v2 Date: Thu, 14 Nov 2024 16:30:15 +0100 Message-Id: <20241114153020.6209-3-christian.koenig@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com> References: <20241114153020.6209-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Just a straightforward conversion without any optimization. Only compile tested for now. v2: rebase Signed-off-by: Christian König Acked-by: Alex Deucher --- drivers/gpu/drm/qxl/Kconfig | 1 + drivers/gpu/drm/qxl/qxl_drv.h | 7 ++-- drivers/gpu/drm/qxl/qxl_release.c | 68 ++++++++++++++++--------------- 3 files changed, 40 insertions(+), 36 deletions(-) diff --git a/drivers/gpu/drm/qxl/Kconfig b/drivers/gpu/drm/qxl/Kconfig index 1992df4a82d2..ebf452aa1e80 100644 --- a/drivers/gpu/drm/qxl/Kconfig +++ b/drivers/gpu/drm/qxl/Kconfig @@ -6,6 +6,7 @@ config DRM_QXL select DRM_KMS_HELPER select DRM_TTM select DRM_TTM_HELPER + select DRM_EXEC select CRC32 help QXL virtual GPU for Spice virtualization desktop integration. diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index 32069acd93f8..b5fc14c9525d 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -38,12 +38,12 @@ #include #include +#include #include #include #include #include #include -#include #include #include "qxl_dev.h" @@ -101,7 +101,8 @@ struct qxl_gem { }; struct qxl_bo_list { - struct ttm_validate_buffer tv; + struct qxl_bo *bo; + struct list_head list; }; struct qxl_crtc { @@ -150,7 +151,7 @@ struct qxl_release { struct qxl_bo *release_bo; uint32_t release_offset; uint32_t surface_release_id; - struct ww_acquire_ctx ticket; + struct drm_exec exec; struct list_head bos; }; diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c index 368d26da0d6a..05204a6a3fa8 100644 --- a/drivers/gpu/drm/qxl/qxl_release.c +++ b/drivers/gpu/drm/qxl/qxl_release.c @@ -121,13 +121,11 @@ qxl_release_free_list(struct qxl_release *release) { while (!list_empty(&release->bos)) { struct qxl_bo_list *entry; - struct qxl_bo *bo; entry = container_of(release->bos.next, - struct qxl_bo_list, tv.head); - bo = to_qxl_bo(entry->tv.bo); - qxl_bo_unref(&bo); - list_del(&entry->tv.head); + struct qxl_bo_list, list); + qxl_bo_unref(&entry->bo); + list_del(&entry->list); kfree(entry); } release->release_bo = NULL; @@ -172,8 +170,8 @@ int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo) { struct qxl_bo_list *entry; - list_for_each_entry(entry, &release->bos, tv.head) { - if (entry->tv.bo == &bo->tbo) + list_for_each_entry(entry, &release->bos, list) { + if (entry->bo == bo) return 0; } @@ -182,9 +180,8 @@ int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo) return -ENOMEM; qxl_bo_ref(bo); - entry->tv.bo = &bo->tbo; - entry->tv.num_shared = 0; - list_add_tail(&entry->tv.head, &release->bos); + entry->bo = bo; + list_add_tail(&entry->list, &release->bos); return 0; } @@ -221,21 +218,28 @@ int qxl_release_reserve_list(struct qxl_release *release, bool no_intr) if (list_is_singular(&release->bos)) return 0; - ret = ttm_eu_reserve_buffers(&release->ticket, &release->bos, - !no_intr, NULL); - if (ret) - return ret; - - 
list_for_each_entry(entry, &release->bos, tv.head) { - struct qxl_bo *bo = to_qxl_bo(entry->tv.bo); - - ret = qxl_release_validate_bo(bo); - if (ret) { - ttm_eu_backoff_reservation(&release->ticket, &release->bos); - return ret; + drm_exec_init(&release->exec, no_intr ? 0 : + DRM_EXEC_INTERRUPTIBLE_WAIT, 0); + drm_exec_until_all_locked(&release->exec) { + list_for_each_entry(entry, &release->bos, list) { + ret = drm_exec_prepare_obj(&release->exec, + &entry->bo->tbo.base, + 1); + drm_exec_retry_on_contention(&release->exec); + if (ret) + goto error; } } + + list_for_each_entry(entry, &release->bos, list) { + ret = qxl_release_validate_bo(entry->bo); + if (ret) + goto error; + } return 0; +error: + drm_exec_fini(&release->exec); + return ret; } void qxl_release_backoff_reserve_list(struct qxl_release *release) @@ -245,7 +249,7 @@ void qxl_release_backoff_reserve_list(struct qxl_release *release) if (list_is_singular(&release->bos)) return; - ttm_eu_backoff_reservation(&release->ticket, &release->bos); + drm_exec_fini(&release->exec); } int qxl_alloc_surface_release_reserved(struct qxl_device *qdev, @@ -404,18 +408,18 @@ void qxl_release_unmap(struct qxl_device *qdev, void qxl_release_fence_buffer_objects(struct qxl_release *release) { - struct ttm_buffer_object *bo; struct ttm_device *bdev; - struct ttm_validate_buffer *entry; + struct qxl_bo_list *entry; struct qxl_device *qdev; + struct qxl_bo *bo; /* if only one object on the release its the release itself since these objects are pinned no need to reserve */ if (list_is_singular(&release->bos) || list_empty(&release->bos)) return; - bo = list_first_entry(&release->bos, struct ttm_validate_buffer, head)->bo; - bdev = bo->bdev; + bo = list_first_entry(&release->bos, struct qxl_bo_list, list)->bo; + bdev = bo->tbo.bdev; qdev = container_of(bdev, struct qxl_device, mman.bdev); /* @@ -426,14 +430,12 @@ void qxl_release_fence_buffer_objects(struct qxl_release *release) release->id | 0xf0000000, release->base.seqno); trace_dma_fence_emit(&release->base); - list_for_each_entry(entry, &release->bos, head) { + list_for_each_entry(entry, &release->bos, list) { bo = entry->bo; - dma_resv_add_fence(bo->base.resv, &release->base, + dma_resv_add_fence(bo->tbo.base.resv, &release->base, DMA_RESV_USAGE_READ); - ttm_bo_move_to_lru_tail_unlocked(bo); - dma_resv_unlock(bo->base.resv); + ttm_bo_move_to_lru_tail_unlocked(&bo->tbo); } - ww_acquire_fini(&release->ticket); + drm_exec_fini(&release->exec); } - From patchwork Thu Nov 14 15:30:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 13875266 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A4BA9D68B31 for ; Thu, 14 Nov 2024 15:30:32 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 92B3F10E808; Thu, 14 Nov 2024 15:30:28 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="LFmSqZ53"; dkim-atps=neutral Received: from mail-ed1-f53.google.com (mail-ed1-f53.google.com [209.85.208.53]) by gabe.freedesktop.org (Postfix) with ESMTPS id 
From: Christian König
To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com, zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com, virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 3/7] drm/vmwgfx: start to phase out ttm_exec
Date: Thu, 14 Nov 2024 16:30:16 +0100
Message-Id: <20241114153020.6209-4-christian.koenig@amd.com>
In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com>
References: <20241114153020.6209-1-christian.koenig@amd.com>

Start switching over vmwgfx to
drm_exec as well. Replacing some unnecessary complex calls with just just single BO dma_resv locking. No intentional functional change, but only compile tested for now. Signed-off-by: Christian König --- drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 49 ++++++++---------------- 1 file changed, 16 insertions(+), 33 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index a73af8a355fb..793293b59d43 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -517,7 +517,7 @@ static int vmw_resource_check_buffer(struct ww_acquire_ctx *ticket, struct vmw_resource *res, bool interruptible, - struct ttm_validate_buffer *val_buf) + struct ttm_buffer_object **bo) { struct ttm_operation_ctx ctx = { true, false }; struct list_head val_list; @@ -532,10 +532,12 @@ vmw_resource_check_buffer(struct ww_acquire_ctx *ticket, INIT_LIST_HEAD(&val_list); ttm_bo_get(&res->guest_memory_bo->tbo); - val_buf->bo = &res->guest_memory_bo->tbo; - val_buf->num_shared = 0; - list_add_tail(&val_buf->head, &val_list); - ret = ttm_eu_reserve_buffers(ticket, &val_list, interruptible, NULL); + + *bo = &res->guest_memory_bo->tbo; + if (ticket) + ww_acquire_init(ticket, &reservation_ww_class); + + ret = ttm_bo_reserve(*bo, interruptible, (ticket == NULL), ticket); if (unlikely(ret != 0)) goto out_no_reserve; @@ -555,10 +557,11 @@ vmw_resource_check_buffer(struct ww_acquire_ctx *ticket, return 0; out_no_validate: - ttm_eu_backoff_reservation(ticket, &val_list); + dma_resv_unlock((*bo)->base.resv); + if (ticket) + ww_acquire_fini(ticket); out_no_reserve: - ttm_bo_put(val_buf->bo); - val_buf->bo = NULL; + ttm_bo_put(*bo); if (guest_memory_dirty) vmw_user_bo_unref(&res->guest_memory_bo); @@ -600,29 +603,6 @@ int vmw_resource_reserve(struct vmw_resource *res, bool interruptible, return 0; } -/** - * vmw_resource_backoff_reservation - Unreserve and unreference a - * guest memory buffer - *. - * @ticket: The ww acquire ctx used for reservation. - * @val_buf: Guest memory buffer information. - */ -static void -vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket, - struct ttm_validate_buffer *val_buf) -{ - struct list_head val_list; - - if (likely(val_buf->bo == NULL)) - return; - - INIT_LIST_HEAD(&val_list); - list_add_tail(&val_buf->head, &val_list); - ttm_eu_backoff_reservation(ticket, &val_list); - ttm_bo_put(val_buf->bo); - val_buf->bo = NULL; -} - /** * vmw_resource_do_evict - Evict a resource, and transfer its data * to a backup buffer. 
@@ -642,7 +622,7 @@ static int vmw_resource_do_evict(struct ww_acquire_ctx *ticket, val_buf.bo = NULL; val_buf.num_shared = 0; - ret = vmw_resource_check_buffer(ticket, res, interruptible, &val_buf); + ret = vmw_resource_check_buffer(ticket, res, interruptible, &val_buf.bo); if (unlikely(ret != 0)) return ret; @@ -657,7 +637,10 @@ static int vmw_resource_do_evict(struct ww_acquire_ctx *ticket, res->guest_memory_dirty = true; res->res_dirty = false; out_no_unbind: - vmw_resource_backoff_reservation(ticket, &val_buf); + dma_resv_unlock(val_buf.bo->base.resv); + if (ticket) + ww_acquire_fini(ticket); + ttm_bo_put(val_buf.bo); return ret; } From patchwork Thu Nov 14 15:30:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 13875267 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D8BE7D68B38 for ; Thu, 14 Nov 2024 15:30:33 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 307D610E80E; Thu, 14 Nov 2024 15:30:30 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="SrbjcJet"; dkim-atps=neutral Received: from mail-ej1-f52.google.com (mail-ej1-f52.google.com [209.85.218.52]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3A98210E80C; Thu, 14 Nov 2024 15:30:29 +0000 (UTC) Received: by mail-ej1-f52.google.com with SMTP id a640c23a62f3a-a9acafdb745so151300166b.0; Thu, 14 Nov 2024 07:30:29 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1731598227; x=1732203027; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=5FnZ9AMjiL+5LBp0zOTics7Ga5/djcimKqH32qBiMpo=; b=SrbjcJet+bnVKyPsxO7yO6q5OdbAVfn7kpaYtD6J/fAZ6W5XVO+EsTB2s1fyAZ7RYB SmVA4qVxpGRpmEG3ugQiGjaxxF3Nlt3YFsur+gX+4gcI1EGftBpe2RlWW1TbSVcFDXp+ 7mIwcpmCQH8igvPpv+heFoho5i/qBK0brCc2BHldkzNxXrFj9Bpj2bjGLorAi9X428vA +Jn4fSRf6XSWY8cwtFggMrLVpi77ejKTaRrj9nkqk3xtyzFSseR7AV/unp/4PdX2A+4N KedCior1ubXDv8tV+gx6BUW+RBv5Pms7X0qb29FtQGZe98NXTWf740bVQRewsaJ8pjk5 OJiA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731598227; x=1732203027; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=5FnZ9AMjiL+5LBp0zOTics7Ga5/djcimKqH32qBiMpo=; b=Ti58NUJPo206JmFkYGuoZWtaTs0IpUCytKw9HwLIYR3EP983SN0VZV5fQIous0rk7w IB7G9alhVnIJ7fnKbGVjo+8GZTSnVN9tfS/3L04ongHkyRVTFgPMh9Rj5XeW2M73e6PZ 8AwzQ+WC3PvtUnCBgodpQ8puBK4Cx5XKf+/NowKkZf48aEOKgqEoD0yqZNShupkTIRfU zgXK6Rqg7PlFYEEzg0Thq0wG59NKIrUeYwcgTl4KXuRyxNxWeGziUiRtkshsIMlB+84K Yb6AZ+nyBnJIBgGgzwDzMdHVun/Puw+lZJKC66ewp/+Ouo3g1tDw0+TSTwgeEfrJYz7L FONA== X-Forwarded-Encrypted: i=1; AJvYcCWDzYllhUpNx4yHY9fJDuF0taMbbOqiGb/Hefk5M/NOCFOEPGIs76P4t1f3aWMHtECBg+ca+pU52bfD@lists.freedesktop.org, AJvYcCWYOQCqRBcJxuFDMN4hKw0LyJ8jDw7zVENP3Gukc+84tkfSpHkNPf7PqRsufu/D7cU6JaLYRprPdZQ=@lists.freedesktop.org, 
AJvYcCXws/wFmH9bOydRyLroLcyU0BxUMdEdNy2gFu2IIYaZgM9wbcjXunEVoXL+1VMernVCbbbYu+7D@lists.freedesktop.org X-Gm-Message-State: AOJu0Yx3Vmt7MnjaOWbhdY16Y8ECVc/8kfHuoQciQ/sC73cUwkg6w+ns soCWvlcwllKNXt5AdfH8slyknOf9/NT9naS/75t3tDfanLCrjASn X-Google-Smtp-Source: AGHT+IG0MhpL2bfsjO7oFhiu1DPNn8UYJ1U5jKc9Ladp8n5xy/VQpNkMetFUYhpz4NXL0lkdlF24CQ== X-Received: by 2002:a17:906:c9ce:b0:a99:facf:cfc with SMTP id a640c23a62f3a-aa20775759bmr308768466b.17.1731598227138; Thu, 14 Nov 2024 07:30:27 -0800 (PST) Received: from able.fritz.box ([2a00:e180:15c9:2500:bb23:40f5:fe29:201]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-aa20e046919sm74063266b.156.2024.11.14.07.30.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 14 Nov 2024 07:30:26 -0800 (PST) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com, zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com, virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org Subject: [PATCH 4/7] drm/vmwgfx: use the new drm_exec object Date: Thu, 14 Nov 2024 16:30:17 +0100 Message-Id: <20241114153020.6209-5-christian.koenig@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com> References: <20241114153020.6209-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Basically just switching over to the new infrastructure like we did for other drivers as well. No intentional functional change, but only compile tested. Signed-off-by: Christian König --- drivers/gpu/drm/vmwgfx/vmwgfx_validation.c | 56 +++++++++++++++++++++- drivers/gpu/drm/vmwgfx/vmwgfx_validation.h | 41 ++-------------- 2 files changed, 59 insertions(+), 38 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c index e7625b3f71e0..34436504fcdb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c @@ -817,6 +817,59 @@ int vmw_validation_preload_res(struct vmw_validation_context *ctx, return 0; } +/** + * vmw_validation_bo_reserve - Reserve buffer objects registered with a + * validation context + * @ctx: The validation context + * @intr: Perform waits interruptible + * + * Return: Zero on success, -ERESTARTSYS when interrupted, negative error + * code on failure + */ +int vmw_validation_bo_reserve(struct vmw_validation_context *ctx, bool intr) +{ + struct vmw_validation_bo_node *entry; + int ret; + + drm_exec_init(&ctx->exec, intr ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0, 0); + drm_exec_until_all_locked(&ctx->exec) { + list_for_each_entry(entry, &ctx->bo_list, base.head) { + ret = drm_exec_prepare_obj(&ctx->exec, + &entry->base.bo->base, 1); + drm_exec_retry_on_contention(&ctx->exec); + if (ret) + goto error; + } + } + return 0; + +error: + drm_exec_fini(&ctx->exec); + return ret; +} + +/** + * vmw_validation_bo_fence - Unreserve and fence buffer objects registered + * with a validation context + * @ctx: The validation context + * + * This function unreserves the buffer objects previously reserved using + * vmw_validation_bo_reserve, and fences them with a fence object. 
+ */ +void vmw_validation_bo_fence(struct vmw_validation_context *ctx, + struct vmw_fence_obj *fence) +{ + struct vmw_validation_bo_node *entry; + + list_for_each_entry(entry, &ctx->bo_list, base.head) { + dma_resv_add_fence(entry->base.bo->base.resv, &fence->base, + DMA_RESV_USAGE_READ); + } + drm_exec_fini(&ctx->exec); +} + + + /** * vmw_validation_bo_backoff - Unreserve buffer objects registered with a * validation context @@ -842,6 +895,5 @@ void vmw_validation_bo_backoff(struct vmw_validation_context *ctx) vmw_bo_dirty_release(vbo); } } - - ttm_eu_backoff_reservation(&ctx->ticket, &ctx->bo_list); + drm_exec_fini(&ctx->exec); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h index 353d837907d8..55a7d8b68d5c 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h @@ -31,8 +31,7 @@ #include #include #include - -#include +#include #define VMW_RES_DIRTY_NONE 0 #define VMW_RES_DIRTY_SET BIT(0) @@ -59,7 +58,7 @@ struct vmw_validation_context { struct list_head resource_ctx_list; struct list_head bo_list; struct list_head page_list; - struct ww_acquire_ctx ticket; + struct drm_exec exec; struct mutex *res_mutex; unsigned int merge_dups; unsigned int mem_size_left; @@ -106,39 +105,6 @@ vmw_validation_has_bos(struct vmw_validation_context *ctx) return !list_empty(&ctx->bo_list); } -/** - * vmw_validation_bo_reserve - Reserve buffer objects registered with a - * validation context - * @ctx: The validation context - * @intr: Perform waits interruptible - * - * Return: Zero on success, -ERESTARTSYS when interrupted, negative error - * code on failure - */ -static inline int -vmw_validation_bo_reserve(struct vmw_validation_context *ctx, - bool intr) -{ - return ttm_eu_reserve_buffers(&ctx->ticket, &ctx->bo_list, intr, - NULL); -} - -/** - * vmw_validation_bo_fence - Unreserve and fence buffer objects registered - * with a validation context - * @ctx: The validation context - * - * This function unreserves the buffer objects previously reserved using - * vmw_validation_bo_reserve, and fences them with a fence object. 
- */ -static inline void -vmw_validation_bo_fence(struct vmw_validation_context *ctx, - struct vmw_fence_obj *fence) -{ - ttm_eu_fence_buffer_objects(&ctx->ticket, &ctx->bo_list, - (void *) fence); -} - /** * vmw_validation_align - Align a validation memory allocation * @val: The size to be aligned @@ -185,6 +151,9 @@ int vmw_validation_preload_res(struct vmw_validation_context *ctx, unsigned int size); void vmw_validation_res_set_dirty(struct vmw_validation_context *ctx, void *val_private, u32 dirty); +int vmw_validation_bo_reserve(struct vmw_validation_context *ctx, bool intr); +void vmw_validation_bo_fence(struct vmw_validation_context *ctx, + struct vmw_fence_obj *fence); void vmw_validation_bo_backoff(struct vmw_validation_context *ctx); #endif From patchwork Thu Nov 14 15:30:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 13875269 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 91B0DD68B39 for ; Thu, 14 Nov 2024 15:30:35 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 09C6B10E815; Thu, 14 Nov 2024 15:30:32 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="mg92O/vy"; dkim-atps=neutral Received: from mail-ej1-f44.google.com (mail-ej1-f44.google.com [209.85.218.44]) by gabe.freedesktop.org (Postfix) with ESMTPS id 653D410E811; Thu, 14 Nov 2024 15:30:30 +0000 (UTC) Received: by mail-ej1-f44.google.com with SMTP id a640c23a62f3a-a9a4031f69fso124339966b.0; Thu, 14 Nov 2024 07:30:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1731598229; x=1732203029; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=xPm55Pwtu4kgmX9u5F7awU8bGFwJkGhVqwW5/U+ueg0=; b=mg92O/vyWO8hEwHzrXxDc+z/RmTrTBLZmP0F1FKPHKnK7vpXvern5N/hpFChAUBt17 I6jNhWFVGZcEAJUk1QjGxPwy5tK2eiuv0mFCb3XLtXGZynRmT5ccLSr2wOrEnk9lgQLQ mXFPOdhYSt+L5wNPTtdCO2K10uuOii8Ayc6bK73ETdwUxqpefH1/BnFtsoZInDolvuC0 Vd1ylkPkLhfQ5dWO3WidTWGPFFBM0rT7VVLliCvLuwYMWf0pvb46Gy/HjPaMmxxrV0lw jqI/Z609hweNY3lui8MqTVnXqw7gX6Q1ZZ3O7fF+Y6ai3k8EbHaiZBtEUiXcrAZm8uYb Ua8Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731598229; x=1732203029; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xPm55Pwtu4kgmX9u5F7awU8bGFwJkGhVqwW5/U+ueg0=; b=i1lsuPMBQHQl9lL+/N/8mgIcNYfeOMv+NFLZr8DhapWwdvwBzQgy2dyECCCEP+Mm2H l1E6lZ6N70zCyeDSN52DJJrViQgrHh+d2X5mbqzooYRpL4DYz94+MUtdpuSNCsZKaq43 zyfpYW9jJQkSbmNKL4/v4uPkOAXQkXcKlGo4r96275xmwgzErHM9cxNAfrZOTLhhvK6I Q/aUJcDXQ69rBF02wP1ZNgqSlTB767dvS09GppvTkkrlFjTCCWo9NG7UmGMcMz1t1E/9 yOdSiZTk+bS3Kwt2y+G6SSN9psmx9/swiNCNeprpcIZo6SI6V8Lje69cWO2F5SgXUXnd PZRQ== X-Forwarded-Encrypted: i=1; AJvYcCUjkhgwAJ1yKMj4drAI7SkTwmJxfJXc8izWXKk0wkFM0WuxS9v+A+mo925cXujelzsvZyTjK0n0fac=@lists.freedesktop.org, 
From: Christian König
To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com, zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com, virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 5/7] drm/vmwgfx: replace ttm_validate_buffer with separate struct
Date: Thu, 14 Nov 2024 16:30:18 +0100
Message-Id: <20241114153020.6209-6-christian.koenig@amd.com>
In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com>
References: <20241114153020.6209-1-christian.koenig@amd.com>

Finish removing the ttm_eu dependency. No functional difference.
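[Editor's note, not part of the patch: the quoted hunks only ever touch struct vmw_validate_buffer through its ->bo member (the old tv.num_shared assignments are simply dropped), so its definition is presumably just a thin wrapper around the TTM BO pointer, roughly:]

struct vmw_validate_buffer {
	/* Assumed sole member; the list head and num_shared field of the
	 * old struct ttm_validate_buffer are no longer needed. */
	struct ttm_buffer_object *bo;
};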
Signed-off-by: Christian König --- drivers/gpu/drm/vmwgfx/vmwgfx_context.c | 16 ++++++------- drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c | 12 +++++----- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 1 - drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 14 ++++------- drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h | 4 ++-- drivers/gpu/drm/vmwgfx/vmwgfx_shader.c | 16 ++++++------- drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c | 8 +++---- drivers/gpu/drm/vmwgfx/vmwgfx_surface.c | 24 +++++++++---------- drivers/gpu/drm/vmwgfx/vmwgfx_validation.c | 5 ++-- drivers/gpu/drm/vmwgfx/vmwgfx_validation.h | 10 ++++++++ 10 files changed, 57 insertions(+), 53 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c index ecc503e42790..c496413e7c86 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c @@ -48,17 +48,17 @@ vmw_user_context_base_to_res(struct ttm_base_object *base); static int vmw_gb_context_create(struct vmw_resource *res); static int vmw_gb_context_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_gb_context_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_gb_context_destroy(struct vmw_resource *res); static int vmw_dx_context_create(struct vmw_resource *res); static int vmw_dx_context_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_dx_context_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_dx_context_destroy(struct vmw_resource *res); static const struct vmw_user_resource_conv user_context_conv = { @@ -339,7 +339,7 @@ static int vmw_gb_context_create(struct vmw_resource *res) } static int vmw_gb_context_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct { @@ -367,7 +367,7 @@ static int vmw_gb_context_bind(struct vmw_resource *res, static int vmw_gb_context_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct ttm_buffer_object *bo = val_buf->bo; @@ -506,7 +506,7 @@ static int vmw_dx_context_create(struct vmw_resource *res) } static int vmw_dx_context_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct { @@ -576,7 +576,7 @@ void vmw_dx_context_scrub_cotables(struct vmw_resource *ctx, static int vmw_dx_context_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct ttm_buffer_object *bo = val_buf->bo; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c index a7c07692262b..2714238e21da 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c @@ -122,10 +122,10 @@ const SVGACOTableType vmw_cotable_scrub_order[] = { }; static int vmw_cotable_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_cotable_unbind(struct vmw_resource *res, bool readback, - struct 
ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_cotable_create(struct vmw_resource *res); static int vmw_cotable_destroy(struct vmw_resource *res); @@ -214,14 +214,14 @@ static int vmw_cotable_unscrub(struct vmw_resource *res) * vmw_cotable_bind - Undo a cotable unscrub operation * * @res: Pointer to the cotable resource - * @val_buf: Pointer to a struct ttm_validate_buffer prepared by the caller + * @val_buf: Pointer to a struct vmw_validate_buffer prepared by the caller * for convenience / fencing. * * This function issues commands to (re)bind the cotable to * its backing mob, which needs to be validated and reserved at this point. */ static int vmw_cotable_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { /* * The create() callback may have changed @res->backup without @@ -313,14 +313,14 @@ int vmw_cotable_scrub(struct vmw_resource *res, bool readback) * * @res: Pointer to the cotable resource. * @readback: Whether to read back cotable data to the backup buffer. - * @val_buf: Pointer to a struct ttm_validate_buffer prepared by the caller + * @val_buf: Pointer to a struct vmw_validate_buffer prepared by the caller * for convenience / fencing. * * Unbinds the cotable from the device and fences the backup buffer. */ static int vmw_cotable_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_cotable *vcotbl = vmw_cotable(res); struct vmw_private *dev_priv = res->dev_priv; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index b21831ef214a..0542e24a80e0 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -38,7 +38,6 @@ #include #include -#include #include #include #include diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 793293b59d43..495f776491d3 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -130,10 +130,9 @@ static void vmw_resource_release(struct kref *kref) BUG_ON(ret); if (vmw_resource_mob_attached(res) && res->func->unbind != NULL) { - struct ttm_validate_buffer val_buf; + struct vmw_validate_buffer val_buf; val_buf.bo = bo; - val_buf.num_shared = 0; res->func->unbind(res, false, &val_buf); } res->guest_memory_size = false; @@ -370,7 +369,7 @@ static int vmw_resource_buf_alloc(struct vmw_resource *res, * should be retried once resources have been freed up. 
*/ static int vmw_resource_do_validate(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf, + struct vmw_validate_buffer *val_buf, bool dirtying) { int ret = 0; @@ -614,14 +613,13 @@ int vmw_resource_reserve(struct vmw_resource *res, bool interruptible, static int vmw_resource_do_evict(struct ww_acquire_ctx *ticket, struct vmw_resource *res, bool interruptible) { - struct ttm_validate_buffer val_buf; + struct vmw_validate_buffer val_buf; const struct vmw_res_func *func = res->func; int ret; BUG_ON(!func->may_evict); val_buf.bo = NULL; - val_buf.num_shared = 0; ret = vmw_resource_check_buffer(ticket, res, interruptible, &val_buf.bo); if (unlikely(ret != 0)) return ret; @@ -668,14 +666,13 @@ int vmw_resource_validate(struct vmw_resource *res, bool intr, struct vmw_resource *evict_res; struct vmw_private *dev_priv = res->dev_priv; struct list_head *lru_list = &dev_priv->res_lru[res->func->res_type]; - struct ttm_validate_buffer val_buf; + struct vmw_validate_buffer val_buf; unsigned err_count = 0; if (!res->func->create) return 0; val_buf.bo = NULL; - val_buf.num_shared = 0; if (res->guest_memory_bo) val_buf.bo = &res->guest_memory_bo->tbo; do { @@ -742,9 +739,8 @@ int vmw_resource_validate(struct vmw_resource *res, bool intr, */ void vmw_resource_unbind_list(struct vmw_bo *vbo) { - struct ttm_validate_buffer val_buf = { + struct vmw_validate_buffer val_buf = { .bo = &vbo->tbo, - .num_shared = 0 }; dma_resv_assert_held(vbo->tbo.base.resv); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h index aa7cbd396bea..ac2ea9d688c1 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h @@ -93,10 +93,10 @@ struct vmw_res_func { int (*create) (struct vmw_resource *res); int (*destroy) (struct vmw_resource *res); int (*bind) (struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); int (*unbind) (struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); void (*commit_notify)(struct vmw_resource *res, enum vmw_cmdbuf_res_state state); int (*dirty_alloc)(struct vmw_resource *res); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c index a01ca3226d0a..b1eea51b2aba 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c @@ -60,18 +60,18 @@ vmw_user_shader_base_to_res(struct ttm_base_object *base); static int vmw_gb_shader_create(struct vmw_resource *res); static int vmw_gb_shader_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_gb_shader_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_gb_shader_destroy(struct vmw_resource *res); static int vmw_dx_shader_create(struct vmw_resource *res); static int vmw_dx_shader_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_dx_shader_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static void vmw_dx_shader_commit_notify(struct vmw_resource *res, enum vmw_cmdbuf_res_state state); static bool vmw_shader_id_ok(u32 user_key, SVGA3dShaderType shader_type); @@ -243,7 +243,7 @@ static int vmw_gb_shader_create(struct vmw_resource *res) } static 
int vmw_gb_shader_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct { @@ -271,7 +271,7 @@ static int vmw_gb_shader_bind(struct vmw_resource *res, static int vmw_gb_shader_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct { @@ -443,7 +443,7 @@ static int vmw_dx_shader_create(struct vmw_resource *res) * */ static int vmw_dx_shader_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct ttm_buffer_object *bo = val_buf->bo; @@ -505,7 +505,7 @@ static int vmw_dx_shader_scrub(struct vmw_resource *res) */ static int vmw_dx_shader_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct vmw_fence_obj *fence; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c b/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c index edcc40659038..4d6dcf585f58 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c @@ -55,9 +55,9 @@ struct vmw_dx_streamoutput { static int vmw_dx_streamoutput_create(struct vmw_resource *res); static int vmw_dx_streamoutput_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_dx_streamoutput_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static void vmw_dx_streamoutput_commit_notify(struct vmw_resource *res, enum vmw_cmdbuf_res_state state); @@ -136,7 +136,7 @@ static int vmw_dx_streamoutput_create(struct vmw_resource *res) } static int vmw_dx_streamoutput_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct ttm_buffer_object *bo = val_buf->bo; @@ -191,7 +191,7 @@ static int vmw_dx_streamoutput_scrub(struct vmw_resource *res) } static int vmw_dx_streamoutput_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct vmw_fence_obj *fence; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c index 5721c74da3e0..f16f0d85fe2c 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c @@ -81,18 +81,18 @@ static void vmw_user_surface_free(struct vmw_resource *res); static struct vmw_resource * vmw_user_surface_base_to_res(struct ttm_base_object *base); static int vmw_legacy_srf_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_legacy_srf_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_legacy_srf_create(struct vmw_resource *res); static int vmw_legacy_srf_destroy(struct vmw_resource *res); static int vmw_gb_surface_create(struct vmw_resource *res); static int vmw_gb_surface_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_gb_surface_unbind(struct vmw_resource *res, 
bool readback, - struct ttm_validate_buffer *val_buf); + struct vmw_validate_buffer *val_buf); static int vmw_gb_surface_destroy(struct vmw_resource *res); static int vmw_gb_surface_define_internal(struct drm_device *dev, @@ -461,7 +461,7 @@ static int vmw_legacy_srf_create(struct vmw_resource *res) * * @res: Pointer to a struct vmw_res embedded in a struct * vmw_surface. - * @val_buf: Pointer to a struct ttm_validate_buffer containing + * @val_buf: Pointer to a struct vmw_validate_buffer containing * information about the backup buffer. * @bind: Boolean wether to DMA to the surface. * @@ -473,7 +473,7 @@ static int vmw_legacy_srf_create(struct vmw_resource *res) * will also be returned reserved iff @bind is true. */ static int vmw_legacy_srf_dma(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf, + struct vmw_validate_buffer *val_buf, bool bind) { SVGAGuestPtr ptr; @@ -515,14 +515,14 @@ static int vmw_legacy_srf_dma(struct vmw_resource *res, * * @res: Pointer to a struct vmw_res embedded in a struct * vmw_surface. - * @val_buf: Pointer to a struct ttm_validate_buffer containing + * @val_buf: Pointer to a struct vmw_validate_buffer containing * information about the backup buffer. * * This function will copy backup data to the surface if the * backup buffer is dirty. */ static int vmw_legacy_srf_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { if (!res->guest_memory_dirty) return 0; @@ -538,14 +538,14 @@ static int vmw_legacy_srf_bind(struct vmw_resource *res, * @res: Pointer to a struct vmw_res embedded in a struct * vmw_surface. * @readback: Readback - only true if dirty - * @val_buf: Pointer to a struct ttm_validate_buffer containing + * @val_buf: Pointer to a struct vmw_validate_buffer containing * information about the backup buffer. * * This function will copy backup data from the surface. */ static int vmw_legacy_srf_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { if (unlikely(readback)) return vmw_legacy_srf_dma(res, val_buf, false); @@ -1285,7 +1285,7 @@ static int vmw_gb_surface_create(struct vmw_resource *res) static int vmw_gb_surface_bind(struct vmw_resource *res, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct { @@ -1331,7 +1331,7 @@ static int vmw_gb_surface_bind(struct vmw_resource *res, static int vmw_gb_surface_unbind(struct vmw_resource *res, bool readback, - struct ttm_validate_buffer *val_buf) + struct vmw_validate_buffer *val_buf) { struct vmw_private *dev_priv = res->dev_priv; struct ttm_buffer_object *bo = val_buf->bo; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c index 34436504fcdb..c0977c853244 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c @@ -43,7 +43,7 @@ * large numbers and space conservation is desired. 
 */
 struct vmw_validation_bo_node {
-	struct ttm_validate_buffer base;
+	struct vmw_validate_buffer base;
 	struct vmwgfx_hash_item hash;
 	unsigned int coherent_count;
 };
@@ -250,7 +250,7 @@ int vmw_validation_add_bo(struct vmw_validation_context *ctx,
 
 	bo_node = vmw_validation_find_bo_dup(ctx, vbo);
 	if (!bo_node) {
-		struct ttm_validate_buffer *val_buf;
+		struct vmw_validate_buffer *val_buf;
 
 		bo_node = vmw_validation_mem_alloc(ctx, sizeof(*bo_node));
 		if (!bo_node)
@@ -265,7 +265,6 @@ int vmw_validation_add_bo(struct vmw_validation_context *ctx,
 		val_buf->bo = ttm_bo_get_unless_zero(&vbo->tbo);
 		if (!val_buf->bo)
 			return -ESRCH;
-		val_buf->num_shared = 0;
 		list_add_tail(&val_buf->head, &ctx->bo_list);
 	}
 
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
index 55a7d8b68d5c..f68cc1fd1eb4 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
@@ -65,6 +65,16 @@ struct vmw_validation_context {
 	u8 *page_address;
 };
 
+/**
+ * struct vmw_validate_buffer - Linked list of TTM BOs for validation
+ * @head: linked list node
+ * @bo: The TTM BO
+ */
+struct vmw_validate_buffer {
+	struct list_head head;
+	struct ttm_buffer_object *bo;
+};
+
 struct vmw_bo;
 struct vmw_resource;
 struct vmw_fence_obj;
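As a quick illustration, here is a minimal sketch of how a list of the new, driver-private vmw_validate_buffer entries can be built and walked after this change. Only struct vmw_validate_buffer itself comes from the patch above; the example_add_and_walk() helper, the local bo_list and the pr_info() call are assumptions made purely for the example.

/* Illustrative sketch, not vmwgfx code. */
#include <linux/list.h>
#include <linux/printk.h>

struct ttm_buffer_object;		/* opaque for this sketch */

struct vmw_validate_buffer {		/* as introduced by this patch */
	struct list_head head;
	struct ttm_buffer_object *bo;
};

static void example_add_and_walk(struct ttm_buffer_object *bo)
{
	LIST_HEAD(bo_list);
	struct vmw_validate_buffer val_buf = { .bo = bo };
	struct vmw_validate_buffer *entry;

	/* The entry now only tracks the BO itself; the old num_shared
	 * bookkeeping is gone, fence slots are reserved where the fence
	 * is actually added. */
	list_add_tail(&val_buf.head, &bo_list);

	list_for_each_entry(entry, &bo_list, head)
		pr_info("validating bo %p\n", entry->bo);
}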
From patchwork Thu Nov 14 15:30:19 2024
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13875268
From: Christian König
To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com,
 zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com,
 virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 6/7] drm/xe: drop unused component dependencies
Date: Thu, 14 Nov 2024 16:30:19 +0100
Message-Id: <20241114153020.6209-7-christian.koenig@amd.com>
In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com>
References: <20241114153020.6209-1-christian.koenig@amd.com>

XE switched over to drm_exec quite some time ago.
Signed-off-by: Christian König Acked-by: Lucas De Marchi --- drivers/gpu/drm/xe/xe_bo_types.h | 1 - drivers/gpu/drm/xe/xe_gt_pagefault.c | 1 - drivers/gpu/drm/xe/xe_vm.c | 1 - drivers/gpu/drm/xe/xe_vm.h | 1 - 4 files changed, 4 deletions(-) diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h index 13c6d8a69e91..99196228dcc8 100644 --- a/drivers/gpu/drm/xe/xe_bo_types.h +++ b/drivers/gpu/drm/xe/xe_bo_types.h @@ -10,7 +10,6 @@ #include #include -#include #include #include "xe_ggtt_types.h" diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c index 79c426dc2505..2606cd396df5 100644 --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c @@ -10,7 +10,6 @@ #include #include -#include #include "abi/guc_actions_abi.h" #include "xe_bo.h" diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c index c99380271de6..00ea57c2f4b9 100644 --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -10,7 +10,6 @@ #include #include -#include #include #include #include diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h index c864dba35e1d..23adb7442881 100644 --- a/drivers/gpu/drm/xe/xe_vm.h +++ b/drivers/gpu/drm/xe/xe_vm.h @@ -17,7 +17,6 @@ struct drm_printer; struct drm_file; struct ttm_buffer_object; -struct ttm_validate_buffer; struct xe_exec_queue; struct xe_file;
From patchwork Thu Nov 14 15:30:20 2024
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 13875270
From: Christian König
To: kraxel@redhat.com, airlied@redhat.com, alexander.deucher@amd.com,
 zack.rusin@broadcom.com, bcm-kernel-feedback-list@broadcom.com,
 virtualization@lists.linux.dev, dri-devel@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 7/7] drm/ttm: remove ttm_execbuf_util
Date: Thu, 14 Nov 2024 16:30:20 +0100
Message-Id: <20241114153020.6209-8-christian.koenig@amd.com>
In-Reply-To: <20241114153020.6209-1-christian.koenig@amd.com>
References: <20241114153020.6209-1-christian.koenig@amd.com>

Replaced by drm_exec and not used any more.

Signed-off-by: Christian König
---
 drivers/gpu/drm/ttm/Makefile           |   4 +-
 drivers/gpu/drm/ttm/ttm_execbuf_util.c | 161 -------------------------
 include/drm/ttm/ttm_execbuf_util.h     | 119 ------------------
 3 files changed, 2 insertions(+), 282 deletions(-)
 delete mode 100644 drivers/gpu/drm/ttm/ttm_execbuf_util.c
 delete mode 100644 include/drm/ttm/ttm_execbuf_util.h

diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index dad298127226..25937e4ad91a 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ b/drivers/gpu/drm/ttm/Makefile
@@ -3,8 +3,8 @@ # Makefile for the drm device driver.
This driver provides support for the ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ - ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \ - ttm_device.o ttm_sys_manager.o + ttm_range_manager.o ttm_resource.o ttm_pool.o ttm_device.o \ + ttm_sys_manager.o ttm-$(CONFIG_AGP) += ttm_agp_backend.o obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c deleted file mode 100644 index f1c60fa80c2d..000000000000 --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c +++ /dev/null @@ -1,161 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 OR MIT */ -/************************************************************************** - * - * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA - * All Rights Reserved. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - **************************************************************************/ - -#include -#include - -static void ttm_eu_backoff_reservation_reverse(struct list_head *list, - struct ttm_validate_buffer *entry) -{ - list_for_each_entry_continue_reverse(entry, list, head) { - struct ttm_buffer_object *bo = entry->bo; - - dma_resv_unlock(bo->base.resv); - } -} - -void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket, - struct list_head *list) -{ - struct ttm_validate_buffer *entry; - - if (list_empty(list)) - return; - - list_for_each_entry(entry, list, head) { - struct ttm_buffer_object *bo = entry->bo; - - ttm_bo_move_to_lru_tail_unlocked(bo); - dma_resv_unlock(bo->base.resv); - } - - if (ticket) - ww_acquire_fini(ticket); -} -EXPORT_SYMBOL(ttm_eu_backoff_reservation); - -/* - * Reserve buffers for validation. - * - * If a buffer in the list is marked for CPU access, we back off and - * wait for that buffer to become free for GPU access. - * - * If a buffer is reserved for another validation, the validator with - * the highest validation sequence backs off and waits for that buffer - * to become unreserved. This prevents deadlocks when validating multiple - * buffers in different orders. 
- */ - -int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, - struct list_head *list, bool intr, - struct list_head *dups) -{ - struct ttm_validate_buffer *entry; - int ret; - - if (list_empty(list)) - return 0; - - if (ticket) - ww_acquire_init(ticket, &reservation_ww_class); - - list_for_each_entry(entry, list, head) { - struct ttm_buffer_object *bo = entry->bo; - unsigned int num_fences; - - ret = ttm_bo_reserve(bo, intr, (ticket == NULL), ticket); - if (ret == -EALREADY && dups) { - struct ttm_validate_buffer *safe = entry; - entry = list_prev_entry(entry, head); - list_del(&safe->head); - list_add(&safe->head, dups); - continue; - } - - num_fences = max(entry->num_shared, 1u); - if (!ret) { - ret = dma_resv_reserve_fences(bo->base.resv, - num_fences); - if (!ret) - continue; - } - - /* uh oh, we lost out, drop every reservation and try - * to only reserve this buffer, then start over if - * this succeeds. - */ - ttm_eu_backoff_reservation_reverse(list, entry); - - if (ret == -EDEADLK) { - ret = ttm_bo_reserve_slowpath(bo, intr, ticket); - } - - if (!ret) - ret = dma_resv_reserve_fences(bo->base.resv, - num_fences); - - if (unlikely(ret != 0)) { - if (ticket) { - ww_acquire_done(ticket); - ww_acquire_fini(ticket); - } - return ret; - } - - /* move this item to the front of the list, - * forces correct iteration of the loop without keeping track - */ - list_del(&entry->head); - list_add(&entry->head, list); - } - - return 0; -} -EXPORT_SYMBOL(ttm_eu_reserve_buffers); - -void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket, - struct list_head *list, - struct dma_fence *fence) -{ - struct ttm_validate_buffer *entry; - - if (list_empty(list)) - return; - - list_for_each_entry(entry, list, head) { - struct ttm_buffer_object *bo = entry->bo; - - dma_resv_add_fence(bo->base.resv, fence, entry->num_shared ? - DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE); - ttm_bo_move_to_lru_tail_unlocked(bo); - dma_resv_unlock(bo->base.resv); - } - if (ticket) - ww_acquire_fini(ticket); -} -EXPORT_SYMBOL(ttm_eu_fence_buffer_objects); diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h deleted file mode 100644 index fac1e3e57ebd..000000000000 --- a/include/drm/ttm/ttm_execbuf_util.h +++ /dev/null @@ -1,119 +0,0 @@ -/************************************************************************** - * - * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA - * All Rights Reserved. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - **************************************************************************/ -/* - * Authors: Thomas Hellstrom - */ - -#ifndef _TTM_EXECBUF_UTIL_H_ -#define _TTM_EXECBUF_UTIL_H_ - -#include - -struct ww_acquire_ctx; -struct dma_fence; -struct ttm_buffer_object; - -/** - * struct ttm_validate_buffer - * - * @head: list head for thread-private list. - * @bo: refcounted buffer object pointer. - * @num_shared: How many shared fences we want to add. - */ - -struct ttm_validate_buffer { - struct list_head head; - struct ttm_buffer_object *bo; - unsigned int num_shared; -}; - -/** - * ttm_eu_backoff_reservation - * - * @ticket: ww_acquire_ctx from reserve call - * @list: thread private list of ttm_validate_buffer structs. - * - * Undoes all buffer validation reservations for bos pointed to by - * the list entries. - */ -void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket, - struct list_head *list); - -/** - * ttm_eu_reserve_buffers - * - * @ticket: [out] ww_acquire_ctx filled in by call, or NULL if only - * non-blocking reserves should be tried. - * @list: thread private list of ttm_validate_buffer structs. - * @intr: should the wait be interruptible - * @dups: [out] optional list of duplicates. - * - * Tries to reserve bos pointed to by the list entries for validation. - * If the function returns 0, all buffers are marked as "unfenced", - * taken off the lru lists and are not synced for write CPU usage. - * - * If the function detects a deadlock due to multiple threads trying to - * reserve the same buffers in reverse order, all threads except one will - * back off and retry. This function may sleep while waiting for - * CPU write reservations to be cleared, and for other threads to - * unreserve their buffers. - * - * If intr is set to true, this function may return -ERESTARTSYS if the - * calling process receives a signal while waiting. In that case, no - * buffers on the list will be reserved upon return. - * - * If dups is non NULL all buffers already reserved by the current thread - * (e.g. duplicates) are added to this list, otherwise -EALREADY is returned - * on the first already reserved buffer and all buffers from the list are - * unreserved again. - * - * Buffers reserved by this function should be unreserved by - * a call to either ttm_eu_backoff_reservation() or - * ttm_eu_fence_buffer_objects() when command submission is complete or - * has failed. - */ -int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, - struct list_head *list, bool intr, - struct list_head *dups); - -/** - * ttm_eu_fence_buffer_objects - * - * @ticket: ww_acquire_ctx from reserve call - * @list: thread private list of ttm_validate_buffer structs. - * @fence: The new exclusive fence for the buffers. - * - * This function should be called when command submission is complete, and - * it will add a new sync object to bos pointed to by entries on @list. - * It also unreserves all buffers, putting them on lru lists. - * - */ -void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket, - struct list_head *list, - struct dma_fence *fence); - -#endif
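For context, a rough sketch of the drm_exec pattern that takes over from ttm_eu_reserve_buffers()/ttm_eu_fence_buffer_objects(): objects are locked with ww-mutex contention handling, fence slots are reserved up front, and the job fence is added once submission is done. The example_submit() helper, its parameters and the single fence slot per object are assumptions made for illustration; they are not code from this series.

/* Illustrative sketch only; error handling trimmed. */
#include <drm/drm_exec.h>
#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

static int example_submit(struct drm_gem_object **objs, unsigned int count,
			  struct dma_fence *fence)
{
	struct drm_exec exec;
	unsigned int i;
	int ret = 0;

	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, count);
	drm_exec_until_all_locked(&exec) {
		for (i = 0; i < count; ++i) {
			/* Locks the object and reserves one fence slot,
			 * replacing the old num_shared bookkeeping. */
			ret = drm_exec_prepare_obj(&exec, objs[i], 1);
			drm_exec_retry_on_contention(&exec);
			if (ret)
				goto out;
		}
	}

	/* ... submit the job that will signal @fence ... */

	for (i = 0; i < count; ++i)
		dma_resv_add_fence(objs[i]->resv, fence, DMA_RESV_USAGE_WRITE);
out:
	drm_exec_fini(&exec);
	return ret;
}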