From patchwork Wed Sep 18 17:55:23 2019
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 11151115
From: Christian König <christian.koenig@amd.com>
To: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, chris@chris-wilson.co.uk, daniel@ffwll.ch, sumit.semwal@linaro.org
Subject: [PATCH 1/3] dma-buf: add dma_resv_ctx for deadlock handling
Date: Wed, 18 Sep 2019 19:55:23 +0200
Message-Id: <20190918175525.49441-1-christian.koenig@amd.com>

The ww_mutex framework allows for detecting deadlocks when multiple
threads try to acquire the same set of locks in different orders.

The problem is that handling those deadlocks was the burden of the user of
the ww_mutex implementation, and at least some users didn't get that right
on the first try.

So introduce a new dma_resv_ctx object which can be used to simplify the
deadlock handling. This is done by tracking all locked dma_resv objects in
the context, as well as the last contended object.
When a deadlock occurs we now unlock all previously locked objects and
acquire the contended lock in the slow path. After this is done, -EDEADLK
is still returned to signal that all other locks now need to be
re-acquired.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/Makefile       |   2 +-
 drivers/dma-buf/dma-resv-ctx.c | 108 +++++++++++++++++++++++++++++++++
 include/linux/dma-resv-ctx.h   |  68 +++++++++++++++++++++
 include/linux/dma-resv.h       |   1 +
 4 files changed, 178 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/dma-resv-ctx.c
 create mode 100644 include/linux/dma-resv-ctx.h

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 03479da06422..da7701c85de2 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
-	 dma-resv.o seqno-fence.o
+	 dma-resv.o dma-resv-ctx.o seqno-fence.o
 obj-$(CONFIG_SYNC_FILE)	+= sync_file.o
 obj-$(CONFIG_SW_SYNC)	+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)	+= udmabuf.o

diff --git a/drivers/dma-buf/dma-resv-ctx.c b/drivers/dma-buf/dma-resv-ctx.c
new file mode 100644
index 000000000000..cad10fa6f80b
--- /dev/null
+++ b/drivers/dma-buf/dma-resv-ctx.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *	Christian König <christian.koenig@amd.com>
+ */
+
+#include
+
+/**
+ * dma_resv_ctx_init - initialize a reservation context
+ * @ctx: the context to initialize
+ *
+ * Start using this reservation context to lock reservation objects for update.
+ */
+void dma_resv_ctx_init(struct dma_resv_ctx *ctx)
+{
+	ww_acquire_init(&ctx->base, &reservation_ww_class);
+	init_llist_head(&ctx->locked);
+	ctx->contended = NULL;
+}
+EXPORT_SYMBOL(dma_resv_ctx_init);
+
+/**
+ * dma_resv_ctx_unlock_all - unlock all reservation objects
+ * @ctx: the context which holds the reservation objects
+ *
+ * Unlocks all reservation objects locked with this context.
+ */
+void dma_resv_ctx_unlock_all(struct dma_resv_ctx *ctx)
+{
+	struct dma_resv *obj, *next;
+
+	if (ctx->contended)
+		dma_resv_unlock(ctx->contended);
+	ctx->contended = NULL;
+
+	llist_for_each_entry_safe(obj, next, ctx->locked.first, locked)
+		ww_mutex_unlock(&obj->lock);
+	init_llist_head(&ctx->locked);
+}
+EXPORT_SYMBOL(dma_resv_ctx_unlock_all);
+
+/**
+ * dma_resv_ctx_lock - lock a reservation object with deadlock handling
+ * @ctx: the context which should be used to lock the object
+ * @obj: the object which needs to be locked
+ * @interruptible: if we should wait interruptible
+ *
+ * Use @ctx to lock the reservation object. If a deadlock is detected we backoff
+ * by releasing all locked objects and use the slow path to lock the reservation
+ * object. After successfully locking in the slow path -EDEADLK is returned to
+ * signal that all other locks must be re-taken as well.
+ */
+int dma_resv_ctx_lock(struct dma_resv_ctx *ctx, struct dma_resv *obj,
+		      bool interruptible)
+{
+	int ret = 0;
+
+	if (unlikely(ctx->contended == obj))
+		ctx->contended = NULL;
+	else if (interruptible)
+		ret = dma_resv_lock_interruptible(obj, &ctx->base);
+	else
+		ret = dma_resv_lock(obj, &ctx->base);
+
+	if (likely(!ret)) {
+		/* don't use llist_add here, we have separate locking */
+		obj->locked.next = ctx->locked.first;
+		ctx->locked.first = &obj->locked;
+		return 0;
+	}
+	if (unlikely(ret != -EDEADLK))
+		return ret;
+
+	dma_resv_ctx_unlock_all(ctx);
+
+	if (interruptible) {
+		ret = dma_resv_lock_slow_interruptible(obj, &ctx->base);
+		if (unlikely(ret))
+			return ret;
+	} else {
+		dma_resv_lock_slow(obj, &ctx->base);
+	}
+
+	ctx->contended = obj;
+	return -EDEADLK;
+}
+EXPORT_SYMBOL(dma_resv_ctx_lock);

diff --git a/include/linux/dma-resv-ctx.h b/include/linux/dma-resv-ctx.h
new file mode 100644
index 000000000000..86473de65167
--- /dev/null
+++ b/include/linux/dma-resv-ctx.h
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *	Christian König <christian.koenig@amd.com>
+ */
+
+#ifndef _LINUX_DMA_RESV_CTX_H
+#define _LINUX_DMA_RESV_CTX_H
+
+#include
+#include
+
+/**
+ * struct dma_resv_ctx - context to lock reservation objects
+ * @base: ww_acquire_ctx used for deadlock detection
+ * @locked: list of dma_resv objects locked in this context
+ * @contended: contended dma_resv object
+ */
+struct dma_resv_ctx {
+	struct ww_acquire_ctx base;
+	struct llist_head locked;
+	struct dma_resv *contended;
+};
+
+/**
+ * dma_resv_ctx_done - wrapper for ww_acquire_done
+ * @ctx: the reservation context which is done with locking
+ */
+static inline void dma_resv_ctx_done(struct dma_resv_ctx *ctx)
+{
+	ww_acquire_done(&ctx->base);
+}
+
+/**
+ * dma_resv_ctx_fini - wrapper for ww_acquire_fini
+ * @ctx: the reservation context which is finished
+ */
+static inline void dma_resv_ctx_fini(struct dma_resv_ctx *ctx)
+{
+	ww_acquire_fini(&ctx->base);
+}
+
+void dma_resv_ctx_init(struct dma_resv_ctx *ctx);
+void dma_resv_ctx_unlock_all(struct dma_resv_ctx *ctx);
+int dma_resv_ctx_lock(struct dma_resv_ctx *ctx, struct dma_resv *obj,
+		      bool interruptible);
+
+#endif /* _LINUX_DMA_RESV_CTX_H */

diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index ee50d10f052b..1267822c2669 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -71,6 +71,7 @@ struct dma_resv_list {
  */
 struct dma_resv {
 	struct ww_mutex lock;
+	struct llist_node locked;
 	seqcount_t seq;

 	struct dma_fence __rcu *fence_excl;

From patchwork Wed Sep 18 17:55:24 2019
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 11151121
From: Christian König <christian.koenig@amd.com>
To: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, chris@chris-wilson.co.uk, daniel@ffwll.ch, sumit.semwal@linaro.org
Subject: [PATCH 2/3] drm/ttm: switch ttm_execbuf_util to new drm_resv_ctx
Date: Wed, 18 Sep 2019 19:55:24 +0200
Message-Id: <20190918175525.49441-2-christian.koenig@amd.com>
In-Reply-To: <20190918175525.49441-1-christian.koenig@amd.com>
References: <20190918175525.49441-1-christian.koenig@amd.com>

Change ttm_execbuf_util to use the new reservation context object for
deadlock handling.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |   2 +-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c       |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   4 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |   2 +-
 drivers/gpu/drm/qxl/qxl_drv.h                 |   2 +-
 drivers/gpu/drm/qxl/qxl_release.c             |   2 +-
 drivers/gpu/drm/radeon/radeon.h               |   2 +-
 drivers/gpu/drm/radeon/radeon_gem.c           |   2 +-
 drivers/gpu/drm/radeon/radeon_object.c        |   2 +-
 drivers/gpu/drm/radeon/radeon_object.h        |   2 +-
 drivers/gpu/drm/ttm/ttm_execbuf_util.c        | 117 ++++++++----------
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c      |   8 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_validation.h    |   2 +-
 include/drm/ttm/ttm_execbuf_util.h            |  13 +-
 16 files changed, 78 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 8199d201b43a..0644829d990e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -465,7 +465,7 @@ struct amdgpu_cs_parser {
	struct drm_sched_entity	*entity;

	/* buffer objects */
-	struct ww_acquire_ctx		ticket;
+	struct dma_resv_ctx		ticket;
	struct amdgpu_bo_list		*bo_list;
	struct amdgpu_mn		*mn;
	struct amdgpu_bo_list_entry	vm_pd;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 76e3516484e7..b9bb35d1699e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -540,7 +540,7 @@ struct bo_vm_reservation_context {
	struct amdgpu_bo_list_entry kfd_bo; /* BO list entry for the KFD BO */
	unsigned int n_vms;		    /* Number of VMs reserved	    */
	struct amdgpu_bo_list_entry *vm_pd; /* Array of VM BO list entries  */
-	struct ww_acquire_ctx ticket;	    /* Reservation ticket	    */
+	struct dma_resv_ctx ticket;	    /* Reservation ticket	    */
	struct list_head list, duplicates;  /* BO lists			    */
	struct amdgpu_sync *sync;	    /* Pointer to sync object	    */
	bool reserved;			    /* Whether BOs are reserved	    */
@@ -1760,7 +1760,7 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info)
{
	struct amdgpu_bo_list_entry *pd_bo_list_entries;
	struct list_head resv_list, duplicates;
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	struct amdgpu_sync sync;

	struct amdgpu_vm *peer_vm;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 22236d367e26..95ec965fcc2d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1325,7 +1325,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
	amdgpu_job_free_resources(job);

	trace_amdgpu_cs_ioctl(job);
-	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
+	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket.base);
	priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);
@@ -1729,7 +1729,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
	*map = mapping;

	/* Double check that the BO is reserved by this CS */
-	if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket)
+	if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket.base)
		return -EINVAL;

	if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
index 35a8d3c96fc9..605f83046039 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
@@ -66,7 +66,7 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
			  struct amdgpu_bo *bo, struct amdgpu_bo_va **bo_va,
			  uint64_t csa_addr, uint32_t size)
{
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	struct list_head list;
	struct amdgpu_bo_list_entry pd;
	struct ttm_validate_buffer csa_tv;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 40f673cfbbfe..b25a59c4bec6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -162,7 +162,7 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
	struct amdgpu_bo_list_entry vm_pd;
	struct list_head list, duplicates;
	struct ttm_validate_buffer tv;
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	struct amdgpu_bo_va *bo_va;
	int r;
@@ -549,7 +549,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
	struct amdgpu_bo_va *bo_va;
	struct amdgpu_bo_list_entry vm_pd;
	struct ttm_validate_buffer tv;
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	struct list_head list, duplicates;
	uint64_t va_flags;
	int r = 0;

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 760af668f678..5df2ee1e10d8 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4416,7 +4416,7 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
	struct dm_plane_state *dm_plane_state_new, *dm_plane_state_old;
	struct list_head list;
	struct ttm_validate_buffer tv;
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	uint64_t tiling_flags;
	uint32_t domain;
	int r;

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index d4051409ce64..378b2c81920a 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -154,7 +154,7 @@ struct qxl_release {
	struct qxl_bo *release_bo;
	uint32_t release_offset;
	uint32_t surface_release_id;
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	struct list_head bos;
};

diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c
index 312216caeea2..aa7a28795645 100644
--- a/drivers/gpu/drm/qxl/qxl_release.c
+++ b/drivers/gpu/drm/qxl/qxl_release.c
@@ -463,6 +463,6 @@ void qxl_release_fence_buffer_objects(struct qxl_release *release)
		dma_resv_unlock(bo->base.resv);
	}
	spin_unlock(&glob->lru_lock);
-	ww_acquire_fini(&release->ticket);
+	dma_resv_ctx_fini(&release->ticket);
}

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index de1d090df034..5aadb55731c3 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1083,7 +1083,7 @@ struct radeon_cs_parser {
	u32			cs_flags;
	u32			ring;
	s32			priority;
-	struct ww_acquire_ctx	ticket;
+	struct dma_resv_ctx	ticket;
};

static inline u32 radeon_get_ib_value(struct radeon_cs_parser *p, int idx)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 4cf58dbbe439..c48c2fb35456 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -549,7 +549,7 @@ static void radeon_gem_va_update_vm(struct radeon_device *rdev,
{
	struct ttm_validate_buffer tv, *entry;
	struct radeon_bo_list *vm_bos;
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
	struct list_head list;
	unsigned domain;
	int r;

diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 2abe1eab471f..653fd7937b39 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -531,7 +531,7 @@ static u64 radeon_bo_get_threshold_for_moves(struct radeon_device *rdev)
}

int radeon_bo_list_validate(struct radeon_device *rdev,
-			    struct ww_acquire_ctx *ticket,
+			    struct dma_resv_ctx *ticket,
			    struct list_head *head, int ring)
{
	struct ttm_operation_ctx ctx = { true, false };

diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
index d23f2ed4126e..e5638c919c98 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -141,7 +141,7 @@ extern void radeon_bo_force_delete(struct radeon_device *rdev);
extern int radeon_bo_init(struct radeon_device *rdev);
extern void radeon_bo_fini(struct radeon_device *rdev);
extern int radeon_bo_list_validate(struct radeon_device *rdev,
-				   struct ww_acquire_ctx *ticket,
+				   struct dma_resv_ctx *ticket,
				   struct list_head *head, int ring);
extern int radeon_bo_set_tiling_flags(struct radeon_bo *bo,
				      u32 tiling_flags, u32 pitch);

diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index 131dae8f4170..71148c83cc4f 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -33,16 +33,6 @@
 #include
 #include

-static void ttm_eu_backoff_reservation_reverse(struct list_head *list,
-					       struct ttm_validate_buffer *entry)
-{
-	list_for_each_entry_continue_reverse(entry, list, head) {
-		struct ttm_buffer_object *bo = entry->bo;
-
-		dma_resv_unlock(bo->base.resv);
-	}
-}
-
 static void ttm_eu_del_from_lru_locked(struct list_head *list)
 {
	struct ttm_validate_buffer *entry;
@@ -53,7 +43,7 @@ static void ttm_eu_del_from_lru_locked(struct list_head *list)
	}
 }

-void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
+void ttm_eu_backoff_reservation(struct dma_resv_ctx *ticket,
				struct list_head *list)
 {
	struct ttm_validate_buffer *entry;
@@ -71,12 +61,15 @@ void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
		if (list_empty(&bo->lru))
			ttm_bo_add_to_lru(bo);
-		dma_resv_unlock(bo->base.resv);
+		if (!ticket)
+			dma_resv_unlock(bo->base.resv);
	}
	spin_unlock(&glob->lru_lock);

-	if (ticket)
-		ww_acquire_fini(ticket);
+	if (ticket) {
+		dma_resv_ctx_unlock_all(ticket);
+		dma_resv_ctx_fini(ticket);
+	}
 }
 EXPORT_SYMBOL(ttm_eu_backoff_reservation);

@@ -92,12 +85,12 @@ EXPORT_SYMBOL(ttm_eu_backoff_reservation);
 * buffers in different orders.
*/ -int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, +int ttm_eu_reserve_buffers(struct dma_resv_ctx *ticket, struct list_head *list, bool intr, struct list_head *dups, bool del_lru) { - struct ttm_bo_global *glob; struct ttm_validate_buffer *entry; + struct ttm_bo_global *glob; int ret; if (list_empty(list)) @@ -107,70 +100,46 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, glob = entry->bo->bdev->glob; if (ticket) - ww_acquire_init(ticket, &reservation_ww_class); + dma_resv_ctx_init(ticket); +retry: list_for_each_entry(entry, list, head) { struct ttm_buffer_object *bo = entry->bo; - ret = __ttm_bo_reserve(bo, intr, (ticket == NULL), ticket); - if (!ret && unlikely(atomic_read(&bo->cpu_writers) > 0)) { - dma_resv_unlock(bo->base.resv); + if (likely(ticket)) { + ret = dma_resv_ctx_lock(ticket, bo->base.resv, intr); + if (ret == -EDEADLK) + goto retry; + } else { + ret = dma_resv_trylock(bo->base.resv) ? 0 : -EBUSY; + } + if (!ret && unlikely(atomic_read(&bo->cpu_writers) > 0)) { + if (!ticket) + dma_resv_unlock(bo->base.resv); ret = -EBUSY; } else if (ret == -EALREADY && dups) { struct ttm_validate_buffer *safe = entry; + entry = list_prev_entry(entry, head); list_del(&safe->head); list_add(&safe->head, dups); continue; } - if (!ret) { - if (!entry->num_shared) - continue; + if (unlikely(ret)) + goto error; - ret = dma_resv_reserve_shared(bo->base.resv, - entry->num_shared); - if (!ret) - continue; - } - - /* uh oh, we lost out, drop every reservation and try - * to only reserve this buffer, then start over if - * this succeeds. 
- */ - ttm_eu_backoff_reservation_reverse(list, entry); - - if (ret == -EDEADLK) { - if (intr) { - ret = dma_resv_lock_slow_interruptible(bo->base.resv, - ticket); - } else { - dma_resv_lock_slow(bo->base.resv, ticket); - ret = 0; - } - } + if (!entry->num_shared) + continue; - if (!ret && entry->num_shared) - ret = dma_resv_reserve_shared(bo->base.resv, - entry->num_shared); - - if (unlikely(ret != 0)) { - if (ret == -EINTR) - ret = -ERESTARTSYS; - if (ticket) { - ww_acquire_done(ticket); - ww_acquire_fini(ticket); - } - return ret; + ret = dma_resv_reserve_shared(bo->base.resv, entry->num_shared); + if (unlikely(ret)) { + if (!ticket) + dma_resv_unlock(bo->base.resv); + goto error; } - - /* move this item to the front of the list, - * forces correct iteration of the loop without keeping track - */ - list_del(&entry->head); - list_add(&entry->head, list); } if (del_lru) { @@ -179,10 +148,23 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, spin_unlock(&glob->lru_lock); } return 0; + +error: + if (ret == -EINTR) + ret = -ERESTARTSYS; + if (ticket) { + dma_resv_ctx_unlock_all(ticket); + dma_resv_ctx_done(ticket); + dma_resv_ctx_fini(ticket); + } else { + list_for_each_entry_continue_reverse(entry, list, head) + dma_resv_unlock(entry->bo->base.resv); + } + return ret; } EXPORT_SYMBOL(ttm_eu_reserve_buffers); -void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket, +void ttm_eu_fence_buffer_objects(struct dma_resv_ctx *ticket, struct list_head *list, struct dma_fence *fence) { @@ -208,10 +190,13 @@ void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket, ttm_bo_add_to_lru(bo); else ttm_bo_move_to_lru_tail(bo, NULL); - dma_resv_unlock(bo->base.resv); + if (!ticket) + dma_resv_unlock(bo->base.resv); } spin_unlock(&glob->lru_lock); - if (ticket) - ww_acquire_fini(ticket); + if (ticket) { + dma_resv_ctx_unlock_all(ticket); + dma_resv_ctx_fini(ticket); + } } EXPORT_SYMBOL(ttm_eu_fence_buffer_objects); diff --git 
a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 0b5472450633..2d7c5ad25359 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -443,7 +443,7 @@ void vmw_resource_unreserve(struct vmw_resource *res, * reserved and validated backup buffer. */ static int -vmw_resource_check_buffer(struct ww_acquire_ctx *ticket, +vmw_resource_check_buffer(struct dma_resv_ctx *ticket, struct vmw_resource *res, bool interruptible, struct ttm_validate_buffer *val_buf) @@ -535,7 +535,7 @@ int vmw_resource_reserve(struct vmw_resource *res, bool interruptible, * @val_buf: Backup buffer information. */ static void -vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket, +vmw_resource_backoff_reservation(struct dma_resv_ctx *ticket, struct ttm_validate_buffer *val_buf) { struct list_head val_list; @@ -558,7 +558,7 @@ vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket, * @res: The resource to evict. * @interruptible: Whether to wait interruptible. 
*/ -static int vmw_resource_do_evict(struct ww_acquire_ctx *ticket, +static int vmw_resource_do_evict(struct dma_resv_ctx *ticket, struct vmw_resource *res, bool interruptible) { struct ttm_validate_buffer val_buf; @@ -822,7 +822,7 @@ static void vmw_resource_evict_type(struct vmw_private *dev_priv, struct vmw_resource *evict_res; unsigned err_count = 0; int ret; - struct ww_acquire_ctx ticket; + struct dma_resv_ctx ticket; do { spin_lock(&dev_priv->resource_lock); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h index 1d2322ad6fd5..43f48df3844f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h @@ -77,7 +77,7 @@ struct vmw_validation_context { struct list_head resource_ctx_list; struct list_head bo_list; struct list_head page_list; - struct ww_acquire_ctx ticket; + struct dma_resv_ctx ticket; struct mutex *res_mutex; unsigned int merge_dups; unsigned int mem_size_left; diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h index 7e46cc678e7e..4e86b6fd6c57 100644 --- a/include/drm/ttm/ttm_execbuf_util.h +++ b/include/drm/ttm/ttm_execbuf_util.h @@ -32,6 +32,7 @@ #define _TTM_EXECBUF_UTIL_H_ #include +#include #include "ttm_bo_api.h" @@ -52,20 +53,20 @@ struct ttm_validate_buffer { /** * function ttm_eu_backoff_reservation * - * @ticket: ww_acquire_ctx from reserve call + * @ticket: reservation_context from reserve call * @list: thread private list of ttm_validate_buffer structs. * * Undoes all buffer validation reservations for bos pointed to by * the list entries. 
*/ -extern void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket, +extern void ttm_eu_backoff_reservation(struct dma_resv_ctx *ticket, struct list_head *list); /** * function ttm_eu_reserve_buffers * - * @ticket: [out] ww_acquire_ctx filled in by call, or NULL if only + * @ticket: [out] reservation_context filled in by caller, or NULL if only * non-blocking reserves should be tried. * @list: thread private list of ttm_validate_buffer structs. * @intr: should the wait be interruptible @@ -97,14 +98,14 @@ extern void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket, * has failed. */ -extern int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, +extern int ttm_eu_reserve_buffers(struct dma_resv_ctx *ticket, struct list_head *list, bool intr, struct list_head *dups, bool del_lru); /** * function ttm_eu_fence_buffer_objects. * - * @ticket: ww_acquire_ctx from reserve call + * @ticket: reservation_context from reserve call * @list: thread private list of ttm_validate_buffer structs. * @fence: The new exclusive fence for the buffers. 
  *
@@ -114,7 +115,7 @@ extern int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
  *
  */
-extern void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
+extern void ttm_eu_fence_buffer_objects(struct dma_resv_ctx *ticket,
 					struct list_head *list,
 					struct dma_fence *fence);

From patchwork Wed Sep 18 17:55:25 2019
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 11151119
From: Christian König
To: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
    chris@chris-wilson.co.uk, daniel@ffwll.ch, sumit.semwal@linaro.org
Subject: [PATCH 3/3] drm/gem: use new dma_resv_ctx
Date: Wed, 18 Sep 2019 19:55:25 +0200
Message-Id: <20190918175525.49441-3-christian.koenig@amd.com>
In-Reply-To: <20190918175525.49441-1-christian.koenig@amd.com>
References: <20190918175525.49441-1-christian.koenig@amd.com>

Use the new dma_resv_ctx object instead of implementing deadlock handling on
our own.

Signed-off-by: Christian König
---
 drivers/gpu/drm/drm_gem.c               | 62 +++++++------------------
 drivers/gpu/drm/panfrost/panfrost_job.c |  2 +-
 drivers/gpu/drm/v3d/v3d_gem.c           | 10 ++--
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  2 +-
 include/drm/drm_gem.h                   |  5 +-
 5 files changed, 27 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6854f5867d51..b3b684cf1e1b 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1280,66 +1280,38 @@ void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
  */
 int
 drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
-			  struct ww_acquire_ctx *acquire_ctx)
+			  struct dma_resv_ctx *acquire_ctx)
 {
-	int contended = -1;
 	int i, ret;
 
-	ww_acquire_init(acquire_ctx, &reservation_ww_class);
+	dma_resv_ctx_init(acquire_ctx);
 
 retry:
-	if (contended != -1) {
-		struct drm_gem_object *obj = objs[contended];
-
-		ret = dma_resv_lock_slow_interruptible(obj->resv,
-						       acquire_ctx);
-		if (ret) {
-			ww_acquire_done(acquire_ctx);
-			return ret;
-		}
-	}
-
 	for (i = 0; i < count; i++) {
-		if (i == contended)
-			continue;
-
-		ret = dma_resv_lock_interruptible(objs[i]->resv,
-						  acquire_ctx);
-		if (ret) {
-			int j;
-
-			for (j = 0; j < i; j++)
-				dma_resv_unlock(objs[j]->resv);
-
-			if (contended != -1 && contended >= i)
-				dma_resv_unlock(objs[contended]->resv);
-
-			if (ret == -EDEADLK) {
-				contended = i;
-				goto retry;
-			}
-
-			ww_acquire_done(acquire_ctx);
-			return ret;
-		}
+		ret = dma_resv_ctx_lock(acquire_ctx, objs[i]->resv, true);
+		if (ret)
+			goto error;
 	}
 
-	ww_acquire_done(acquire_ctx);
-
+	dma_resv_ctx_done(acquire_ctx);
 	return 0;
+
+error:
+	if (ret == -EDEADLK)
+		goto retry;
+
+	dma_resv_ctx_unlock_all(acquire_ctx);
+	dma_resv_ctx_done(acquire_ctx);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_lock_reservations);
 
 void
 drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
-			    struct ww_acquire_ctx *acquire_ctx)
+			    struct dma_resv_ctx *acquire_ctx)
 {
-	int i;
-
-	for (i = 0; i < count; i++)
-		dma_resv_unlock(objs[i]->resv);
-
-	ww_acquire_fini(acquire_ctx);
+	dma_resv_ctx_unlock_all(acquire_ctx);
+	dma_resv_ctx_fini(acquire_ctx);
 }
 EXPORT_SYMBOL(drm_gem_unlock_reservations);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 05c85f45a0de..b73079173c57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -218,7 +218,7 @@ int panfrost_job_push(struct panfrost_job *job)
 	struct panfrost_device *pfdev = job->pfdev;
 	int slot = panfrost_job_get_slot(job);
 	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
-	struct ww_acquire_ctx acquire_ctx;
+	struct dma_resv_ctx acquire_ctx;
 	int ret = 0;
 
 	mutex_lock(&pfdev->sched_lock);
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index 5d80507b539b..745570fccad9 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -248,7 +248,7 @@ v3d_invalidate_caches(struct v3d_dev *v3d)
  */
 static int
 v3d_lock_bo_reservations(struct v3d_job *job,
-			 struct ww_acquire_ctx *acquire_ctx)
+			 struct dma_resv_ctx *acquire_ctx)
 {
 	int i, ret;
 
@@ -486,7 +486,7 @@ v3d_push_job(struct v3d_file_priv *v3d_priv,
 static void
 v3d_attach_fences_and_unlock_reservation(struct drm_file *file_priv,
 					 struct v3d_job *job,
-					 struct ww_acquire_ctx *acquire_ctx,
+					 struct dma_resv_ctx *acquire_ctx,
 					 u32 out_sync,
 					 struct dma_fence *done_fence)
 {
@@ -530,7 +530,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
 	struct drm_v3d_submit_cl *args = data;
 	struct v3d_bin_job *bin = NULL;
 	struct v3d_render_job *render;
-	struct ww_acquire_ctx acquire_ctx;
+	struct dma_resv_ctx acquire_ctx;
 	int ret = 0;
 
 	trace_v3d_submit_cl_ioctl(&v3d->drm, args->rcl_start, args->rcl_end);
@@ -642,7 +642,7 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data,
 	struct v3d_file_priv *v3d_priv = file_priv->driver_priv;
 	struct drm_v3d_submit_tfu *args = data;
 	struct v3d_tfu_job *job;
-	struct ww_acquire_ctx acquire_ctx;
+	struct dma_resv_ctx acquire_ctx;
 	int ret = 0;
 
 	trace_v3d_submit_tfu_ioctl(&v3d->drm, args->iia);
@@ -738,7 +738,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
 	struct drm_v3d_submit_csd *args = data;
 	struct v3d_csd_job *job;
 	struct v3d_job *clean_job;
-	struct ww_acquire_ctx acquire_ctx;
+	struct dma_resv_ctx acquire_ctx;
 	int ret;
 
 	trace_v3d_submit_csd_ioctl(&v3d->drm, args->cfg[5], args->cfg[6]);
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 314e02f94d9c..02b4ee1deeb4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -77,7 +77,7 @@ struct virtio_gpu_object {
 	container_of((gobj), struct virtio_gpu_object, base.base)
 
 struct virtio_gpu_object_array {
-	struct ww_acquire_ctx ticket;
+	struct dma_resv_ctx ticket;
 	struct list_head next;
 	u32 nents, total;
 	struct drm_gem_object *objs[];
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 6aaba14f5972..dff4d45a2c41 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -36,6 +36,7 @@
 
 #include
 #include
+#include
 
 #include
 
@@ -393,9 +394,9 @@ struct drm_gem_object *drm_gem_object_lookup(struct drm_file *filp, u32 handle);
 long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
 			   bool wait_all, unsigned long timeout);
 int drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
-			      struct ww_acquire_ctx *acquire_ctx);
+			      struct dma_resv_ctx *acquire_ctx);
 void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
-				 struct ww_acquire_ctx *acquire_ctx);
+				 struct dma_resv_ctx *acquire_ctx);
 int drm_gem_fence_array_add(struct xarray *fence_array,
 			    struct dma_fence *fence);
 int drm_gem_fence_array_add_implicit(struct xarray *fence_array,