From patchwork Tue Nov 23 14:20:46 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634281
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 01/26] drm/amdgpu: partially revert "svm bo enable_signal call condition"
Date: Tue, 23 Nov 2021 15:20:46 +0100
Message-Id: <20211123142111.3885-2-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

Partially revert commit 5f319c5c21b5909abb43d8aadc92a8aa549ee443 ("drm/amdgpu: svm bo enable_signal call condition").

First of all, this is an illegal use of RCU: dma_fence_enable_sw_signaling() is called without holding a reference to the fence in question, which can crash badly.

Second, the code does not have the intended effect: only the exclusive fence is handled, but the KFD fences are always added as shared fences.

Only keep the handling to throw away the content of SVM BOs.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index eab4380f28e5..c15687ce67c4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -116,17 +116,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
 	abo = ttm_to_amdgpu_bo(bo);
 
 	if (abo->flags & AMDGPU_AMDKFD_CREATE_SVM_BO) {
-		struct dma_fence *fence;
-		struct dma_resv *resv = &bo->base._resv;
-
-		rcu_read_lock();
-		fence = rcu_dereference(resv->fence_excl);
-		if (fence && !fence->ops->signaled)
-			dma_fence_enable_sw_signaling(fence);
-
 		placement->num_placement = 0;
 		placement->num_busy_placement = 0;
-		rcu_read_unlock();
 		return;
 	}
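
For context, the reverted code was broken because it enabled signaling on a fence it held no reference to. A safe variant would have needed to take a reference first — a minimal illustrative sketch only, using the dma_fence_get_rcu_safe() helper, not part of the patch:

	struct dma_fence *fence;

	rcu_read_lock();
	/* Take a full reference before leaving the RCU read side; the
	 * plain rcu_dereference() in the reverted code had no such
	 * protection, so the fence could be freed under it. */
	fence = dma_fence_get_rcu_safe(&resv->fence_excl);
	rcu_read_unlock();

	if (fence) {
		dma_fence_enable_sw_signaling(fence);
		dma_fence_put(fence);
	}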
From patchwork Tue Nov 23 14:20:47 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634283
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 02/26] dma-buf: cleanup pruning of dma_resv objects
Date: Tue, 23 Nov 2021 15:20:47 +0100
Message-Id: <20211123142111.3885-3-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

The i915 driver implements a prune function which is called when it is very likely that the fences inside the dma_resv object can be removed because they are all signaled. TTM does something similar after waiting for the dma_resv object to be idle.

Move those functions into dma-resv.c, since pruning fences is behavior internal to the object.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c                   | 32 ++++++++++++++++++++
 drivers/gpu/drm/i915/Makefile                |  1 -
 drivers/gpu/drm/i915/dma_resv_utils.c        | 17 -----------
 drivers/gpu/drm/i915/dma_resv_utils.h        | 13 --------
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c |  3 +-
 drivers/gpu/drm/i915/gem/i915_gem_wait.c     |  3 +-
 drivers/gpu/drm/ttm/ttm_bo.c                 |  2 +-
 include/linux/dma-resv.h                     |  2 ++
 8 files changed, 37 insertions(+), 36 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/dma_resv_utils.c
 delete mode 100644 drivers/gpu/drm/i915/dma_resv_utils.h

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index ff3c0558b3b8..f6499e87963c 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -324,6 +324,38 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 }
 EXPORT_SYMBOL(dma_resv_add_excl_fence);
 
+/**
+ * dma_resv_prune - remove signaled fences
+ * @obj: The dma_resv object to prune
+ *
+ * Remove all the signaled fences from the dma_resv object.
+ */
+void dma_resv_prune(struct dma_resv *obj)
+{
+	dma_resv_assert_held(obj);
+
+	if (dma_resv_test_signaled(obj, true))
+		dma_resv_add_excl_fence(obj, NULL);
+}
+EXPORT_SYMBOL(dma_resv_prune);
+
+/**
+ * dma_resv_prune_unlocked - try to remove signaled fences
+ * @obj: The dma_resv object to prune
+ *
+ * Try to lock the object, test if it is signaled and if yes then remove all
+ * the signaled fences.
+ */
+void dma_resv_prune_unlocked(struct dma_resv *obj)
+{
+	if (!dma_resv_trylock(obj))
+		return;
+
+	dma_resv_prune(obj);
+	dma_resv_unlock(obj);
+}
+EXPORT_SYMBOL(dma_resv_prune_unlocked);
+
 /**
  * dma_resv_iter_restart_unlocked - restart the unlocked iterator
  * @cursor: The dma_resv_iter object to restart
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 660bb03de6fc..5c1af130cb6d 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -60,7 +60,6 @@ i915-y += i915_drv.o \
 
 # core library code
 i915-y += \
-	dma_resv_utils.o \
 	i915_memcpy.o \
 	i915_mm.o \
 	i915_sw_fence.o \
diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
deleted file mode 100644
index 7df91b7e4ca8..000000000000
--- a/drivers/gpu/drm/i915/dma_resv_utils.c
+++ /dev/null
@@ -1,17 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2020 Intel Corporation
- */
-
-#include <linux/dma-resv.h>
-
-#include "dma_resv_utils.h"
-
-void dma_resv_prune(struct dma_resv *resv)
-{
-	if (dma_resv_trylock(resv)) {
-		if (dma_resv_test_signaled(resv, true))
-			dma_resv_add_excl_fence(resv, NULL);
-		dma_resv_unlock(resv);
-	}
-}
diff --git a/drivers/gpu/drm/i915/dma_resv_utils.h b/drivers/gpu/drm/i915/dma_resv_utils.h
deleted file mode 100644
index b9d8fb5f8367..000000000000
--- a/drivers/gpu/drm/i915/dma_resv_utils.h
+++ /dev/null
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2020 Intel Corporation
- */
-
-#ifndef DMA_RESV_UTILS_H
-#define DMA_RESV_UTILS_H
-
-struct dma_resv;
-
-void dma_resv_prune(struct dma_resv *resv);
-
-#endif /* DMA_RESV_UTILS_H */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 5ab136ffdeb2..48029bbda682 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -15,7 +15,6 @@
 
 #include "gt/intel_gt_requests.h"
 
-#include "dma_resv_utils.h"
 #include "i915_trace.h"
 
 static bool swap_available(void)
@@ -229,7 +228,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww,
 				i915_gem_object_unlock(obj);
 			}
 
-			dma_resv_prune(obj->base.resv);
+			dma_resv_prune_unlocked(obj->base.resv);
 
 			scanned += obj->base.size >> PAGE_SHIFT;
 skip:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index f11325484110..75b58aa8d4a7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -10,7 +10,6 @@
 
 #include "gt/intel_engine.h"
 
-#include "dma_resv_utils.h"
 #include "i915_gem_ioctls.h"
 #include "i915_gem_object.h"
 
@@ -57,7 +56,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 	 * signaled.
	 */
 	if (timeout > 0)
-		dma_resv_prune(resv);
+		dma_resv_prune_unlocked(resv);
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index e4a20a3a5d16..e43f551594a8 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1086,7 +1086,7 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
 	if (timeout == 0)
 		return -EBUSY;
 
-	dma_resv_add_excl_fence(bo->base.resv, NULL);
+	dma_resv_prune(bo->base.resv);
 	return 0;
 }
 EXPORT_SYMBOL(ttm_bo_wait);
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index eebf04325b34..2594fef75f51 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -458,6 +458,8 @@ void dma_resv_fini(struct dma_resv *obj);
 int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
+void dma_resv_prune(struct dma_resv *obj);
+void dma_resv_prune_unlocked(struct dma_resv *obj);
 int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
 			unsigned *pshared_count, struct dma_fence ***pshared);
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
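
The two new helpers split by locking context — a minimal usage sketch, with error handling elided and resv standing in for any dma_resv object:

	/* Opportunistic: skips the object entirely if it is contended,
	 * which is what the i915 shrinker and wait paths want. */
	dma_resv_prune_unlocked(resv);

	/* Direct: for callers such as ttm_bo_wait() that already hold
	 * the reservation lock. */
	dma_resv_lock(resv, NULL);
	dma_resv_prune(resv);
	dma_resv_unlock(resv);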
From patchwork Tue Nov 23 14:20:48 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634285
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 03/26] dma-buf: make fence mandatory for dma_resv_add_excl_fence
Date: Tue, 23 Nov 2021 15:20:48 +0100
Message-Id: <20211123142111.3885-4-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

Calling dma_resv_add_excl_fence() with a NULL fence and expecting that this frees up the fences is simply abuse of the internals of the dma_resv object.

Rework how pruning fences works and make the fence parameter mandatory.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 39 ++++++++++++++++++++++++++++++++++----
 1 file changed, 35 insertions(+), 4 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index f6499e87963c..e627a4274ff6 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -96,6 +96,34 @@ static void dma_resv_list_free(struct dma_resv_list *list)
 	kfree_rcu(list, rcu);
 }
 
+/**
+ * dma_resv_list_prune - drop all signaled fences
+ * @list: list to check for signaled fences
+ * @obj: dma_resv object for lockdep
+ *
+ * Replace all the signaled fences with the stub fence to free them up.
+ */
+static void dma_resv_list_prune(struct dma_resv_list *list,
+				struct dma_resv *obj)
+{
+	unsigned int i;
+
+	if (!list)
+		return;
+
+	for (i = 0; i < list->shared_count; ++i) {
+		struct dma_fence *fence;
+
+		fence = rcu_dereference_protected(list->shared[i],
+						  dma_resv_held(obj));
+		if (!dma_fence_is_signaled(fence))
+			continue;
+
+		RCU_INIT_POINTER(list->shared[i], dma_fence_get_stub());
+		dma_fence_put(fence);
+	}
+}
+
 /**
  * dma_resv_init - initialize a reservation object
  * @obj: the reservation object
@@ -305,8 +333,7 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 	if (old)
 		i = old->shared_count;
 
-	if (fence)
-		dma_fence_get(fence);
+	dma_fence_get(fence);
 
 	write_seqcount_begin(&obj->seq);
 	/* write_seqcount_begin provides the necessary memory barrier */
@@ -334,8 +361,12 @@ void dma_resv_prune(struct dma_resv *obj)
 {
 	dma_resv_assert_held(obj);
 
-	if (dma_resv_test_signaled(obj, true))
-		dma_resv_add_excl_fence(obj, NULL);
+	write_seqcount_begin(&obj->seq);
+	if (obj->fence_excl && dma_fence_is_signaled(obj->fence_excl))
+		dma_fence_put(rcu_replace_pointer(obj->fence_excl, NULL,
+						  dma_resv_held(obj)));
+	dma_resv_list_prune(dma_resv_shared_list(obj), obj);
+	write_seqcount_end(&obj->seq);
 }
 EXPORT_SYMBOL(dma_resv_prune);
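
For drivers, the user-visible change amounts to the migration already seen in the ttm_bo.c hunk of the previous patch — sketched here for emphasis, with resv held by the caller:

	/* Before: abusing the exclusive fence slot with a NULL fence to
	 * drop all fences once the object is idle. Now disallowed. */
	dma_resv_add_excl_fence(resv, NULL);

	/* After: the dedicated pruning helper does the same job
	 * explicitly, under the reservation lock. */
	dma_resv_prune(resv);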
From patchwork Tue Nov 23 14:20:49 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634287
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 04/26] drm/qxl: use iterator instead of dma_resv_shared_list
Date: Tue, 23 Nov 2021 15:20:49 +0100
Message-Id: <20211123142111.3885-5-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

I'm not sure why it is useful to know the number of fences in the reservation object, but we try to avoid exposing the dma_resv_shared_list() function, so use the iterator instead.

If more information is desired we could use dma_resv_describe() as well.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/qxl/qxl_debugfs.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_debugfs.c b/drivers/gpu/drm/qxl/qxl_debugfs.c
index 1f9a59601bb1..6a36b0fd845c 100644
--- a/drivers/gpu/drm/qxl/qxl_debugfs.c
+++ b/drivers/gpu/drm/qxl/qxl_debugfs.c
@@ -57,13 +57,16 @@ qxl_debugfs_buffers_info(struct seq_file *m, void *data)
 	struct qxl_bo *bo;
 
 	list_for_each_entry(bo, &qdev->gem.objects, list) {
-		struct dma_resv_list *fobj;
-		int rel;
-
-		rcu_read_lock();
-		fobj = dma_resv_shared_list(bo->tbo.base.resv);
-		rel = fobj ? fobj->shared_count : 0;
-		rcu_read_unlock();
+		struct dma_resv_iter cursor;
+		struct dma_fence *fence;
+		int rel = 0;
+
+		dma_resv_iter_begin(&cursor, bo->tbo.base.resv, true);
+		dma_resv_for_each_fence_unlocked(&cursor, fence) {
+			if (dma_resv_iter_is_restarted(&cursor))
+				rel = 0;
+			++rel;
+		}
 
 		seq_printf(m, "size %ld, pc %d, num releases %d\n",
 			   (unsigned long)bo->tbo.base.size,
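
The restart check matters because the unlocked iterator can restart from the beginning whenever the fence list changes concurrently, so any state accumulated per pass has to be reset. The same pattern as a hypothetical stand-alone helper (illustrative only, not part of the patch):

	static unsigned int count_fences_unlocked(struct dma_resv *resv)
	{
		struct dma_resv_iter cursor;
		struct dma_fence *fence;
		unsigned int count = 0;

		dma_resv_iter_begin(&cursor, resv, true);
		dma_resv_for_each_fence_unlocked(&cursor, fence) {
			/* A restart means the count so far is stale. */
			if (dma_resv_iter_is_restarted(&cursor))
				count = 0;
			++count;
		}
		dma_resv_iter_end(&cursor);
		return count;
	}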
From patchwork Tue Nov 23 14:20:50 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634289
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 05/26] dma-buf: add dma_resv_replace_fences
Date: Tue, 23 Nov 2021 15:20:50 +0100
Message-Id: <20211123142111.3885-6-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

This function allows replacing fences in the shared fence list when we can guarantee that the operation represented by the original fence has finished, or that the resources protected by the dma_resv object are no longer accessed by the time the new fence finishes.

Then use this function in the amdkfd code when BOs are unmapped from the process.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c                    | 43 ++++++++++++++++
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 49 +++----------------
 include/linux/dma-resv.h                      |  2 +
 3 files changed, 52 insertions(+), 42 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index e627a4274ff6..0daed67cab0e 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -312,6 +312,49 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 }
 EXPORT_SYMBOL(dma_resv_add_shared_fence);
 
+/**
+ * dma_resv_replace_fences - replace fences in the dma_resv obj
+ * @obj: the reservation object
+ * @context: the context of the fences to replace
+ * @replacement: the new fence to use instead
+ *
+ * Replace fences with a specified context with a new fence. Only valid if the
+ * operation represented by the original fences is completed, or no longer has
+ * access to the resources protected by the dma_resv object when the new fence
+ * completes.
+ */
+void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
+			     struct dma_fence *replacement)
+{
+	struct dma_resv_list *list;
+	struct dma_fence *old;
+	unsigned int i;
+
+	dma_resv_assert_held(obj);
+
+	write_seqcount_begin(&obj->seq);
+
+	old = dma_resv_excl_fence(obj);
+	if (old->context == context) {
+		RCU_INIT_POINTER(obj->fence_excl, dma_fence_get(replacement));
+		dma_fence_put(old);
+	}
+
+	list = dma_resv_shared_list(obj);
+	for (i = 0; list && i < list->shared_count; ++i) {
+		old = rcu_dereference_protected(list->shared[i],
+						dma_resv_held(obj));
+		if (old->context != context)
+			continue;
+
+		rcu_assign_pointer(list->shared[i], dma_fence_get(replacement));
+		dma_fence_put(old);
+	}
+
+	write_seqcount_end(&obj->seq);
+}
+EXPORT_SYMBOL(dma_resv_replace_fences);
+
 /**
  * dma_resv_add_excl_fence - Add an exclusive fence.
  * @obj: the reservation object
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 71acd577803e..b558ef0f8c4a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -236,53 +236,18 @@ void amdgpu_amdkfd_release_notify(struct amdgpu_bo *bo)
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 					struct amdgpu_amdkfd_fence *ef)
 {
-	struct dma_resv *resv = bo->tbo.base.resv;
-	struct dma_resv_list *old, *new;
-	unsigned int i, j, k;
+	struct dma_fence *replacement;
 
 	if (!ef)
 		return -EINVAL;
 
-	old = dma_resv_shared_list(resv);
-	if (!old)
-		return 0;
-
-	new = kmalloc(struct_size(new, shared, old->shared_max), GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	/* Go through all the shared fences in the resevation object and sort
-	 * the interesting ones to the end of the list.
+	/* TODO: Instead of block before we should use the fence of the page
+	 * table update and TLB flush here directly.
 	 */
-	for (i = 0, j = old->shared_count, k = 0; i < old->shared_count; ++i) {
-		struct dma_fence *f;
-
-		f = rcu_dereference_protected(old->shared[i],
-					      dma_resv_held(resv));
-
-		if (f->context == ef->base.context)
-			RCU_INIT_POINTER(new->shared[--j], f);
-		else
-			RCU_INIT_POINTER(new->shared[k++], f);
-	}
-	new->shared_max = old->shared_max;
-	new->shared_count = k;
-
-	/* Install the new fence list, seqcount provides the barriers */
-	write_seqcount_begin(&resv->seq);
-	RCU_INIT_POINTER(resv->fence, new);
-	write_seqcount_end(&resv->seq);
-
-	/* Drop the references to the removed fences or move them to ef_list */
-	for (i = j; i < old->shared_count; ++i) {
-		struct dma_fence *f;
-
-		f = rcu_dereference_protected(new->shared[i],
-					      dma_resv_held(resv));
-		dma_fence_put(f);
-	}
-	kfree_rcu(old, rcu);
-
+	replacement = dma_fence_get_stub();
+	dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context,
+				replacement);
+	dma_fence_put(replacement);
 	return 0;
 }
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 2594fef75f51..0eb0c08c51c9 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -457,6 +457,8 @@ void dma_resv_init(struct dma_resv *obj);
 void dma_resv_fini(struct dma_resv *obj);
 int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
+void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
+			     struct dma_fence *fence);
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 void dma_resv_prune(struct dma_resv *obj);
 void dma_resv_prune_unlocked(struct dma_resv *obj);
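
The amdkfd hunk above is also the canonical calling pattern — a minimal sketch, assuming the reservation lock is held and ctx is the fence context to retire:

	struct dma_fence *stub = dma_fence_get_stub();

	/* Every fence from context `ctx`, exclusive or shared, is
	 * swapped for the always-signaled stub fence. */
	dma_resv_replace_fences(resv, ctx, stub);
	dma_fence_put(stub);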
From patchwork Tue Nov 23 14:20:51 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634291
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 06/26] dma-buf: finally make the dma_resv_list private
Date: Tue, 23 Nov 2021 15:20:51 +0100
Message-Id: <20211123142111.3885-7-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

Drivers should never touch this directly.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 26 ++++++++++++++++++++++++++
 include/linux/dma-resv.h   | 26 +-------------------------
 2 files changed, 27 insertions(+), 25 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 0daed67cab0e..611bba5528ad 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -56,6 +56,19 @@
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
+/**
+ * struct dma_resv_list - a list of shared fences
+ * @rcu: for internal use
+ * @shared_count: table of shared fences
+ * @shared_max: for growing shared fence table
+ * @shared: shared fence table
+ */
+struct dma_resv_list {
+	struct rcu_head rcu;
+	u32 shared_count, shared_max;
+	struct dma_fence __rcu *shared[];
+};
+
 /**
  * dma_resv_list_alloc - allocate fence list
  * @shared_max: number of fences we need space for
@@ -161,6 +174,19 @@ void dma_resv_fini(struct dma_resv *obj)
 }
 EXPORT_SYMBOL(dma_resv_fini);
 
+/**
+ * dma_resv_shared_list - get the reservation object's shared fence list
+ * @obj: the reservation object
+ *
+ * Returns the shared fence list. Caller must either hold the objects
+ * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(),
+ * or one of the variants of each
+ */
+static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj)
+{
+	return rcu_dereference_check(obj->fence, dma_resv_held(obj));
+}
+
 /**
  * dma_resv_reserve_shared - Reserve space to add shared fences to
  * a dma_resv.
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 0eb0c08c51c9..e0cec3a57c08 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -47,18 +47,7 @@
 
 extern struct ww_class reservation_ww_class;
 
-/**
- * struct dma_resv_list - a list of shared fences
- * @rcu: for internal use
- * @shared_count: table of shared fences
- * @shared_max: for growing shared fence table
- * @shared: shared fence table
- */
-struct dma_resv_list {
-	struct rcu_head rcu;
-	u32 shared_count, shared_max;
-	struct dma_fence __rcu *shared[];
-};
+struct dma_resv_list;
 
 /**
  * struct dma_resv - a reservation object manages fences for a buffer
@@ -440,19 +429,6 @@ dma_resv_excl_fence(struct dma_resv *obj)
 	return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj));
 }
 
-/**
- * dma_resv_shared_list - get the reservation object's shared fence list
- * @obj: the reservation object
- *
- * Returns the shared fence list. Caller must either hold the objects
- * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(),
- * or one of the variants of each
- */
-static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj)
-{
-	return rcu_dereference_check(obj->fence, dma_resv_held(obj));
-}
-
 void dma_resv_init(struct dma_resv *obj);
 void dma_resv_fini(struct dma_resv *obj);
 int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
From patchwork Tue Nov 23 14:20:52 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634293
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 07/26] dma-buf: drop excl_fence parameter from dma_resv_get_fences
Date: Tue, 23 Nov 2021 15:20:52 +0100
Message-Id: <20211123142111.3885-8-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

Returning the exclusive fence separately is no longer needed. Instead add a write parameter to indicate the use case.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c                   | 48 ++++++++------------
 drivers/dma-buf/st-dma-resv.c                | 26 ++---------
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c  |  6 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c      |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c |  3 +-
 include/linux/dma-resv.h                     |  4 +-
 6 files changed, 31 insertions(+), 58 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 611bba5528ad..0a69f4b7e6b5 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -675,57 +675,45 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
  * dma_resv_get_fences - Get an object's shared and exclusive
  * fences without update side lock held
  * @obj: the reservation object
- * @fence_excl: the returned exclusive fence (or NULL)
- * @shared_count: the number of shared fences returned
- * @shared: the array of shared fence ptrs returned (array is krealloc'd to
- * the required size, and must be freed by caller)
- *
- * Retrieve all fences from the reservation object. If the pointer for the
- * exclusive fence is not specified the fence is put into the array of the
- * shared fences as well. Returns either zero or -ENOMEM.
+ * @write: true if we should return all fences
+ * @num_fences: the number of fences returned
+ * @fences: the array of fence ptrs returned (array is krealloc'd to the
+ * required size, and must be freed by caller)
+ *
+ * Retrieve all fences from the reservation object.
+ * Returns either zero or -ENOMEM.
  */
-int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **fence_excl,
-			unsigned int *shared_count, struct dma_fence ***shared)
+int dma_resv_get_fences(struct dma_resv *obj, bool write,
+			unsigned int *num_fences, struct dma_fence ***fences)
 {
 	struct dma_resv_iter cursor;
 	struct dma_fence *fence;
 
-	*shared_count = 0;
-	*shared = NULL;
-
-	if (fence_excl)
-		*fence_excl = NULL;
+	*num_fences = 0;
+	*fences = NULL;
 
-	dma_resv_iter_begin(&cursor, obj, true);
+	dma_resv_iter_begin(&cursor, obj, write);
 	dma_resv_for_each_fence_unlocked(&cursor, fence) {
 
 		if (dma_resv_iter_is_restarted(&cursor)) {
 			unsigned int count;
 
-			while (*shared_count)
-				dma_fence_put((*shared)[--(*shared_count)]);
+			while (*num_fences)
+				dma_fence_put((*fences)[--(*num_fences)]);
 
-			if (fence_excl)
-				dma_fence_put(*fence_excl);
-
-			count = cursor.shared_count;
-			count += fence_excl ? 0 : 1;
+			count = cursor.shared_count + 1;
 
 			/* Eventually re-allocate the array */
-			*shared = krealloc_array(*shared, count,
+			*fences = krealloc_array(*fences, count,
 						 sizeof(void *),
 						 GFP_KERNEL);
-			if (count && !*shared) {
+			if (count && !*fences) {
 				dma_resv_iter_end(&cursor);
 				return -ENOMEM;
 			}
 		}
 
-		dma_fence_get(fence);
-		if (dma_resv_iter_is_exclusive(&cursor) && fence_excl)
-			*fence_excl = fence;
-		else
-			(*shared)[(*shared_count)++] = fence;
+		(*fences)[(*num_fences)++] = dma_fence_get(fence);
 	}
 	dma_resv_iter_end(&cursor);
diff --git a/drivers/dma-buf/st-dma-resv.c b/drivers/dma-buf/st-dma-resv.c
index bc32b3eedcb6..cbe999c6e7a6 100644
--- a/drivers/dma-buf/st-dma-resv.c
+++ b/drivers/dma-buf/st-dma-resv.c
@@ -275,7 +275,7 @@ static int test_shared_for_each_unlocked(void *arg)
 
 static int test_get_fences(void *arg, bool shared)
 {
-	struct dma_fence *f, *excl = NULL, **fences = NULL;
+	struct dma_fence *f, **fences = NULL;
 	struct dma_resv resv;
 	int r, i;
 
@@ -304,35 +304,19 @@ static int test_get_fences(void *arg, bool shared)
 	}
 	dma_resv_unlock(&resv);
 
-	r = dma_resv_get_fences(&resv, &excl, &i, &fences);
+	r = dma_resv_get_fences(&resv, shared, &i, &fences);
 	if (r) {
 		pr_err("get_fences failed\n");
 		goto err_free;
 	}
 
-	if (shared) {
-		if (excl != NULL) {
-			pr_err("get_fences returned unexpected excl fence\n");
-			goto err_free;
-		}
-		if (i != 1 || fences[0] != f) {
-			pr_err("get_fences returned unexpected shared fence\n");
-			goto err_free;
-		}
-	} else {
-		if (excl != f) {
-			pr_err("get_fences returned unexpected excl fence\n");
-			goto err_free;
-		}
-		if (i != 0) {
-			pr_err("get_fences returned unexpected shared fence\n");
-			goto err_free;
-		}
+	if (i != 1 || fences[0] != f) {
+		pr_err("get_fences returned unexpected fence\n");
+		goto err_free;
 	}
 
 	dma_fence_signal(f);
 err_free:
-	dma_fence_put(excl);
 	while (i--)
 		dma_fence_put(fences[i]);
 	kfree(fences);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 68108f151dad..d17e1c911689 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -200,8 +200,10 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
 		goto unpin;
 	}
 
-	r = dma_resv_get_fences(new_abo->tbo.base.resv, NULL,
-				&work->shared_count, &work->shared);
+	/* TODO: Unify this with other drivers */
+	r = dma_resv_get_fences(new_abo->tbo.base.resv, true,
+				&work->shared_count,
+				&work->shared);
 	if (unlikely(r != 0)) {
 		DRM_ERROR("failed to get fences for buffer\n");
 		goto unpin;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index b7fb72bff2c1..be48487e2ca7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 	unsigned count;
 	int r;
 
-	r = dma_resv_get_fences(resv, NULL, &count, &fences);
+	r = dma_resv_get_fences(resv, true, &count, &fences);
 	if (r)
 		goto fallback;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
index b5e8ce86dbe7..64c90ff348f2 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
@@ -189,8 +189,7 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 			continue;
 
 		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = dma_resv_get_fences(robj, NULL,
-						  &bo->nr_shared,
+			ret = dma_resv_get_fences(robj, true, &bo->nr_shared,
 						  &bo->shared);
 			if (ret)
 				return ret;
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index e0cec3a57c08..09b676b87c35 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -438,8 +438,8 @@ void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 void dma_resv_prune(struct dma_resv *obj);
 void dma_resv_prune_unlocked(struct dma_resv *obj);
-int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
-			unsigned *pshared_count, struct dma_fence ***pshared);
+int dma_resv_get_fences(struct dma_resv *obj, bool write,
+			unsigned int *num_fences, struct dma_fence ***fences);
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
 			   unsigned long timeout);
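
A minimal sketch of the new calling convention — collect every fence (write = true), wait on them and drop the references; resv is a stand-in and error handling is abbreviated:

	struct dma_fence **fences;
	unsigned int i, count;
	int r;

	r = dma_resv_get_fences(resv, true, &count, &fences);
	if (r)
		return r;

	for (i = 0; i < count; ++i) {
		dma_fence_wait(fences[i], false);
		dma_fence_put(fences[i]);
	}
	kfree(fences);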
From patchwork Tue Nov 23 14:20:53 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634295
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 08/26] dma-buf: add dma_resv_get_singleton
Date: Tue, 23 Nov 2021 15:20:53 +0100
Message-Id: <20211123142111.3885-9-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>

Add a function to simplify getting a single fence for all the fences in the dma_resv object.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 50 ++++++++++++++++++++++++++++++++++++++
 include/linux/dma-resv.h   |  2 ++
 2 files changed, 52 insertions(+)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 0a69f4b7e6b5..f91ca023b550 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -34,6 +34,7 @@
  */
 
 #include <linux/dma-resv.h>
+#include <linux/dma-fence-array.h>
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/sched/mm.h>
@@ -721,6 +722,55 @@ int dma_resv_get_fences(struct dma_resv *obj, bool write,
 }
 EXPORT_SYMBOL_GPL(dma_resv_get_fences);
 
+/**
+ * dma_resv_get_singleton - Get a single fence for all the fences
+ * @obj: the reservation object
+ * @write: true if we should return all fences
+ * @fence: the resulting fence
+ *
+ * Get a single fence representing all the fences inside the resv object.
+ * Returns either 0 for success or -ENOMEM.
+ *
+ * Warning: This can't be used like this when adding the fence back to the resv
+ * object since that can lead to stack corruption when finalizing the
+ * dma_fence_array.
+ */
+int dma_resv_get_singleton(struct dma_resv *obj, bool write,
+			   struct dma_fence **fence)
+{
+	struct dma_fence_array *array;
+	struct dma_fence **fences;
+	unsigned count;
+	int r;
+
+	r = dma_resv_get_fences(obj, write, &count, &fences);
+	if (r)
+		return r;
+
+	if (count == 0) {
+		*fence = NULL;
+		return 0;
+	}
+
+	if (count == 1) {
+		*fence = fences[0];
+		kfree(fences);
+		return 0;
+	}
+
+	array = dma_fence_array_create(count, fences,
+				       dma_fence_context_alloc(1),
+				       1, false);
+	if (!array) {
+		kfree(fences);
+		return -ENOMEM;
+	}
+
+	*fence = &array->base;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dma_resv_get_singleton);
+
 /**
  * dma_resv_wait_timeout - Wait on reservation's objects
  * shared and/or exclusive fences.
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 09b676b87c35..082f77b7bc63 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -440,6 +440,8 @@ void dma_resv_prune(struct dma_resv *obj);
 void dma_resv_prune_unlocked(struct dma_resv *obj);
 int dma_resv_get_fences(struct dma_resv *obj, bool write,
 			unsigned int *num_fences, struct dma_fence ***fences);
+int dma_resv_get_singleton(struct dma_resv *obj, bool write,
+			   struct dma_fence **fence);
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
 			   unsigned long timeout);
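
A minimal usage sketch — squash all fences of an object into one fence that a single wait or callback can consume; note the kerneldoc warning above about adding the result back to the same object:

	struct dma_fence *fence;
	int r;

	r = dma_resv_get_singleton(resv, true, &fence);
	if (r)
		return r;

	/* NULL means the object carried no fences at all. */
	if (fence) {
		dma_fence_wait(fence, false);
		dma_fence_put(fence);
	}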
From patchwork Tue Nov 23 14:20:55 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634299
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 10/26] drm/etnaviv: stop using dma_resv_excl_fence
Date: Tue, 23 Nov 2021 15:20:55 +0100
Message-Id: <20211123142111.3885-11-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

We can fetch the exclusive fence together with the shared ones from
dma_resv_get_fences() as well.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/etnaviv/etnaviv_gem.h        |  1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 14 +++++---------
 drivers/gpu/drm/etnaviv/etnaviv_sched.c      | 10 ----------
 3 files changed, 5 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
index 98e60df882b6..f596d743baa3 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
@@ -80,7 +80,6 @@ struct etnaviv_gem_submit_bo {
 	u64 va;
 	struct etnaviv_gem_object *obj;
 	struct etnaviv_vram_mapping *mapping;
-	struct dma_fence *excl;
 	unsigned int nr_shared;
 	struct dma_fence **shared;
 };
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
index 64c90ff348f2..4286dc93fdaa 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
@@ -188,15 +188,11 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 		if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT)
 			continue;

-		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = dma_resv_get_fences(robj, true, &bo->nr_shared,
-						  &bo->shared);
-			if (ret)
-				return ret;
-		} else {
-			bo->excl = dma_fence_get(dma_resv_excl_fence(robj));
-		}
-
+		ret = dma_resv_get_fences(robj,
+					  !!(bo->flags & ETNA_SUBMIT_BO_WRITE),
+					  &bo->nr_shared, &bo->shared);
+		if (ret)
+			return ret;
 	}

 	return ret;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 180bb633d5c5..8c038a363d15 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -39,16 +39,6 @@ etnaviv_sched_dependency(struct drm_sched_job *sched_job,
 		struct etnaviv_gem_submit_bo *bo = &submit->bos[i];
 		int j;

-		if (bo->excl) {
-			fence = bo->excl;
-			bo->excl = NULL;
-
-			if (!dma_fence_is_signaled(fence))
-				return fence;
-
-			dma_fence_put(fence);
-		}
-
 		for (j = 0; j < bo->nr_shared; j++) {
 			if (!bo->shared[j])
 				continue;
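A hedged sketch of the dma_resv_get_fences() contract the new etnaviv
code relies on; the function name and the scheduler hand-off comment are
illustrative assumptions:

/* Sketch: snapshot the fences a submission must wait on.
 * write=true also returns the shared (read) fences.
 */
static int example_collect_implicit_deps(struct dma_resv *resv, bool write)
{
	struct dma_fence **fences;
	unsigned int count, i;
	int ret;

	ret = dma_resv_get_fences(resv, write, &count, &fences);
	if (ret)
		return ret;

	for (i = 0; i < count; i++) {
		/* hand each fence to the scheduler here, then drop
		 * the reference the snapshot gave us
		 */
		dma_fence_put(fences[i]);
	}
	kfree(fences);
	return 0;
}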
From patchwork Tue Nov 23 14:20:56 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634301
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 11/26] drm/nouveau: stop using dma_resv_excl_fence
Date: Tue, 23 Nov 2021 15:20:56 +0100
Message-Id: <20211123142111.3885-12-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Instead use the new dma_resv_get_singleton function.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index fa73fe57f97b..74f8652d2bd3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -959,7 +959,14 @@ nouveau_bo_vm_cleanup(struct ttm_buffer_object *bo,
 {
 	struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
 	struct drm_device *dev = drm->dev;
-	struct dma_fence *fence = dma_resv_excl_fence(bo->base.resv);
+	struct dma_fence *fence = NULL;
+	int ret;
+
+	/* TODO: This is actually a memory management dependency */
+	ret = dma_resv_get_singleton(bo->base.resv, false, &fence);
+	if (ret)
+		dma_resv_wait_timeout(bo->base.resv, false, false,
+				      MAX_SCHEDULE_TIMEOUT);

 	nv10_bo_put_tile_region(dev, *old_tile, fence);
 	*old_tile = new_tile;
From patchwork Tue Nov 23 14:20:57 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634303
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 12/26] drm/vmwgfx: stop using dma_resv_excl_fence
Date: Tue, 23 Nov 2021 15:20:57 +0100
Message-Id: <20211123142111.3885-13-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Instead use the new dma_resv_get_singleton function.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index 8d1e869cc196..23c3fc2cbf10 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -1168,8 +1168,10 @@ int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start,
 		vmw_bo_fence_single(bo, NULL);
 		if (bo->moving)
 			dma_fence_put(bo->moving);
-		bo->moving = dma_fence_get
-			(dma_resv_excl_fence(bo->base.resv));
+
+		/* TODO: This is actually a memory management dependency */
+		return dma_resv_get_singleton(bo->base.resv, false,
+					      &bo->moving);
 	}

 	return 0;
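As an illustrative sketch of the pattern used for bo->moving above (the
helper name and the slot parameter are assumptions, not part of the
patch):

/* Sketch: replace a cached exclusive-fence pointer with a
 * singleton of the whole current reservation state.
 */
static int example_refresh_cached_fence(struct dma_resv *resv,
					struct dma_fence **slot)
{
	dma_fence_put(*slot);	/* dma_fence_put() tolerates NULL */
	*slot = NULL;
	return dma_resv_get_singleton(resv, false, slot);
}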
From patchwork Tue Nov 23 14:20:58 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634305
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 13/26] drm/radeon: stop using dma_resv_excl_fence
Date: Tue, 23 Nov 2021 15:20:58 +0100
Message-Id: <20211123142111.3885-14-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Instead use the new dma_resv_get_singleton function.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/radeon/radeon_display.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index 573154268d43..a6f875118f01 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -533,7 +533,12 @@ static int radeon_crtc_page_flip_target(struct drm_crtc *crtc,
 		DRM_ERROR("failed to pin new rbo buffer before flip\n");
 		goto cleanup;
 	}
-	work->fence = dma_fence_get(dma_resv_excl_fence(new_rbo->tbo.base.resv));
+	r = dma_resv_get_singleton(new_rbo->tbo.base.resv, false, &work->fence);
+	if (r) {
+		radeon_bo_unreserve(new_rbo);
+		DRM_ERROR("failed to get new rbo buffer fences\n");
+		goto cleanup;
+	}
 	radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL);
 	radeon_bo_unreserve(new_rbo);
From patchwork Tue Nov 23 14:20:59 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634307
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 14/26] drm/amdgpu: remove excl as shared workarounds
Date: Tue, 23 Nov 2021 15:20:59 +0100
Message-Id: <20211123142111.3885-15-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

These workarounds were added because of the now dropped requirement
that shared fences must never signal before the exclusive one.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 5 +----
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 6 ------
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 0311d799a010..53e407ea4c89 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1275,14 +1275,11 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 		/*
 		 * Work around dma_resv shortcommings by wrapping up the
 		 * submission in a dma_fence_chain and add it as exclusive
-		 * fence, but first add the submission as shared fence to make
-		 * sure that shared fences never signal before the exclusive
-		 * one.
+		 * fence.
 		 */
 		dma_fence_chain_init(chain, dma_resv_excl_fence(resv),
 				     dma_fence_get(p->fence), 1);

-		dma_resv_add_shared_fence(resv, p->fence);
 		rcu_assign_pointer(resv->fence_excl, &chain->base);
 		e->chain = NULL;
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index a1e63ba4c54a..85d31d85c384 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -226,12 +226,6 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
 	if (!amdgpu_vm_ready(vm))
 		goto out_unlock;

-	fence = dma_resv_excl_fence(bo->tbo.base.resv);
-	if (fence) {
-		amdgpu_bo_fence(bo, fence, true);
-		fence = NULL;
-	}
-
 	r = amdgpu_vm_clear_freed(adev, vm, &fence);
 	if (r || !fence)
 		goto out_unlock;
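For illustration, a sketch of the dma_fence_chain construction this
workaround keeps; example_chain_fences, prev_excl and submit_fence are
assumed names, not code from the patch:

/* Sketch: wrap a new submission fence and the previous exclusive
 * fence into one chain node. Ownership of prev_excl moves to the
 * chain; the extra reference on submit_fence is taken explicitly.
 */
static struct dma_fence *example_chain_fences(struct dma_fence *prev_excl,
					      struct dma_fence *submit_fence)
{
	struct dma_fence_chain *chain = dma_fence_chain_alloc();

	if (!chain)
		return NULL;

	dma_fence_chain_init(chain, prev_excl,
			     dma_fence_get(submit_fence), 1);
	/* &chain->base signals only once both fences have signaled */
	return &chain->base;
}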
From patchwork Tue Nov 23 14:21:00 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634309
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 15/26] drm/amdgpu: use dma_resv_for_each_fence for CS workaround
Date: Tue, 23 Nov 2021 15:21:00 +0100
Message-Id: <20211123142111.3885-16-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Get the write fence using dma_resv_for_each_fence() instead of
accessing it manually.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 53e407ea4c89..7facd614e50a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1268,6 +1268,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
 		struct dma_resv *resv = e->tv.bo->base.resv;
 		struct dma_fence_chain *chain = e->chain;
+		struct dma_resv_iter cursor;
+		struct dma_fence *fence;

 		if (!chain)
 			continue;
@@ -1277,9 +1279,10 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 		 * submission in a dma_fence_chain and add it as exclusive
 		 * fence.
 		 */
-		dma_fence_chain_init(chain, dma_resv_excl_fence(resv),
-				     dma_fence_get(p->fence), 1);
-
+		dma_resv_for_each_fence(&cursor, resv, false, fence) {
+			break;
+		}
+		dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1);
 		rcu_assign_pointer(resv->fence_excl, &chain->base);
 		e->chain = NULL;
 	}
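For illustration, the locked-iterator usage in isolation; only the
iteration pattern comes from the patch, the function name and pr_debug
body are assumptions:

/* Sketch: walk the write fences of a locked reservation object.
 * all_fences=false restricts the walk to the exclusive (write)
 * fence, matching the amdgpu usage above.
 */
static void example_dump_write_fences(struct dma_resv *resv)
{
	struct dma_resv_iter cursor;
	struct dma_fence *fence;

	dma_resv_for_each_fence(&cursor, resv, false, fence) {
		pr_debug("fence context %llu, seqno %llu\n",
			 fence->context, fence->seqno);
	}
}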
From patchwork Tue Nov 23 14:21:01 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634311
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 16/26] dma-buf: finally make dma_resv_excl_fence private
Date: Tue, 23 Nov 2021 15:21:01 +0100
Message-Id: <20211123142111.3885-17-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Drivers should never touch this directly.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 17 +++++++++++++++++
 include/linux/dma-resv.h   | 17 -----------------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index f91ca023b550..539b9b1df640 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -175,6 +175,23 @@ void dma_resv_fini(struct dma_resv *obj)
 }
 EXPORT_SYMBOL(dma_resv_fini);

+/**
+ * dma_resv_excl_fence - return the object's exclusive fence
+ * @obj: the reservation object
+ *
+ * Returns the exclusive fence (if any). Caller must either hold the objects
+ * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(),
+ * or one of the variants of each
+ *
+ * RETURNS
+ * The exclusive fence or NULL
+ */
+static inline struct dma_fence *
+dma_resv_excl_fence(struct dma_resv *obj)
+{
+	return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj));
+}
+
 /**
  * dma_resv_shared_list - get the reservation object's shared fence list
  * @obj: the reservation object
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 082f77b7bc63..062571c04bca 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -412,23 +412,6 @@ static inline void dma_resv_unlock(struct dma_resv *obj)
 	ww_mutex_unlock(&obj->lock);
 }

-/**
- * dma_resv_excl_fence - return the object's exclusive fence
- * @obj: the reservation object
- *
- * Returns the exclusive fence (if any). Caller must either hold the objects
- * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(),
- * or one of the variants of each
- *
- * RETURNS
- * The exclusive fence or NULL
- */
-static inline struct dma_fence *
-dma_resv_excl_fence(struct dma_resv *obj)
-{
-	return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj));
-}
-
 void dma_resv_init(struct dma_resv *obj);
 void dma_resv_fini(struct dma_resv *obj);
 int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
From patchwork Tue Nov 23 14:21:02 2021
X-Patchwork-Submitter: Christian König <christian.koenig@amd.com>
X-Patchwork-Id: 12634313
From: Christian König <christian.koenig@amd.com>
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 17/26] dma-buf: drop the DAG approach for the dma_resv object
Date: Tue, 23 Nov 2021 15:21:02 +0100
Message-Id: <20211123142111.3885-18-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

So far we had the approach of using a directed acyclic graph (DAG) with
the dma_resv object. This turned out to have many downsides; in
particular, every single driver and user of this interface needs to be
aware of the restriction when adding fences. If the rules for the DAG
are not followed we end up with potentially hard-to-debug memory
corruption, information leaks or even huge security holes, because we
allow userspace to access freed-up memory.

Since we already took a step back from that by always looking at all
fences, we now go a step further and stop dropping the shared fences
when a new exclusive one is added.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 539b9b1df640..3b0001c5ff3a 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -411,29 +411,17 @@ EXPORT_SYMBOL(dma_resv_replace_fences);
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
 	struct dma_fence *old_fence = dma_resv_excl_fence(obj);
-	struct dma_resv_list *old;
-	u32 i = 0;

 	dma_resv_assert_held(obj);

-	old = dma_resv_shared_list(obj);
-	if (old)
-		i = old->shared_count;
-
 	dma_fence_get(fence);

 	write_seqcount_begin(&obj->seq);
 	/* write_seqcount_begin provides the necessary memory barrier */
 	RCU_INIT_POINTER(obj->fence_excl, fence);
-	if (old)
-		old->shared_count = 0;
+	dma_resv_list_prune(dma_resv_shared_list(obj), obj);
 	write_seqcount_end(&obj->seq);

-	/* inplace update, no shared fences */
-	while (i--)
-		dma_fence_put(rcu_dereference_protected(old->shared[i],
-							dma_resv_held(obj)));
-
 	dma_fence_put(old_fence);
 }
 EXPORT_SYMBOL(dma_resv_add_excl_fence);
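A minimal sketch of the resulting semantics, assuming a locked object
with fence slots already reserved; the function name, read_fence and
write_fence are illustrative:

static void example_add_fences(struct dma_resv *resv,
			       struct dma_fence *read_fence,
			       struct dma_fence *write_fence)
{
	if (dma_resv_lock(resv, NULL))
		return;

	dma_resv_add_shared_fence(resv, read_fence);
	dma_resv_add_excl_fence(resv, write_fence);
	/* with this change an unsignaled read_fence stays visible to
	 * dma_resv_get_fences(resv, true, ...); only signaled shared
	 * fences get pruned
	 */
	dma_resv_unlock(resv);
}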
[IPv6:2a00:1450:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CBE56C061574 for ; Tue, 23 Nov 2021 06:21:47 -0800 (PST) Received: by mail-wr1-x42d.google.com with SMTP id a18so2698405wrn.6 for ; Tue, 23 Nov 2021 06:21:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vWCwx9Su+VRnoL1JlAwGcp1087mputsOyY+fssc6IfU=; b=HuhTw0grcBHeINXCILZO2ql2nDB3Ux5faafaeqZ6CHfr855Jy8dpUDSukIOArXAAwL biEHyo8EU2NpTqXyFe1Ixdqem4RVCir6XEv0glqr79A5Njk9/wGcrPGLsvqPfr3Nre7h TxxXvx+k2KTCB5qsLNQNztiDWuZQRn2hZ+/Nerikq0TenooJa4S76k10NMDKUbC8DgZN UfrDCN+wciPDJ1h+DCAeK5OI5NmxOT5z3YTDM3oYMAHEG5AdC0TGQcj0a2L833X42dhC 1eQSuzsPhm9tnV7s6tfTWhLDMf7DrFanRiiQVnAK6mSQTx67bJz3sjQjSaor7WKKbxB8 YbkQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vWCwx9Su+VRnoL1JlAwGcp1087mputsOyY+fssc6IfU=; b=1hjlHly1lsrd5sa93untQBkiOTpQtALhiaFdeR0Tjo2E2NKTk7NFxhug669DrIrrnA 5h0SZM8x0NimEwAyDEjdfAt7WD2pkOW/gsjnK8t6PdsFCbmrPjOnjUbvMK3qW/m0prNO UQKUhfgxk+ei2Gw7XL9I4nMBBXFFhRQetlFCKYOUGm7/3lukBiEGAyruHaBzGraqqMY1 gGyAFQ7wp1BIQX5tcUZs76mqhrF4MG72adXFwBdZCIW2u+hKibYHuouTzAhXoZR4O1JN 2ZJ645+df5HpbwB8av+Z+skV+c3TapX2CATNyAN4yAyzX6F6U6ncEcJMqnPCRNaMOBBV DSeA== X-Gm-Message-State: AOAM531rRM8/Ff/3FMzHdBPZJmtVzibVSlA9jeQ55QwG9HKZUKxfQOcO A2Bod57z89TGddyxqql4WEM= X-Google-Smtp-Source: ABdhPJx4MnuwdWj7TaMedc/latmqYtoZGDWuUerO2x4hWGVCJ26qK0auSMMPdZg+F6MOMj75lgZ3eA== X-Received: by 2002:a5d:6043:: with SMTP id j3mr7511469wrt.375.1637677306387; Tue, 23 Nov 2021 06:21:46 -0800 (PST) Received: from abel.fritz.box (p57b0b77b.dip0.t-ipconnect.de. [87.176.183.123]) by smtp.gmail.com with ESMTPSA id t8sm1645928wmq.32.2021.11.23.06.21.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 23 Nov 2021 06:21:46 -0800 (PST) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: sumit.semwal@linaro.org, daniel@ffwll.ch Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH 18/26] dma-buf/drivers: make reserving a shared slot mandatory Date: Tue, 23 Nov 2021 15:21:03 +0100 Message-Id: <20211123142111.3885-19-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com> References: <20211123142111.3885-1-christian.koenig@amd.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Audit all the users of dma_resv_add_excl_fence() and make sure they reserve a shared slot also when only trying to add an exclusive fence. This is the next step towards handling the exclusive fence like a shared one. 
Signed-off-by: Christian König --- drivers/dma-buf/st-dma-resv.c | 64 +++++++++---------- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 8 +++ drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 8 +-- drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 3 +- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 +-- .../drm/i915/gem/selftests/i915_gem_migrate.c | 5 +- drivers/gpu/drm/i915/i915_vma.c | 6 ++ .../drm/i915/selftests/intel_memory_region.c | 7 ++ drivers/gpu/drm/lima/lima_gem.c | 10 ++- drivers/gpu/drm/msm/msm_gem_submit.c | 18 +++--- drivers/gpu/drm/nouveau/nouveau_fence.c | 9 +-- drivers/gpu/drm/panfrost/panfrost_job.c | 4 ++ drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++- drivers/gpu/drm/ttm/ttm_execbuf_util.c | 11 ++-- drivers/gpu/drm/v3d/v3d_gem.c | 15 +++-- drivers/gpu/drm/vgem/vgem_fence.c | 12 ++-- drivers/gpu/drm/virtio/virtgpu_gem.c | 9 +++ drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 16 +++-- 18 files changed, 133 insertions(+), 92 deletions(-) diff --git a/drivers/dma-buf/st-dma-resv.c b/drivers/dma-buf/st-dma-resv.c index cbe999c6e7a6..f33bafc78693 100644 --- a/drivers/dma-buf/st-dma-resv.c +++ b/drivers/dma-buf/st-dma-resv.c @@ -75,17 +75,16 @@ static int test_signaling(void *arg, bool shared) goto err_free; } - if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - goto err_unlock; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + goto err_unlock; + } + if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - } if (dma_resv_test_signaled(&resv, shared)) { pr_err("Resv unexpectedly signaled\n"); @@ -134,17 +133,16 @@ static int test_for_each(void *arg, bool shared) goto err_free; } - if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - goto err_unlock; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + goto err_unlock; + } + if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - } r = -ENOENT; dma_resv_for_each_fence(&cursor, &resv, shared, fence) { @@ -206,18 +204,17 @@ static int test_for_each_unlocked(void *arg, bool shared) goto err_free; } - if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - dma_resv_unlock(&resv); - goto err_free; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + dma_resv_unlock(&resv); + goto err_free; + } + if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - } dma_resv_unlock(&resv); r = -ENOENT; @@ -290,18 +287,17 @@ static int test_get_fences(void *arg, bool shared) goto err_resv; } - if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - dma_resv_unlock(&resv); - goto err_resv; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + dma_resv_unlock(&resv); + goto err_resv; + } + if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - } dma_resv_unlock(&resv); r = dma_resv_get_fences(&resv, shared, &i, &fences); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index 4fcfc2313b8c..1becd4e7e463 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -1367,6 +1367,14 @@ void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence, bool shared) { struct dma_resv *resv = bo->tbo.base.resv; + int r; + + r = dma_resv_reserve_fences(resv, 1); + if (r) { + /* As last resort on OOM we block for the fence */ + dma_fence_wait(fence, false); + return; + } if (shared) dma_resv_add_shared_fence(resv, fence); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 4286dc93fdaa..d4a7073190ec 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -179,11 +179,9 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; struct dma_resv *robj = bo->obj->base.resv; - if (!(bo->flags & ETNA_SUBMIT_BO_WRITE)) { - ret = dma_resv_reserve_shared(robj, 1); - if (ret) - return ret; - } + ret = dma_resv_reserve_shared(robj, 1); + if (ret) + return ret; if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) continue; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c index f0435c6feb68..fc57ab914b60 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c @@ -100,7 +100,8 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, trace_i915_gem_object_clflush(obj); clflush = NULL; - if (!(flags & I915_CLFLUSH_SYNC)) + if (!(flags & I915_CLFLUSH_SYNC) && + dma_resv_reserve_shared(obj->base.resv, 1) == 0) clflush = clflush_work_create(obj); if (clflush) { i915_sw_fence_await_reservation(&clflush->base.chain, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 4d7da07442f2..fc0e1625847c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -989,11 +989,9 @@ static int eb_validate_vmas(struct i915_execbuffer *eb) } } - if (!(ev->flags & EXEC_OBJECT_WRITE)) { - err = dma_resv_reserve_shared(vma->resv, 1); - if (err) - return err; - } + err = dma_resv_reserve_shared(vma->resv, 1); + if (err) + return err; GEM_BUG_ON(drm_mm_node_allocated(&vma->node) && eb_vma_misplaced(&eb->exec[i], vma, ev->flags)); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c index 28a700f08b49..2bf491fd5cdf 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c @@ -179,7 +179,10 @@ static int igt_lmem_pages_migrate(void *arg) i915_gem_object_is_lmem(obj), 0xdeadbeaf, &rq); if (rq) { - dma_resv_add_excl_fence(obj->base.resv, &rq->fence); + err = dma_resv_reserve_shared(obj->base.resv, 1); + if (!err) + dma_resv_add_excl_fence(obj->base.resv, + &rq->fence); i915_request_put(rq); } if (err) diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index bef795e265a6..5ec87de63963 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -1255,6 +1255,12 @@ int _i915_vma_move_to_active(struct i915_vma *vma, intel_frontbuffer_put(front); } + if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { + err = dma_resv_reserve_shared(vma->resv, 1); + if (unlikely(err)) + return err; + } + if (fence) { dma_resv_add_excl_fence(vma->resv, fence); obj->write_domain = I915_GEM_DOMAIN_RENDER; diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c 
b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 418caae84759..b85af1672a7e 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -894,6 +894,13 @@ static int igt_lmem_write_cpu(void *arg) } i915_gem_object_lock(obj, NULL); + + err = dma_resv_reserve_shared(obj->base.resv, 1); + if (err) { + i915_gem_object_unlock(obj); + goto out_put; + } + /* Put the pages into a known state -- from the gpu for added fun */ intel_engine_pm_get(engine); err = intel_context_migrate_clear(engine->gt->migrate.context, NULL, diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 2723d333c608..487581e2f716 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -255,13 +255,11 @@ int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset) static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, bool write, bool explicit) { - int err = 0; + int err; - if (!write) { - err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); - if (err) - return err; - } + err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); + if (err) + return err; /* explicit sync use user passed dep fence */ if (explicit) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 3cb029f10925..e874d09b74ef 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -320,16 +320,14 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) struct drm_gem_object *obj = &submit->bos[i].obj->base; bool write = submit->bos[i].flags & MSM_SUBMIT_BO_WRITE; - if (!write) { - /* NOTE: _reserve_shared() must happen before - * _add_shared_fence(), which makes this a slightly - * strange place to call it. OTOH this is a - * convenient can-fail point to hook it in. - */ - ret = dma_resv_reserve_shared(obj->resv, 1); - if (ret) - return ret; - } + /* NOTE: _reserve_shared() must happen before + * _add_shared_fence(), which makes this a slightly + * strange place to call it. OTOH this is a + * convenient can-fail point to hook it in. 
+ */ + ret = dma_resv_reserve_shared(obj->resv, 1); + if (ret) + return ret; /* exclusive fences must be ordered */ if (no_implicit && !write) diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c index 26f9299df881..cd6715bd6d6b 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fence.c +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c @@ -349,12 +349,9 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, struct nouveau_fence *f; int ret; - if (!exclusive) { - ret = dma_resv_reserve_shared(resv, 1); - - if (ret) - return ret; - } + ret = dma_resv_reserve_shared(resv, 1); + if (ret) + return ret; dma_resv_for_each_fence(&cursor, resv, exclusive, fence) { struct nouveau_channel *prev = NULL; diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 908d79520853..89c3fe389476 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -247,6 +247,10 @@ static int panfrost_acquire_object_fences(struct drm_gem_object **bos, int i, ret; for (i = 0; i < bo_count; i++) { + ret = dma_resv_reserve_shared(bos[i]->resv, 1); + if (ret) + return ret; + /* panfrost always uses write mode in its current uapi */ ret = drm_sched_job_add_implicit_dependencies(job, bos[i], true); diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 72a94301bc95..ea9eabcc0a0c 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -221,9 +221,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, fbo->base = *bo; - ttm_bo_get(bo); - fbo->bo = bo; - /** * Fix up members that we shouldn't copy directly: * TODO: Explicit member copy would probably be better here. @@ -246,6 +243,15 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, ret = dma_resv_trylock(&fbo->base.base._resv); WARN_ON(!ret); + ret = dma_resv_reserve_shared(&fbo->base.base._resv, 1); + if (ret) { + kfree(fbo); + return ret; + } + + ttm_bo_get(bo); + fbo->bo = bo; + ttm_bo_move_to_lru_tail_unlocked(&fbo->base); *new_obj = &fbo->base; diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c index 071c48d672c6..5da922639d54 100644 --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c @@ -90,6 +90,7 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, list_for_each_entry(entry, list, head) { struct ttm_buffer_object *bo = entry->bo; + unsigned int num_fences; ret = ttm_bo_reserve(bo, intr, (ticket == NULL), ticket); if (ret == -EALREADY && dups) { @@ -100,12 +101,10 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, continue; } + num_fences = min(entry->num_shared, 1u); if (!ret) { - if (!entry->num_shared) - continue; - ret = dma_resv_reserve_shared(bo->base.resv, - entry->num_shared); + num_fences); if (!ret) continue; } @@ -120,9 +119,9 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, ret = ttm_bo_reserve_slowpath(bo, intr, ticket); } - if (!ret && entry->num_shared) + if (!ret) ret = dma_resv_reserve_shared(bo->base.resv, - entry->num_shared); + num_fences); if (unlikely(ret != 0)) { if (ticket) { diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index c7ed2e1cbab6..1bea90e40ce1 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -259,16 +259,21 @@ v3d_lock_bo_reservations(struct v3d_job *job, return ret; for (i = 0; i < job->bo_count; i++) { + ret = 
+ ret = dma_resv_reserve_shared(job->bo[i]->resv, 1); + if (ret) + goto fail; + ret = drm_sched_job_add_implicit_dependencies(&job->base, job->bo[i], true); - if (ret) { - drm_gem_unlock_reservations(job->bo, job->bo_count, - acquire_ctx); - return ret; - } + if (ret) + goto fail; } return 0; + +fail: + drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx); + return ret; } /** diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c index bd6f75285fd9..a4cb296d4fcd 100644 --- a/drivers/gpu/drm/vgem/vgem_fence.c +++ b/drivers/gpu/drm/vgem/vgem_fence.c @@ -157,12 +157,14 @@ int vgem_fence_attach_ioctl(struct drm_device *dev, } /* Expose the fence via the dma-buf */ - ret = 0; dma_resv_lock(resv, NULL); - if (arg->flags & VGEM_FENCE_WRITE) - dma_resv_add_excl_fence(resv, fence); - else if ((ret = dma_resv_reserve_shared(resv, 1)) == 0) - dma_resv_add_shared_fence(resv, fence); + ret = dma_resv_reserve_shared(resv, 1); + if (!ret) { + if (arg->flags & VGEM_FENCE_WRITE) + dma_resv_add_excl_fence(resv, fence); + else + dma_resv_add_shared_fence(resv, fence); + } dma_resv_unlock(resv); /* Record the fence in our idr for later signaling */ diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 2de61b63ef91..aec105cdd64c 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -214,6 +214,7 @@ void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs, int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs) { + unsigned int i; int ret; if (objs->nents == 1) { @@ -222,6 +223,14 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs) ret = drm_gem_lock_reservations(objs->objs, objs->nents, &objs->ticket); } + if (ret) + return ret; + + for (i = 0; i < objs->nents; ++i) { + ret = dma_resv_reserve_shared(objs->objs[i]->resv, 1); + if (ret) + return ret; + } return ret; } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index fd007f1c1776..f81767f0a5cc 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -1053,16 +1053,22 @@ void vmw_bo_fence_single(struct ttm_buffer_object *bo, struct vmw_fence_obj *fence) { struct ttm_device *bdev = bo->bdev; - struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev); + int ret; - if (fence == NULL) { + if (fence == NULL) vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL); + else + dma_fence_get(&fence->base); + + ret = dma_resv_reserve_shared(bo->base.resv, 1); + if (!ret) dma_resv_add_excl_fence(bo->base.resv, &fence->base); - dma_fence_put(&fence->base); - } else - dma_resv_add_excl_fence(bo->base.resv, &fence->base); + else + /* Last resort fallback when we are OOM */ + dma_fence_wait(&fence->base, false); + dma_fence_put(&fence->base); }
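The change above repeats one shape across every driver: dma_resv_reserve_shared() is now called unconditionally before any fence is added, so reserving the slot is the only step that can fail, and adding the fence itself can no longer fail afterwards. A minimal sketch of that calling convention in kernel context; my_attach_fence() and its parameters are illustrative names, only the dma_resv_*() calls are the real API:

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

static int my_attach_fence(struct drm_gem_object *obj,
			   struct dma_fence *fence, bool write)
{
	int ret;

	dma_resv_assert_held(obj->resv);

	/* Reserve the shared slot first; this is the only failing step. */
	ret = dma_resv_reserve_shared(obj->resv, 1);
	if (ret)
		return ret;

	/* Adding the fence itself cannot fail any more. */
	if (write)
		dma_resv_add_excl_fence(obj->resv, fence);
	else
		dma_resv_add_shared_fence(obj->resv, fence);
	return 0;
}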
"EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237636AbhKWOY6 (ORCPT ); Tue, 23 Nov 2021 09:24:58 -0500 Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com [IPv6:2a00:1450:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 167A8C061574 for ; Tue, 23 Nov 2021 06:21:50 -0800 (PST) Received: by mail-wm1-x332.google.com with SMTP id o29so18892770wms.2 for ; Tue, 23 Nov 2021 06:21:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xR1nHIQR+HY5rurJTXhG523nepCCoi6AKz+ZxgPV9e4=; b=BTqipD6axEdvFS1qNdyue067ov95lW4E2a6zp708LmkVTtkeerVpwqnJhPFVrfumv5 xuFk26cb70nj8GVHRLOG39Vvc7V/P2EZdcXBxNZNOD2NsKSknt5c27AARzTevTtm2l+v IzcyNOSovh/8bHnvDDoZQM60pa9MIUQBtVINjJqOtiAhluOxMhPtqxK89mdvMu+vOyFX JLWKvlTQxKjNKHvhQVl8vfh+LMGXeWTb0Gz95PfKUPbVp58UHNSo96bX6Nn8H6Oh6mKO 1GyeZKiYXijk1nFVpBzuBdxGOBTnm7lFJbmry3asbr57LXcqvIAS3Wp4y6jmjODSdzFf KGCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xR1nHIQR+HY5rurJTXhG523nepCCoi6AKz+ZxgPV9e4=; b=h7WLVxhQyqffgQWn8Rix+hKSKg5ZRuBr2YdLWpkwHdAAs8r13Quf/imaIUydfMO1YX CeXgpkXlonq3xZA3i8biOWfedATmeTY0AxtNmS4RT1lD0Vd3yH6Z8kTpp1eKhmHq7ZFm pfQRKiAORCY+728S6EYUuWqa4ri2u0ozENj1kbSUqnWtned1i0l0O26h5txJbxLujIIV jE5llTnKRa6t4XExHQxw4mGqj978UzLwaOd0eHeZx50CCasvWL3fU0JaoSLnTyz+Uewc GEdX42PsUabDFseNVZcOGTSVrqOb/g4BlXBKOab9k2PDNrifB58ujsiCTvQ245V26ibU s0RQ== X-Gm-Message-State: AOAM533WXxal6bTb8S4VKFfgRs5nfU0Pc8QcGkHDwEbXRgL9W9H/1Sm5 gnCQRL4fSmWCjDeBB3Iv+AUE39AJ2RU= X-Google-Smtp-Source: ABdhPJwsYTfsgAB3lOAVoiiFf/7GcQC8rbzKcBIGp6cEfhtBHCNSmGTA2nFXLk/Ck8OjVaGW+Vb6rg== X-Received: by 2002:a7b:c444:: with SMTP id l4mr3443925wmi.115.1637677308688; Tue, 23 Nov 2021 06:21:48 -0800 (PST) Received: from abel.fritz.box (p57b0b77b.dip0.t-ipconnect.de. [87.176.183.123]) by smtp.gmail.com with ESMTPSA id t8sm1645928wmq.32.2021.11.23.06.21.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 23 Nov 2021 06:21:48 -0800 (PST) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: sumit.semwal@linaro.org, daniel@ffwll.ch Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH 19/26] drm: support more than one write fence in drm_gem_plane_helper_prepare_fb Date: Tue, 23 Nov 2021 15:21:04 +0100 Message-Id: <20211123142111.3885-20-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com> References: <20211123142111.3885-1-christian.koenig@amd.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Use dma_resv_get_singleton() here to eventually get more than one write fence as single fence. 
Signed-off-by: Christian König --- drivers/gpu/drm/drm_gem_atomic_helper.c | 18 +++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c index c3189afe10cb..9338ddb7edff 100644 --- a/drivers/gpu/drm/drm_gem_atomic_helper.c +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c @@ -143,25 +143,21 @@ */ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) { - struct dma_resv_iter cursor; struct drm_gem_object *obj; struct dma_fence *fence; + int ret; if (!state->fb) return 0; obj = drm_gem_fb_get_obj(state->fb, 0); - dma_resv_iter_begin(&cursor, obj->resv, false); - dma_resv_for_each_fence_unlocked(&cursor, fence) { - /* TODO: Currently there should be only one write fence, so this - * here works fine. But drm_atomic_set_fence_for_plane() should - * be changed to be able to handle more fences in general for - * multiple BOs per fb anyway. */ - dma_fence_get(fence); - break; - } - dma_resv_iter_end(&cursor); + ret = dma_resv_get_singleton(obj->resv, false, &fence); + if (ret) + return ret; + /* TODO: drm_atomic_set_fence_for_plane() should be changed to be able + * to handle more fences in general for multiple BOs per fb. + */ drm_atomic_set_fence_for_plane(state, fence); return 0; }
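For callers, dma_resv_get_singleton() collapses the relevant fences into at most one: on success *fence is NULL when nothing matched, the single fence when there was exactly one, or a dma_fence_array merged from all of them, and the caller owns a reference that must eventually be dropped. A hedged sketch of the pattern (my_wait_for_writes() is a hypothetical wrapper, the dma_resv and dma_fence calls are the real ones):

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

static long my_wait_for_writes(struct dma_resv *resv)
{
	struct dma_fence *fence;
	long ret;

	/* false: just the exclusive (write side) fence at this point in the series */
	ret = dma_resv_get_singleton(resv, false, &fence);
	if (ret)
		return ret;	/* typically -ENOMEM */

	if (!fence)
		return 0;	/* no writer, nothing to wait for */

	ret = dma_fence_wait(fence, true);	/* 0 or -ERESTARTSYS */
	dma_fence_put(fence);			/* drop the reference we were handed */
	return ret;
}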
From patchwork Tue Nov 23 14:21:05 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12634319
From: Christian König
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 20/26] drm/nouveau: support more than one write fence in nv50_wndw_prepare_fb
Date: Tue, 23 Nov 2021 15:21:05 +0100
Message-Id: <20211123142111.3885-21-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Use dma_resv_get_singleton() here to eventually get more than one write fence as a single fence.

Signed-off-by: Christian König --- drivers/gpu/drm/nouveau/dispnv50/wndw.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c index 133c8736426a..b55a8a723581 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c @@ -536,8 +536,6 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) struct nouveau_bo *nvbo; struct nv50_head_atom *asyh; struct nv50_wndw_ctxdma *ctxdma; - struct dma_resv_iter cursor; - struct dma_fence *fence; int ret; NV_ATOMIC(drm, "%s prepare: %p\n", plane->name, fb); @@ -560,13 +558,11 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) asyw->image.handle[0] = ctxdma->object.handle; } - dma_resv_iter_begin(&cursor, nvbo->bo.base.resv, false); - dma_resv_for_each_fence_unlocked(&cursor, fence) { - /* TODO: We only use the first writer here */ - asyw->state.fence = dma_fence_get(fence); - break; - } - dma_resv_iter_end(&cursor); + ret = dma_resv_get_singleton(nvbo->bo.base.resv, false, + &asyw->state.fence); + if (ret) + return ret; + asyw->image.offset[0] = nvbo->offset; if (wndw->func->prepare) {
From patchwork Tue Nov 23 14:21:06 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12634321
From: Christian König
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 21/26] drm/amdgpu: use dma_resv_get_singleton in amdgpu_pasid_free_cb
Date: Tue, 23 Nov 2021 15:21:06 +0100
Message-Id: <20211123142111.3885-22-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

Makes the code a bit simpler.
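The block removed below is essentially an open-coded version of what dma_resv_get_singleton() now does internally: fetch all fences and, when there is more than one, fold them into a dma_fence_array. For reference, a condensed sketch of that merge step (my_merge_fences() is an illustrative name, simplified from the code being deleted):

#include <linux/dma-fence-array.h>

/* Merge count fences into one; the array takes over fences[] on success. */
static struct dma_fence *my_merge_fences(struct dma_fence **fences,
					 unsigned int count)
{
	struct dma_fence_array *array;

	if (count == 1)
		return fences[0];

	array = dma_fence_array_create(count, fences,
				       dma_fence_context_alloc(1),
				       1, false);
	if (!array)
		return NULL;	/* caller keeps ownership of fences[] */

	return &array->base;
}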
Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 23 +++-------------------- 1 file changed, 3 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c index be48487e2ca7..888d97143177 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c @@ -107,36 +107,19 @@ static void amdgpu_pasid_free_cb(struct dma_fence *fence, void amdgpu_pasid_free_delayed(struct dma_resv *resv, u32 pasid) { - struct dma_fence *fence, **fences; struct amdgpu_pasid_cb *cb; - unsigned count; + struct dma_fence *fence; int r; - r = dma_resv_get_fences(resv, true, &count, &fences); + r = dma_resv_get_singleton(resv, true, &fence); if (r) goto fallback; - if (count == 0) { + if (!fence) { amdgpu_pasid_free(pasid); return; } - if (count == 1) { - fence = fences[0]; - kfree(fences); - } else { - uint64_t context = dma_fence_context_alloc(1); - struct dma_fence_array *array; - - array = dma_fence_array_create(count, fences, context, - 1, false); - if (!array) { - kfree(fences); - goto fallback; - } - fence = &array->base; - } - cb = kmalloc(sizeof(*cb), GFP_KERNEL); if (!cb) { /* Last resort when we are OOM */
From patchwork Tue Nov 23 14:21:07 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12634323
From: Christian König
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 22/26] dma-buf: add enum dma_resv_usage
Date: Tue, 23 Nov 2021 15:21:07 +0100
Message-Id: <20211123142111.3885-23-christian.koenig@amd.com>
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>

This change adds the dma_resv_usage enum and allows us to specify why a dma_resv object is queried for its containing fences. In addition, a dma_resv_usage_rw() helper function is added to aid retrieving the fences for a read or write userspace submission. This is then deployed to the different query functions of the dma_resv object and all of their users.
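The enum is ordered KERNEL < WRITE < READ < OTHER, and asking for a given usage also returns every fence with a lower usage. dma_resv_usage_rw() maps the implicit-sync direction onto that scale: a new write has to wait for all existing readers and writers, while a new read only has to wait for the writers. A short sketch of a caller after this patch (my_cpu_access_wait() is an illustrative name, the signatures are the ones introduced below):

#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Illustrative helper: wait for implicit-sync fences before CPU access. */
static long my_cpu_access_wait(struct dma_resv *resv, bool write)
{
	/* write: waits at READ level, which includes WRITE and KERNEL;
	 * read: waits at WRITE level, which includes KERNEL.
	 */
	enum dma_resv_usage usage = dma_resv_usage_rw(write);

	return dma_resv_wait_timeout(resv, usage, true,
				     MAX_SCHEDULE_TIMEOUT);
}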
Signed-off-by: Christian König --- drivers/dma-buf/dma-buf.c | 3 +- drivers/dma-buf/dma-resv.c | 33 +++--- drivers/dma-buf/st-dma-resv.c | 48 ++++---- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 +- drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 4 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 4 +- drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 7 +- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 +- drivers/gpu/drm/drm_gem.c | 6 +- drivers/gpu/drm/drm_gem_atomic_helper.c | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 6 +- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 10 +- .../gpu/drm/i915/display/intel_atomic_plane.c | 3 +- drivers/gpu/drm/i915/gem/i915_gem_busy.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_wait.c | 6 +- .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 3 +- drivers/gpu/drm/i915/i915_request.c | 3 +- drivers/gpu/drm/i915/i915_sw_fence.c | 2 +- drivers/gpu/drm/msm/msm_gem.c | 3 +- drivers/gpu/drm/nouveau/dispnv50/wndw.c | 3 +- drivers/gpu/drm/nouveau/nouveau_bo.c | 8 +- drivers/gpu/drm/nouveau/nouveau_fence.c | 3 +- drivers/gpu/drm/nouveau/nouveau_gem.c | 3 +- drivers/gpu/drm/panfrost/panfrost_drv.c | 3 +- drivers/gpu/drm/qxl/qxl_debugfs.c | 3 +- drivers/gpu/drm/radeon/radeon_display.c | 3 +- drivers/gpu/drm/radeon/radeon_gem.c | 9 +- drivers/gpu/drm/radeon/radeon_mn.c | 4 +- drivers/gpu/drm/radeon/radeon_sync.c | 2 +- drivers/gpu/drm/radeon/radeon_uvd.c | 4 +- drivers/gpu/drm/scheduler/sched_main.c | 3 +- drivers/gpu/drm/ttm/ttm_bo.c | 18 +-- drivers/gpu/drm/vgem/vgem_fence.c | 4 +- drivers/gpu/drm/virtio/virtgpu_ioctl.c | 5 +- drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 4 +- drivers/infiniband/core/umem_dmabuf.c | 3 +- include/linux/dma-resv.h | 106 +++++++++++++++--- 46 files changed, 244 insertions(+), 126 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 602b12d7470d..528983d3ba64 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1124,7 +1124,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf, long ret; /* Wait on any implicit rendering fences */ - ret = dma_resv_wait_timeout(resv, write, true, MAX_SCHEDULE_TIMEOUT); + ret = dma_resv_wait_timeout(resv, dma_resv_usage_rw(write), + true, MAX_SCHEDULE_TIMEOUT); if (ret < 0) return ret; diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 3b0001c5ff3a..7ef8182a4b59 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -473,7 +473,7 @@ static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor) cursor->seq = read_seqcount_begin(&cursor->obj->seq); cursor->index = -1; cursor->shared_count = 0; - if (cursor->all_fences) { + if (cursor->usage >= DMA_RESV_USAGE_READ) { cursor->fences = dma_resv_shared_list(cursor->obj); if (cursor->fences) cursor->shared_count = cursor->fences->shared_count; @@ -580,7 +580,7 @@ struct dma_fence *dma_resv_iter_first(struct dma_resv_iter *cursor) dma_resv_assert_held(cursor->obj); cursor->index = 0; - if (cursor->all_fences) + if (cursor->usage >= DMA_RESV_USAGE_READ) cursor->fences = dma_resv_shared_list(cursor->obj); else cursor->fences = NULL; @@ -635,7 +635,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src) list = NULL; excl = NULL; - dma_resv_iter_begin(&cursor, src, true); + dma_resv_iter_begin(&cursor, src, DMA_RESV_USAGE_OTHER); dma_resv_for_each_fence_unlocked(&cursor, f) { if (dma_resv_iter_is_restarted(&cursor)) { @@ -681,7 +681,7 @@ EXPORT_SYMBOL(dma_resv_copy_fences); * dma_resv_get_fences - Get an object's shared and exclusive * fences without update side lock held * @obj: the reservation object - * @write: true if we should return all fences + * @usage: controls which fences to include * @num_fences: the number of fences returned * @fences: the array of fence ptrs returned (array is krealloc'd to the * required size, and must be freed by caller) @@ -689,7 +689,7 @@ EXPORT_SYMBOL(dma_resv_copy_fences); * Retrieve all fences from the reservation object. * Returns either zero or -ENOMEM. */ -int dma_resv_get_fences(struct dma_resv *obj, bool write, +int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage, unsigned int *num_fences, struct dma_fence ***fences) { struct dma_resv_iter cursor; @@ -698,7 +698,7 @@ int dma_resv_get_fences(struct dma_resv *obj, bool write, *num_fences = 0; *fences = NULL; - dma_resv_iter_begin(&cursor, obj, write); + dma_resv_iter_begin(&cursor, obj, usage); dma_resv_for_each_fence_unlocked(&cursor, fence) { if (dma_resv_iter_is_restarted(&cursor)) { @@ -730,7 +730,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences); /** * dma_resv_get_singleton - Get a single fence for all the fences * @obj: the reservation object - * @write: true if we should return all fences + * @usage: controls which fences to include * @fence: the resulting fence * * Get a single fence representing all the fences inside the resv object.
@@ -740,7 +740,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences); * object since that can lead to stack corruption when finalizing the * dma_fence_array. */ -int dma_resv_get_singleton(struct dma_resv *obj, bool write, +int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage, struct dma_fence **fence) { struct dma_fence_array *array; @@ -748,7 +748,7 @@ int dma_resv_get_singleton(struct dma_resv *obj, bool write, unsigned count; int r; - r = dma_resv_get_fences(obj, write, &count, &fences); + r = dma_resv_get_fences(obj, usage, &count, &fences); if (r) return r; @@ -780,7 +780,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_singleton); * dma_resv_wait_timeout - Wait on reservation's objects * shared and/or exclusive fences. * @obj: the reservation object - * @wait_all: if true, wait on all fences, else wait on just exclusive fence + * @usage: controls which fences to include in the wait * @intr: if true, do interruptible wait * @timeout: timeout value in jiffies or zero to return immediately * @@ -790,14 +790,14 @@ EXPORT_SYMBOL_GPL(dma_resv_get_singleton); * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or * greater than zer on success. */ -long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr, - unsigned long timeout) +long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage, + bool intr, unsigned long timeout) { long ret = timeout ? timeout : 1; struct dma_resv_iter cursor; struct dma_fence *fence; - dma_resv_iter_begin(&cursor, obj, wait_all); + dma_resv_iter_begin(&cursor, obj, usage); dma_resv_for_each_fence_unlocked(&cursor, fence) { ret = dma_fence_wait_timeout(fence, intr, ret); @@ -817,8 +817,7 @@ EXPORT_SYMBOL_GPL(dma_resv_wait_timeout); * dma_resv_test_signaled - Test if a reservation object's fences have been * signaled. * @obj: the reservation object - * @test_all: if true, test all fences, otherwise only test the exclusive - * fence + * @usage: controls which fences to include in the test * * Callers are not required to hold specific locks, but maybe hold * dma_resv_lock() already. @@ -827,12 +826,12 @@ EXPORT_SYMBOL_GPL(dma_resv_wait_timeout); * * True if all fences signaled, else false. 
*/ -bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all) +bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage) { struct dma_resv_iter cursor; struct dma_fence *fence; - dma_resv_iter_begin(&cursor, obj, test_all); + dma_resv_iter_begin(&cursor, obj, usage); dma_resv_for_each_fence_unlocked(&cursor, fence) { dma_resv_iter_end(&cursor); return false; diff --git a/drivers/dma-buf/st-dma-resv.c b/drivers/dma-buf/st-dma-resv.c index f33bafc78693..a52c5fbea87a 100644 --- a/drivers/dma-buf/st-dma-resv.c +++ b/drivers/dma-buf/st-dma-resv.c @@ -58,7 +58,7 @@ static int sanitycheck(void *arg) return r; } -static int test_signaling(void *arg, bool shared) +static int test_signaling(void *arg, enum dma_resv_usage usage) { struct dma_resv resv; struct dma_fence *f; @@ -81,18 +81,18 @@ static int test_signaling(void *arg, bool shared) goto err_unlock; } - if (shared) + if (usage >= DMA_RESV_USAGE_READ) dma_resv_add_shared_fence(&resv, f); else dma_resv_add_excl_fence(&resv, f); - if (dma_resv_test_signaled(&resv, shared)) { + if (dma_resv_test_signaled(&resv, usage)) { pr_err("Resv unexpectedly signaled\n"); r = -EINVAL; goto err_unlock; } dma_fence_signal(f); - if (!dma_resv_test_signaled(&resv, shared)) { + if (!dma_resv_test_signaled(&resv, usage)) { pr_err("Resv not reporting signaled\n"); r = -EINVAL; goto err_unlock; @@ -107,15 +107,15 @@ static int test_signaling(void *arg, bool shared) static int test_excl_signaling(void *arg) { - return test_signaling(arg, false); + return test_signaling(arg, DMA_RESV_USAGE_WRITE); } static int test_shared_signaling(void *arg) { - return test_signaling(arg, true); + return test_signaling(arg, DMA_RESV_USAGE_READ); } -static int test_for_each(void *arg, bool shared) +static int test_for_each(void *arg, enum dma_resv_usage usage) { struct dma_resv_iter cursor; struct dma_fence *f, *fence; @@ -139,13 +139,13 @@ static int test_for_each(void *arg, bool shared) goto err_unlock; } - if (shared) + if (usage >= DMA_RESV_USAGE_READ) dma_resv_add_shared_fence(&resv, f); else dma_resv_add_excl_fence(&resv, f); r = -ENOENT; - dma_resv_for_each_fence(&cursor, &resv, shared, fence) { + dma_resv_for_each_fence(&cursor, &resv, usage, fence) { if (!r) { pr_err("More than one fence found\n"); r = -EINVAL; @@ -156,7 +156,8 @@ static int test_for_each(void *arg, bool shared) r = -EINVAL; goto err_unlock; } - if (dma_resv_iter_is_exclusive(&cursor) != !shared) { + if (dma_resv_iter_is_exclusive(&cursor) != + (usage >= DMA_RESV_USAGE_READ)) { pr_err("Unexpected fence usage\n"); r = -EINVAL; goto err_unlock; @@ -178,15 +179,15 @@ static int test_for_each(void *arg, bool shared) static int test_excl_for_each(void *arg) { - return test_for_each(arg, false); + return test_for_each(arg, DMA_RESV_USAGE_WRITE); } static int test_shared_for_each(void *arg) { - return test_for_each(arg, true); + return test_for_each(arg, DMA_RESV_USAGE_READ); } -static int test_for_each_unlocked(void *arg, bool shared) +static int test_for_each_unlocked(void *arg, enum dma_resv_usage usage) { struct dma_resv_iter cursor; struct dma_fence *f, *fence; @@ -211,14 +212,14 @@ static int test_for_each_unlocked(void *arg, bool shared) goto err_free; } - if (shared) + if (usage >= DMA_RESV_USAGE_READ) dma_resv_add_shared_fence(&resv, f); else dma_resv_add_excl_fence(&resv, f); dma_resv_unlock(&resv); r = -ENOENT; - dma_resv_iter_begin(&cursor, &resv, shared); + dma_resv_iter_begin(&cursor, &resv, usage); dma_resv_for_each_fence_unlocked(&cursor, fence) { if (!r) { pr_err("More 
than one fence found\n"); @@ -234,7 +235,8 @@ static int test_for_each_unlocked(void *arg, bool shared) r = -EINVAL; goto err_iter_end; } - if (dma_resv_iter_is_exclusive(&cursor) != !shared) { + if (dma_resv_iter_is_exclusive(&cursor) != + (usage >= DMA_RESV_USAGE_READ)) { pr_err("Unexpected fence usage\n"); r = -EINVAL; goto err_iter_end; @@ -262,15 +264,15 @@ static int test_for_each_unlocked(void *arg, bool shared) static int test_excl_for_each_unlocked(void *arg) { - return test_for_each_unlocked(arg, false); + return test_for_each_unlocked(arg, DMA_RESV_USAGE_WRITE); } static int test_shared_for_each_unlocked(void *arg) { - return test_for_each_unlocked(arg, true); + return test_for_each_unlocked(arg, DMA_RESV_USAGE_READ); } -static int test_get_fences(void *arg, bool shared) +static int test_get_fences(void *arg, enum dma_resv_usage usage) { struct dma_fence *f, **fences = NULL; struct dma_resv resv; @@ -294,13 +296,13 @@ static int test_get_fences(void *arg, bool shared) goto err_resv; } - if (shared) + if (usage >= DMA_RESV_USAGE_READ) dma_resv_add_shared_fence(&resv, f); else dma_resv_add_excl_fence(&resv, f); dma_resv_unlock(&resv); - r = dma_resv_get_fences(&resv, shared, &i, &fences); + r = dma_resv_get_fences(&resv, usage, &i, &fences); if (r) { pr_err("get_fences failed\n"); goto err_free; @@ -324,12 +326,12 @@ static int test_get_fences(void *arg, bool shared) static int test_excl_get_fences(void *arg) { - return test_get_fences(arg, false); + return test_get_fences(arg, DMA_RESV_USAGE_WRITE); } static int test_shared_get_fences(void *arg) { - return test_get_fences(arg, true); + return test_get_fences(arg, DMA_RESV_USAGE_READ); } int dma_resv(void) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 7facd614e50a..af0a61ce2ec7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1279,7 +1279,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, * submission in a dma_fence_chain and add it as exclusive * fence. 
*/ - dma_resv_for_each_fence(&cursor, resv, false, fence) { + dma_resv_for_each_fence(&cursor, resv, + DMA_RESV_USAGE_WRITE, + fence) { break; } dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c index d17e1c911689..c2b1208a8c7f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c @@ -200,8 +200,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc, goto unpin; } - /* TODO: Unify this with other drivers */ - r = dma_resv_get_fences(new_abo->tbo.base.resv, true, + r = dma_resv_get_fences(new_abo->tbo.base.resv, DMA_RESV_USAGE_WRITE, &work->shared_count, &work->shared); if (unlikely(r != 0)) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index 85d31d85c384..e97b3a522b36 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -526,7 +526,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data, return -ENOENT; } robj = gem_to_amdgpu_bo(gobj); - ret = dma_resv_wait_timeout(robj->tbo.base.resv, true, true, timeout); + ret = dma_resv_wait_timeout(robj->tbo.base.resv, DMA_RESV_USAGE_READ, + true, timeout); /* ret == 0 means not signaled, * ret > 0 means signaled diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c index 888d97143177..1b0bab15d5bc 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c @@ -111,7 +111,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv, struct dma_fence *fence; int r; - r = dma_resv_get_singleton(resv, true, &fence); + r = dma_resv_get_singleton(resv, DMA_RESV_USAGE_OTHER, &fence); if (r) goto fallback; @@ -139,7 +139,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv, /* Not enough memory for the delayed delete, as last resort * block for all the fences to complete. 
*/ - dma_resv_wait_timeout(resv, true, false, MAX_SCHEDULE_TIMEOUT); + dma_resv_wait_timeout(resv, DMA_RESV_USAGE_OTHER, + false, MAX_SCHEDULE_TIMEOUT); amdgpu_pasid_free(pasid); } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c index 4b153daf283d..94262805912e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni, mmu_interval_set_seq(mni, cur_seq); - r = dma_resv_wait_timeout(bo->tbo.base.resv, true, false, - MAX_SCHEDULE_TIMEOUT); + r = dma_resv_wait_timeout(bo->tbo.base.resv, DMA_RESV_USAGE_OTHER, + false, MAX_SCHEDULE_TIMEOUT); mutex_unlock(&adev->notifier_lock); if (r <= 0) DRM_ERROR("(%ld) failed to wait for user bo\n", r); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index 1becd4e7e463..e99958290782 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -764,8 +764,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr) return 0; } - r = dma_resv_wait_timeout(bo->tbo.base.resv, false, false, - MAX_SCHEDULE_TIMEOUT); + r = dma_resv_wait_timeout(bo->tbo.base.resv, DMA_RESV_USAGE_KERNEL, + false, MAX_SCHEDULE_TIMEOUT); if (r < 0) return r; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c index f7d8487799b2..bd5f38e57c6c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c @@ -259,7 +259,8 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync, if (resv == NULL) return -EINVAL; - dma_resv_for_each_fence(&cursor, resv, true, f) { + /* TODO: Use DMA_RESV_USAGE_READ here */ + dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_OTHER, f) { dma_fence_chain_for_each(f, f) { struct dma_fence_chain *chain = to_dma_fence_chain(f); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index c15687ce67c4..9be56ecaf39a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -1360,7 +1360,8 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, * If true, then return false as any KFD process needs all its BOs to * be resident to run successfully */ - dma_resv_for_each_fence(&resv_cursor, bo->base.resv, true, f) { + dma_resv_for_each_fence(&resv_cursor, bo->base.resv, + DMA_RESV_USAGE_OTHER, f) { if (amdkfd_fence_check_mm(f, current->mm)) return false; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c index 6f8de11a17f1..9e102080dad9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c @@ -1162,7 +1162,8 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo, ib->length_dw = 16; if (direct) { - r = dma_resv_wait_timeout(bo->tbo.base.resv, true, false, + r = dma_resv_wait_timeout(bo->tbo.base.resv, + DMA_RESV_USAGE_KERNEL, false, msecs_to_jiffies(10)); if (r == 0) r = -ETIMEDOUT; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index a96ae4c0e040..39d1736d13e8 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -2105,7 +2105,7 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm) struct dma_resv_iter cursor; struct dma_fence *fence; - 
dma_resv_for_each_fence(&cursor, resv, true, fence) { + dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_OTHER, fence) { /* Add a callback for each fence in the reservation object */ amdgpu_vm_prt_get(adev); amdgpu_vm_add_prt_cb(adev, fence); @@ -2707,7 +2707,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo) return true; /* Don't evict VM page tables while they are busy */ - if (!dma_resv_test_signaled(bo->tbo.base.resv, true)) + if (!dma_resv_test_signaled(bo->tbo.base.resv, DMA_RESV_USAGE_OTHER)) return false; /* Try to block ongoing updates */ @@ -2887,7 +2887,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size, */ long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout) { - timeout = dma_resv_wait_timeout(vm->root.bo->tbo.base.resv, true, + timeout = dma_resv_wait_timeout(vm->root.bo->tbo.base.resv, + DMA_RESV_USAGE_OTHER, true, timeout); if (timeout <= 0) return timeout; diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 4130082c5873..932edfab9cb0 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -9048,7 +9048,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state, * deadlock during GPU reset when this fence will not signal * but we hold reservation lock for the BO. */ - r = dma_resv_wait_timeout(abo->tbo.base.resv, true, false, + r = dma_resv_wait_timeout(abo->tbo.base.resv, + DMA_RESV_USAGE_WRITE, false, msecs_to_jiffies(5000)); if (unlikely(r <= 0)) DRM_ERROR("Waiting for fences timed out!"); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 4dcdec6487bb..d96355f98e75 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -770,7 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle, return -EINVAL; } - ret = dma_resv_wait_timeout(obj->resv, wait_all, true, timeout); + ret = dma_resv_wait_timeout(obj->resv, dma_resv_usage_rw(wait_all), + true, timeout); if (ret == 0) ret = -ETIME; else if (ret > 0) @@ -1344,7 +1345,8 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array, struct dma_fence *fence; int ret = 0; - dma_resv_for_each_fence(&cursor, obj->resv, write, fence) { + dma_resv_for_each_fence(&cursor, obj->resv, dma_resv_usage_rw(write), + fence) { ret = drm_gem_fence_array_add(fence_array, fence); if (ret) break; diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c index 9338ddb7edff..a6d89aed0bda 100644 --- a/drivers/gpu/drm/drm_gem_atomic_helper.c +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c @@ -151,7 +151,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st return 0; obj = drm_gem_fb_get_obj(state->fb, 0); - ret = dma_resv_get_singleton(obj->resv, false, &fence); + ret = dma_resv_get_singleton(obj->resv, DMA_RESV_USAGE_WRITE, &fence); if (ret) return ret; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index d5314aa28ff7..507172e2780b 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -380,12 +380,14 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op, } if (op & ETNA_PREP_NOSYNC) { - if (!dma_resv_test_signaled(obj->resv, write)) + if (!dma_resv_test_signaled(obj->resv, + dma_resv_usage_rw(write))) return -EBUSY; } else { unsigned long remain = etnaviv_timeout_to_jiffies(timeout); - ret = dma_resv_wait_timeout(obj->resv, write, true, 
remain); + ret = dma_resv_wait_timeout(obj->resv, dma_resv_usage_rw(write), + true, remain); if (ret <= 0) return ret == 0 ? -ETIMEDOUT : ret; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index d4a7073190ec..26cd93c8e9cc 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -178,17 +178,19 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; struct dma_resv *robj = bo->obj->base.resv; + enum dma_resv_usage usage; ret = dma_resv_reserve_shared(robj, 1); if (ret) return ret; if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) - continue; + usage = DMA_RESV_USAGE_KERNEL; + else + usage = dma_resv_usage_rw(bo->flags & ETNA_SUBMIT_BO_WRITE); - ret = dma_resv_get_fences(robj, - !!(bo->flags & ETNA_SUBMIT_BO_WRITE), - &bo->nr_shared, &bo->shared); + ret = dma_resv_get_fences(robj, usage, &bo->nr_shared, + &bo->shared); if (ret) return ret; } diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.c b/drivers/gpu/drm/i915/display/intel_atomic_plane.c index cdc68fb51ba6..e10f2536837b 100644 --- a/drivers/gpu/drm/i915/display/intel_atomic_plane.c +++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.c @@ -749,7 +749,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane, if (ret < 0) goto unpin_fb; - dma_resv_iter_begin(&cursor, obj->base.resv, false); + dma_resv_iter_begin(&cursor, obj->base.resv, + DMA_RESV_USAGE_WRITE); dma_resv_for_each_fence_unlocked(&cursor, fence) { add_rps_boost_after_vblank(new_plane_state->hw.crtc, fence); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c index 470fdfd61a0f..14a1c0ad8c3c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c @@ -138,12 +138,12 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data, * Alternatively, we can trade that extra information on read/write * activity with * args->busy = - * !dma_resv_test_signaled(obj->resv, true); + * !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ); * to report the overall busyness. This is what the wait-ioctl does. 
* */ args->busy = 0; - dma_resv_iter_begin(&cursor, obj->base.resv, true); + dma_resv_iter_begin(&cursor, obj->base.resv, DMA_RESV_USAGE_READ); dma_resv_for_each_fence_unlocked(&cursor, fence) { if (dma_resv_iter_is_restarted(&cursor)) args->busy = 0; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index 444f8268b9c5..ca2e14c65b3b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -66,7 +66,7 @@ bool __i915_gem_object_is_lmem(struct drm_i915_gem_object *obj) struct intel_memory_region *mr = READ_ONCE(obj->mm.region); #ifdef CONFIG_LOCKDEP - GEM_WARN_ON(dma_resv_test_signaled(obj->base.resv, true) && + GEM_WARN_ON(dma_resv_test_signaled(obj->base.resv, DMA_RESV_USAGE_OTHER) && i915_gem_object_evictable(obj)); #endif return mr && (mr->type == INTEL_MEMORY_LOCAL || diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 3173c9f9a040..4cec06cdb643 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -85,7 +85,7 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni, return true; /* we will unbind on next submission, still have userptr pins */ - r = dma_resv_wait_timeout(obj->base.resv, true, false, + r = dma_resv_wait_timeout(obj->base.resv, DMA_RESV_USAGE_OTHER, false, MAX_SCHEDULE_TIMEOUT); if (r <= 0) drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c index 75b58aa8d4a7..fff79e8f89ab 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c @@ -40,7 +40,8 @@ i915_gem_object_wait_reservation(struct dma_resv *resv, struct dma_fence *fence; long ret = timeout ?: 1; - dma_resv_iter_begin(&cursor, resv, flags & I915_WAIT_ALL); + dma_resv_iter_begin(&cursor, resv, + dma_resv_usage_rw(flags & I915_WAIT_ALL)); dma_resv_for_each_fence_unlocked(&cursor, fence) { ret = i915_gem_object_wait_fence(fence, flags, timeout); if (ret <= 0) @@ -124,7 +125,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj, struct dma_resv_iter cursor; struct dma_fence *fence; - dma_resv_iter_begin(&cursor, obj->base.resv, flags & I915_WAIT_ALL); + dma_resv_iter_begin(&cursor, obj->base.resv, + dma_resv_usage_rw(flags & I915_WAIT_ALL)); dma_resv_for_each_fence_unlocked(&cursor, fence) i915_gem_fence_wait_priority(fence, attr); dma_resv_iter_end(&cursor); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index 4a6bb64c3a35..95985bcc47f6 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -219,7 +219,8 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, goto out_detach; } - timeout = dma_resv_wait_timeout(dmabuf->resv, false, true, 5 * HZ); + timeout = dma_resv_wait_timeout(dmabuf->resv, DMA_RESV_USAGE_WRITE, + true, 5 * HZ); if (!timeout) { pr_err("dmabuf wait for exclusive fence timed out.\n"); timeout = -ETIME; diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 42cd17357771..f11c070c3262 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1541,7 +1541,8 @@ i915_request_await_object(struct i915_request *to, struct dma_fence *fence; int ret = 0; - dma_resv_for_each_fence(&cursor, 
obj->base.resv, write, fence) { + dma_resv_for_each_fence(&cursor, obj->base.resv, + dma_resv_usage_rw(write), fence) { ret = i915_request_await_dma_fence(to, fence); if (ret) break; diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c index 7ea0dbf81530..303d792a8912 100644 --- a/drivers/gpu/drm/i915/i915_sw_fence.c +++ b/drivers/gpu/drm/i915/i915_sw_fence.c @@ -579,7 +579,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence, debug_fence_assert(fence); might_sleep_if(gfpflags_allow_blocking(gfp)); - dma_resv_iter_begin(&cursor, resv, write); + dma_resv_iter_begin(&cursor, resv, dma_resv_usage_rw(write)); dma_resv_for_each_fence_unlocked(&cursor, f) { pending = i915_sw_fence_await_dma_fence(fence, f, timeout, gfp); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 2916480d9115..19e09a88dcca 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -848,7 +848,8 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout) op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout); long ret; - ret = dma_resv_wait_timeout(obj->resv, write, true, remain); + ret = dma_resv_wait_timeout(obj->resv, dma_resv_usage_rw(write), + true, remain); if (ret == 0) return remain == 0 ? -EBUSY : -ETIMEDOUT; else if (ret < 0) diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c index b55a8a723581..53baf9aae4b1 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c @@ -558,7 +558,8 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) asyw->image.handle[0] = ctxdma->object.handle; } - ret = dma_resv_get_singleton(nvbo->bo.base.resv, false, + ret = dma_resv_get_singleton(nvbo->bo.base.resv, + DMA_RESV_USAGE_WRITE, &asyw->state.fence); if (ret) return ret; diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 74f8652d2bd3..378182638c7a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -962,11 +962,11 @@ nouveau_bo_vm_cleanup(struct ttm_buffer_object *bo, struct dma_fence *fence; int ret; - /* TODO: This is actually a memory management dependency */ - ret = dma_resv_get_singleton(bo->base.resv, false, &fence); + ret = dma_resv_get_singleton(bo->base.resv, DMA_RESV_USAGE_KERNEL, + &fence); if (ret) - dma_resv_wait_timeout(bo->base.resv, false, false, - MAX_SCHEDULE_TIMEOUT); + dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_KERNEL, + false, MAX_SCHEDULE_TIMEOUT); nv10_bo_put_tile_region(dev, *old_tile, fence); *old_tile = new_tile; diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c index cd6715bd6d6b..26725e23c075 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fence.c +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c @@ -353,7 +353,8 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, if (ret) return ret; - dma_resv_for_each_fence(&cursor, resv, exclusive, fence) { + dma_resv_for_each_fence(&cursor, resv, dma_resv_usage_rw(exclusive), + fence) { struct nouveau_channel *prev = NULL; bool must_wait = true; diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 9416bee92141..fab542a758ff 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -962,7 +962,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data, return -ENOENT; nvbo = 
nouveau_gem_object(gem); - lret = dma_resv_wait_timeout(nvbo->bo.base.resv, write, true, + lret = dma_resv_wait_timeout(nvbo->bo.base.resv, + dma_resv_usage_rw(write), true, no_wait ? 0 : 30 * HZ); if (!lret) ret = -EBUSY; diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index 96bb5a465627..0deb2d21422f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -316,7 +316,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data, if (!gem_obj) return -ENOENT; - ret = dma_resv_wait_timeout(gem_obj->resv, true, true, timeout); + ret = dma_resv_wait_timeout(gem_obj->resv, DMA_RESV_USAGE_READ, + true, timeout); if (!ret) ret = timeout ? -ETIMEDOUT : -EBUSY; diff --git a/drivers/gpu/drm/qxl/qxl_debugfs.c b/drivers/gpu/drm/qxl/qxl_debugfs.c index 6a36b0fd845c..9a45fbadf80d 100644 --- a/drivers/gpu/drm/qxl/qxl_debugfs.c +++ b/drivers/gpu/drm/qxl/qxl_debugfs.c @@ -61,7 +61,8 @@ qxl_debugfs_buffers_info(struct seq_file *m, void *data) struct dma_fence *fence; int rel = 0; - dma_resv_iter_begin(&cursor, bo->tbo.base.resv, true); + dma_resv_iter_begin(&cursor, bo->tbo.base.resv, + DMA_RESV_USAGE_OTHER); dma_resv_for_each_fence_unlocked(&cursor, fence) { if (dma_resv_iter_is_restarted(&cursor)) rel = 0; diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c index a6f875118f01..9872d0b3e31a 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -533,7 +533,8 @@ static int radeon_crtc_page_flip_target(struct drm_crtc *crtc, DRM_ERROR("failed to pin new rbo buffer before flip\n"); goto cleanup; } - r = dma_resv_get_singleton(new_rbo->tbo.base.resv, false, &work->fence); + r = dma_resv_get_singleton(new_rbo->tbo.base.resv, DMA_RESV_USAGE_WRITE, + &work->fence); if (r) { radeon_bo_unreserve(new_rbo); DRM_ERROR("failed to get new rbo buffer fences\n"); diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index a36a4f2c76b0..603b0111d50d 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -161,7 +161,9 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj, } if (domain == RADEON_GEM_DOMAIN_CPU) { /* Asking for cpu access wait for object idle */ - r = dma_resv_wait_timeout(robj->tbo.base.resv, true, true, 30 * HZ); + r = dma_resv_wait_timeout(robj->tbo.base.resv, + DMA_RESV_USAGE_OTHER, + true, 30 * HZ); if (!r) r = -EBUSY; @@ -523,7 +525,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data, } robj = gem_to_radeon_bo(gobj); - r = dma_resv_test_signaled(robj->tbo.base.resv, true); + r = dma_resv_test_signaled(robj->tbo.base.resv, DMA_RESV_USAGE_READ); if (r == 0) r = -EBUSY; else @@ -552,7 +554,8 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data, } robj = gem_to_radeon_bo(gobj); - ret = dma_resv_wait_timeout(robj->tbo.base.resv, true, true, 30 * HZ); + ret = dma_resv_wait_timeout(robj->tbo.base.resv, DMA_RESV_USAGE_READ, + true, 30 * HZ); if (ret == 0) r = -EBUSY; else if (ret < 0) diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c index 9fa88549c89e..9613024da25e 100644 --- a/drivers/gpu/drm/radeon/radeon_mn.c +++ b/drivers/gpu/drm/radeon/radeon_mn.c @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn, return true; } - r = dma_resv_wait_timeout(bo->tbo.base.resv, true, false, - MAX_SCHEDULE_TIMEOUT); + r = dma_resv_wait_timeout(bo->tbo.base.resv, 
DMA_RESV_USAGE_OTHER, + false, MAX_SCHEDULE_TIMEOUT); if (r <= 0) DRM_ERROR("(%ld) failed to wait for user bo\n", r); diff --git a/drivers/gpu/drm/radeon/radeon_sync.c b/drivers/gpu/drm/radeon/radeon_sync.c index b991ba1bcd51..49bbb2266c0f 100644 --- a/drivers/gpu/drm/radeon/radeon_sync.c +++ b/drivers/gpu/drm/radeon/radeon_sync.c @@ -96,7 +96,7 @@ int radeon_sync_resv(struct radeon_device *rdev, struct dma_fence *f; int r = 0; - dma_resv_for_each_fence(&cursor, resv, shared, f) { + dma_resv_for_each_fence(&cursor, resv, dma_resv_usage_rw(shared), f) { fence = to_radeon_fence(f); if (fence && fence->rdev == rdev) radeon_sync_fence(sync, fence); diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c index 377f9cdb5b53..488e78889dd6 100644 --- a/drivers/gpu/drm/radeon/radeon_uvd.c +++ b/drivers/gpu/drm/radeon/radeon_uvd.c @@ -478,8 +478,8 @@ static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo, return -EINVAL; } - r = dma_resv_wait_timeout(bo->tbo.base.resv, false, false, - MAX_SCHEDULE_TIMEOUT); + r = dma_resv_wait_timeout(bo->tbo.base.resv, DMA_RESV_USAGE_KERNEL, + false, MAX_SCHEDULE_TIMEOUT); if (r <= 0) { DRM_ERROR("Failed waiting for UVD message (%d)!\n", r); return r ? r : -ETIME; diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 94fe51b3caa2..a53506d21635 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -703,7 +703,8 @@ int drm_sched_job_add_implicit_dependencies(struct drm_sched_job *job, struct dma_fence *fence; int ret; - dma_resv_for_each_fence(&cursor, obj->resv, write, fence) { + dma_resv_for_each_fence(&cursor, obj->resv, dma_resv_usage_rw(write), + fence) { ret = drm_sched_job_add_dependency(job, fence); if (ret) return ret; diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index e43f551594a8..3ca4882513c5 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -272,7 +272,7 @@ static void ttm_bo_flush_all_fences(struct ttm_buffer_object *bo) struct dma_resv_iter cursor; struct dma_fence *fence; - dma_resv_iter_begin(&cursor, resv, true); + dma_resv_iter_begin(&cursor, resv, DMA_RESV_USAGE_OTHER); dma_resv_for_each_fence_unlocked(&cursor, fence) { if (!fence->ops->signaled) dma_fence_enable_sw_signaling(fence); @@ -301,7 +301,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo, struct dma_resv *resv = &bo->base._resv; int ret; - if (dma_resv_test_signaled(resv, true)) + if (dma_resv_test_signaled(resv, DMA_RESV_USAGE_OTHER)) ret = 0; else ret = -EBUSY; @@ -313,7 +313,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo, dma_resv_unlock(bo->base.resv); spin_unlock(&bo->bdev->lru_lock); - lret = dma_resv_wait_timeout(resv, true, interruptible, + lret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_OTHER, + interruptible, 30 * HZ); if (lret < 0) @@ -416,7 +417,8 @@ static void ttm_bo_release(struct kref *kref) /* Last resort, if we fail to allocate memory for the * fences block for the BO to become idle */ - dma_resv_wait_timeout(bo->base.resv, true, false, + dma_resv_wait_timeout(bo->base.resv, + DMA_RESV_USAGE_OTHER, false, 30 * HZ); } @@ -427,7 +429,7 @@ static void ttm_bo_release(struct kref *kref) ttm_mem_io_free(bdev, bo->resource); } - if (!dma_resv_test_signaled(bo->base.resv, true) || + if (!dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_OTHER) || !dma_resv_trylock(bo->base.resv)) { /* The BO is not idle, resurrect it for delayed destroy */ 
ttm_bo_flush_all_fences(bo); @@ -1072,14 +1074,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo, long timeout = 15 * HZ; if (no_wait) { - if (dma_resv_test_signaled(bo->base.resv, true)) + if (dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_OTHER)) return 0; else return -EBUSY; } - timeout = dma_resv_wait_timeout(bo->base.resv, true, interruptible, - timeout); + timeout = dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_OTHER, + interruptible, timeout); if (timeout < 0) return timeout; diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c index a4cb296d4fcd..74ebadf4e592 100644 --- a/drivers/gpu/drm/vgem/vgem_fence.c +++ b/drivers/gpu/drm/vgem/vgem_fence.c @@ -130,6 +130,7 @@ int vgem_fence_attach_ioctl(struct drm_device *dev, struct vgem_file *vfile = file->driver_priv; struct dma_resv *resv; struct drm_gem_object *obj; + enum dma_resv_usage usage; struct dma_fence *fence; int ret; @@ -151,7 +152,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev, /* Check for a conflicting fence */ resv = obj->resv; - if (!dma_resv_test_signaled(resv, arg->flags & VGEM_FENCE_WRITE)) { + usage = dma_resv_usage_rw(arg->flags & VGEM_FENCE_WRITE); + if (!dma_resv_test_signaled(resv, usage)) { ret = -EBUSY; goto err_fence; } diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index 0007e423d885..35bcb015e714 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -518,9 +518,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data, return -ENOENT; if (args->flags & VIRTGPU_WAIT_NOWAIT) { - ret = dma_resv_test_signaled(obj->resv, true); + ret = dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ); } else { - ret = dma_resv_wait_timeout(obj->resv, true, true, timeout); + ret = dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, + true, timeout); } if (ret == 0) ret = -EBUSY; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index f81767f0a5cc..72715e452fdd 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -739,8 +739,8 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo, if (flags & drm_vmw_synccpu_allow_cs) { long lret; - lret = dma_resv_wait_timeout(bo->base.resv, true, true, - nonblock ? 0 : + lret = dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_READ, + true, nonblock ? 0 : MAX_SCHEDULE_TIMEOUT); if (!lret) return -EBUSY; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 23c3fc2cbf10..9e3dcbb573e7 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -1169,8 +1169,8 @@ int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start, if (bo->moving) dma_fence_put(bo->moving); - /* TODO: This is actually a memory management dependency */ - return dma_resv_get_singleton(bo->base.resv, false, + return dma_resv_get_singleton(bo->base.resv, + DMA_RESV_USAGE_KERNEL, &bo->moving); } diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c index d32cd7538835..fce80a4a5147 100644 --- a/drivers/infiniband/core/umem_dmabuf.c +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -67,7 +67,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf) * may be not up-to-date. Wait for the exporter to finish * the migration. 
*/ - return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, false, + return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, + DMA_RESV_USAGE_KERNEL, false, MAX_SCHEDULE_TIMEOUT); } EXPORT_SYMBOL(ib_umem_dmabuf_map_pages); diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 062571c04bca..37552935bca6 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -49,6 +49,86 @@ extern struct ww_class reservation_ww_class; struct dma_resv_list; +/** + * enum dma_resv_usage - how the fences from a dma_resv obj are used + * + * This enum describes the different use cases for a dma_resv object and + * controls which fences are returned when queried. + * + * An important fact is that there is the order KERNEL<WRITE<READ<OTHER and + * when the dma_resv object is asked for fences for one use case the fences + * for the lower use cases are returned as well. + */ +enum dma_resv_usage { + DMA_RESV_USAGE_KERNEL, + DMA_RESV_USAGE_WRITE, + DMA_RESV_USAGE_READ, + DMA_RESV_USAGE_OTHER, +}; [the per-value kerneldoc, the dma_resv_usage_rw() helper and the struct dma_resv_iter hunks were garbled in extraction; the surviving fragment resumes inside dma_resv_iter_begin():] cursor->obj = obj; - cursor->all_fences = all_fences; + cursor->usage = usage; } @@ -242,7 +322,7 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor) * dma_resv_for_each_fence - fence iterator * @cursor: a struct dma_resv_iter pointer * @obj: a dma_resv object pointer - * @all_fences: true if all fences should be returned + * @usage: controls which fences to return * @fence: the current fence * * Iterate over the fences in a struct dma_resv object while holding the @@ -251,8 +331,8 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor) * valid as long as the lock is held and so no extra reference to the fence is * taken. */ -#define dma_resv_for_each_fence(cursor, obj, all_fences, fence) \ - for (dma_resv_iter_begin(cursor, obj, all_fences), \ +#define dma_resv_for_each_fence(cursor, obj, usage, fence) \ + for (dma_resv_iter_begin(cursor, obj, usage), \ fence = dma_resv_iter_first(cursor); fence; \ fence = dma_resv_iter_next(cursor)) @@ -421,14 +501,14 @@ void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); void dma_resv_prune(struct dma_resv *obj); void dma_resv_prune_unlocked(struct dma_resv *obj); -int dma_resv_get_fences(struct dma_resv *obj, bool write, +int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage, unsigned int *num_fences, struct dma_fence ***fences); -int dma_resv_get_singleton(struct dma_resv *obj, bool write, +int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage, struct dma_fence **fence); int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); -long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr, - unsigned long timeout); -bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all); +long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage, + bool intr, unsigned long timeout); +bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage); void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq); #endif /* _LINUX_RESERVATION_H */
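To make the reworked read side concrete, here is a minimal sketch of a caller after this patch (hypothetical driver code, not part of the series; assume `resv` is the buffer's locked struct dma_resv, `write` says whether the caller will write the buffer, and `job` is a drm_sched job as in the drm/scheduler hunk above):

	struct dma_resv_iter cursor;
	struct dma_fence *fence;
	long ret;

	/* Wait only for kernel memory management work, e.g. a pending move: */
	ret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_KERNEL, true,
				    MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ret;

	/* Implicit sync: a writer must wait for readers and writers, a reader
	 * only for writers; dma_resv_usage_rw(write) picks the class. Because
	 * of the KERNEL<WRITE<READ<OTHER ordering, asking for one class also
	 * returns the fences of all lower classes.
	 */
	dma_resv_for_each_fence(&cursor, resv, dma_resv_usage_rw(write), fence) {
		ret = drm_sched_job_add_dependency(job, fence);
		if (ret)
			return ret;
	}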
From patchwork Tue Nov 23 14:21:08 2021 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 12634327 From: Christian König To: sumit.semwal@linaro.org, daniel@ffwll.ch Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH 23/26] dma-buf: specify usage while adding fences to dma_resv obj Date: Tue, 23 Nov 2021 15:21:08 +0100 Message-Id: <20211123142111.3885-24-christian.koenig@amd.com> In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com> References: <20211123142111.3885-1-christian.koenig@amd.com> Instead of distinguishing between shared and exclusive fences, specify the fence usage while adding fences. Rework all drivers to use this interface instead and deprecate the old one.
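To illustrate the driver-side conversion (a composite sketch modeled on the hunks below, not a quote from any single driver): where a driver previously reserved a shared slot and then picked one of two entry points based on a write flag,

	/* error handling elided */
	dma_resv_reserve_shared(obj->resv, 1);
	if (write)
		dma_resv_add_excl_fence(obj->resv, fence);
	else
		dma_resv_add_shared_fence(obj->resv, fence);

it now reserves fence slots and states the usage explicitly while adding:

	int ret = dma_resv_reserve_fences(obj->resv, 1);

	if (ret)
		return ret;
	dma_resv_add_fence(obj->resv, fence, write ?
			   DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ);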
Signed-off-by: Christian König --- drivers/dma-buf/dma-resv.c | 389 ++++++++---------- drivers/dma-buf/st-dma-resv.c | 109 ++--- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 +- drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 12 +- drivers/gpu/drm/i915/gem/i915_gem_busy.c | 13 +- drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 5 +- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 4 +- .../drm/i915/gem/selftests/i915_gem_migrate.c | 2 +- drivers/gpu/drm/i915/i915_vma.c | 10 +- .../drm/i915/selftests/intel_memory_region.c | 2 +- drivers/gpu/drm/lima/lima_gem.c | 4 +- drivers/gpu/drm/msm/msm_gem_submit.c | 4 +- drivers/gpu/drm/nouveau/nouveau_bo.c | 9 +- drivers/gpu/drm/nouveau/nouveau_fence.c | 2 +- drivers/gpu/drm/panfrost/panfrost_job.c | 2 +- drivers/gpu/drm/qxl/qxl_release.c | 5 +- drivers/gpu/drm/radeon/radeon_object.c | 6 +- drivers/gpu/drm/radeon/radeon_vm.c | 2 +- drivers/gpu/drm/ttm/ttm_bo.c | 6 +- drivers/gpu/drm/ttm/ttm_bo_util.c | 7 +- drivers/gpu/drm/ttm/ttm_execbuf_util.c | 10 +- drivers/gpu/drm/v3d/v3d_gem.c | 6 +- drivers/gpu/drm/vc4/vc4_gem.c | 4 +- drivers/gpu/drm/vgem/vgem_fence.c | 11 +- drivers/gpu/drm/virtio/virtgpu_gem.c | 5 +- drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 5 +- include/linux/dma-resv.h | 88 ++-- 31 files changed, 312 insertions(+), 434 deletions(-) diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 7ef8182a4b59..c1e5372bac6f 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -44,12 +44,12 @@ /** * DOC: Reservation Object Overview * - * The reservation object provides a mechanism to manage shared and - * exclusive fences associated with a buffer. A reservation object - * can have attached one exclusive fence (normally associated with - * write operations) or N shared fences (read operations). The RCU - * mechanism is used to protect read access to fences from locked - * write-side updates. + * The reservation object provides a mechanism to manage a container of + * dma_fence objects associated with a resource. A reservation object + * can have any number of fences attached to it. Each fence carries a usage + * parameter determining how the operation represented by the fence is using + * the resource. The RCU mechanism is used to protect read access to fences + * from locked write-side updates. * * See struct dma_resv for more details.
*/ @@ -57,36 +57,80 @@ DEFINE_WD_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class); +/* Mask for the lower fence pointer bits */ +#define DMA_RESV_LIST_MASK 0x3 + /** - * struct dma_resv_list - a list of shared fences + * struct dma_resv_list - an array of fences * @rcu: for internal use - * @shared_count: table of shared fences - * @shared_max: for growing shared fence table - * @shared: shared fence table + * @num_fences: number of fences in the table + * @max_fences: capacity of the fence table + * @table: fence table */ struct dma_resv_list { struct rcu_head rcu; - u32 shared_count, shared_max; - struct dma_fence __rcu *shared[]; + u32 num_fences, max_fences; + struct dma_fence __rcu *table[]; }; +/** + * dma_resv_list_entry - extract fence and usage from a list entry + * @list: the list to extract an entry from + * @index: which entry we want + * @check: lockdep check that the access is allowed + * @fence: the resulting fence + * @usage: the resulting usage + * + * Extract the fence and usage flags from an RCU protected entry in the list. + */ +static void dma_resv_list_entry(struct dma_resv_list *list, unsigned int index, + bool check, struct dma_fence **fence, + enum dma_resv_usage *usage) +{ + long tmp; + + tmp = (long)rcu_dereference_check(list->table[index], check); + *fence = (struct dma_fence *)(tmp & ~DMA_RESV_LIST_MASK); + if (usage) + *usage = tmp & DMA_RESV_LIST_MASK; +} + +/** + * dma_resv_list_set - set fence and usage at a specific index + * @list: the list to modify + * @index: where to make the change + * @fence: the fence to set + * @usage: the usage to set + * + * Set the fence and usage flags at the specific index in the list. + */ +static void dma_resv_list_set(struct dma_resv_list *list, + unsigned int index, + struct dma_fence *fence, + enum dma_resv_usage usage) +{ + long tmp = ((long)fence) | usage; + + RCU_INIT_POINTER(list->table[index], (struct dma_fence *)tmp); +} + /** * dma_resv_list_alloc - allocate fence list - * @shared_max: number of fences we need space for + * @max_fences: number of fences we need space for * * Allocate a new dma_resv_list and make sure to correctly initialize - * shared_max. + * max_fences.
*/ -static struct dma_resv_list *dma_resv_list_alloc(unsigned int shared_max) +static struct dma_resv_list *dma_resv_list_alloc(unsigned int max_fences) { struct dma_resv_list *list; - list = kmalloc(struct_size(list, shared, shared_max), GFP_KERNEL); + list = kmalloc(struct_size(list, table, max_fences), GFP_KERNEL); if (!list) return NULL; - list->shared_max = (ksize(list) - offsetof(typeof(*list), shared)) / - sizeof(*list->shared); + list->max_fences = (ksize(list) - offsetof(typeof(*list), table)) / + sizeof(*list->table); return list; } @@ -104,9 +148,12 @@ static void dma_resv_list_free(struct dma_resv_list *list) if (!list) return; - for (i = 0; i < list->shared_count; ++i) - dma_fence_put(rcu_dereference_protected(list->shared[i], true)); + for (i = 0; i < list->num_fences; ++i) { + struct dma_fence *fence; + dma_resv_list_entry(list, i, true, &fence, NULL); + dma_fence_put(fence); + } kfree_rcu(list, rcu); } @@ -125,15 +172,15 @@ static void dma_resv_list_prune(struct dma_resv_list *list, if (!list) return; - for (i = 0; i < list->shared_count; ++i) { + for (i = 0; i < list->num_fences; ++i) { struct dma_fence *fence; - fence = rcu_dereference_protected(list->shared[i], - dma_resv_held(obj)); + dma_resv_list_entry(list, i, true, &fence, NULL); if (!dma_fence_is_signaled(fence)) continue; - RCU_INIT_POINTER(list->shared[i], dma_fence_get_stub()); + dma_resv_list_set(list, i, dma_fence_get_stub(), + DMA_RESV_USAGE_OTHER); dma_fence_put(fence); } } @@ -147,8 +194,7 @@ void dma_resv_init(struct dma_resv *obj) ww_mutex_init(&obj->lock, &reservation_ww_class); seqcount_ww_mutex_init(&obj->seq, &obj->lock); - RCU_INIT_POINTER(obj->fence, NULL); - RCU_INIT_POINTER(obj->fence_excl, NULL); + RCU_INIT_POINTER(obj->fences, NULL); } EXPORT_SYMBOL(dma_resv_init); @@ -158,81 +204,54 @@ EXPORT_SYMBOL(dma_resv_init); */ void dma_resv_fini(struct dma_resv *obj) { - struct dma_resv_list *fobj; - struct dma_fence *excl; - /* * This object should be dead and all references must have * been released to it, so no need to be protected with rcu. */ - excl = rcu_dereference_protected(obj->fence_excl, 1); - if (excl) - dma_fence_put(excl); - - fobj = rcu_dereference_protected(obj->fence, 1); - dma_resv_list_free(fobj); + dma_resv_list_free(rcu_dereference_protected(obj->fences, true)); ww_mutex_destroy(&obj->lock); } EXPORT_SYMBOL(dma_resv_fini); /** - * dma_resv_excl_fence - return the object's exclusive fence - * @obj: the reservation object - * - * Returns the exclusive fence (if any). Caller must either hold the objects - * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), - * or one of the variants of each - * - * RETURNS - * The exclusive fence or NULL - */ -static inline struct dma_fence * -dma_resv_excl_fence(struct dma_resv *obj) -{ - return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj)); -} - -/** - * dma_resv_shared_list - get the reservation object's shared fence list + * dma_resv_fences_list - return the dma_resv object's fence list * @obj: the reservation object * - * Returns the shared fence list. Caller must either hold the objects - * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), - * or one of the variants of each + * Returns the fence list. Caller must either hold the objects through + * dma_resv_lock() or the RCU read side lock through rcu_read_lock(). 
*/ -static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj) +static inline struct dma_resv_list *dma_resv_fences_list(struct dma_resv *obj) { - return rcu_dereference_check(obj->fence, dma_resv_held(obj)); + return rcu_dereference_check(obj->fences, dma_resv_held(obj)); } /** - * dma_resv_reserve_shared - Reserve space to add shared fences to - * a dma_resv. + * dma_resv_reserve_fences - Reserve space to add fences to a dma_resv object. * @obj: reservation object * @num_fences: number of fences we want to add * - * Should be called before dma_resv_add_shared_fence(). Must - * be called with @obj locked through dma_resv_lock(). + * Should be called before dma_resv_add_fence(). Must be called with @obj + * locked through dma_resv_lock(). * * Note that the preallocated slots need to be re-reserved if @obj is unlocked - * at any time before calling dma_resv_add_shared_fence(). This is validated - * when CONFIG_DEBUG_MUTEXES is enabled. + * at any time before calling dma_resv_add_fence(). This is validated when + * CONFIG_DEBUG_MUTEXES is enabled. * * RETURNS * Zero for success, or -errno */ -int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences) +int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences) { struct dma_resv_list *old, *new; unsigned int i, j, k, max; dma_resv_assert_held(obj); - old = dma_resv_shared_list(obj); - if (old && old->shared_max) { - if ((old->shared_count + num_fences) <= old->shared_max) + old = dma_resv_fences_list(obj); + if (old && old->max_fences) { + if ((old->num_fences + num_fences) <= old->max_fences) return 0; - max = max(old->shared_count + num_fences, old->shared_max * 2); + max = max(old->num_fences + num_fences, old->max_fences * 2); } else { max = max(4ul, roundup_pow_of_two(num_fences)); } @@ -247,27 +266,27 @@ int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences) * references from the old struct are carried over to * the new. */ - for (i = 0, j = 0, k = max; i < (old ? old->shared_count : 0); ++i) { + for (i = 0, j = 0, k = max; i < (old ? old->num_fences : 0); ++i) { + enum dma_resv_usage usage; struct dma_fence *fence; - fence = rcu_dereference_protected(old->shared[i], - dma_resv_held(obj)); + dma_resv_list_entry(old, i, dma_resv_held(obj), &fence, &usage); if (dma_fence_is_signaled(fence)) - RCU_INIT_POINTER(new->shared[--k], fence); + RCU_INIT_POINTER(new->table[--k], fence); else - RCU_INIT_POINTER(new->shared[j++], fence); + dma_resv_list_set(new, j++, fence, usage); } - new->shared_count = j; + new->num_fences = j; /* * We are not changing the effective set of fences here so can * merely update the pointer to the new array; both existing * readers and new readers will see exactly the same set of - * active (unsignaled) shared fences. Individual fences and the + * active (unsignaled) fences. Individual fences and the * old array are protected by RCU and so will not vanish under * the gaze of the rcu_read_lock() readers. 
*/ - rcu_assign_pointer(obj->fence, new); + rcu_assign_pointer(obj->fences, new); if (!old) return 0; @@ -276,7 +295,7 @@ int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences) for (i = k; i < max; ++i) { struct dma_fence *fence; - fence = rcu_dereference_protected(new->shared[i], + fence = rcu_dereference_protected(new->table[i], dma_resv_held(obj)); dma_fence_put(fence); } @@ -284,41 +303,44 @@ int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences) return 0; } -EXPORT_SYMBOL(dma_resv_reserve_shared); +EXPORT_SYMBOL(dma_resv_reserve_fences); #ifdef CONFIG_DEBUG_MUTEXES /** - * dma_resv_reset_shared_max - reset shared fences for debugging + * dma_resv_reset_max_fences - reset fences for debugging * @obj: the dma_resv object to reset * - * Reset the number of pre-reserved shared slots to test that drivers do - * correct slot allocation using dma_resv_reserve_shared(). See also - * &dma_resv_list.shared_max. + * Reset the number of pre-reserved fence slots to test that drivers do + * correct slot allocation using dma_resv_reserve_fences(). See also + * &dma_resv_list.max_fences. */ -void dma_resv_reset_shared_max(struct dma_resv *obj) +void dma_resv_reset_max_fences(struct dma_resv *obj) { - struct dma_resv_list *fences = dma_resv_shared_list(obj); + struct dma_resv_list *fences = dma_resv_fences_list(obj); dma_resv_assert_held(obj); - /* Test shared fence slot reservation */ + /* Test fence slot reservation */ if (fences) - fences->shared_max = fences->shared_count; + fences->max_fences = fences->num_fences; } -EXPORT_SYMBOL(dma_resv_reset_shared_max); +EXPORT_SYMBOL(dma_resv_reset_max_fences); #endif /** - * dma_resv_add_shared_fence - Add a fence to a shared slot + * dma_resv_add_fence - Add a fence to the dma_resv obj * @obj: the reservation object - * @fence: the shared fence to add + * @fence: the fence to add + * @usage: how the fence is used with the resource protected by the dma_resv + * obj. * - * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and - * dma_resv_reserve_shared() has been called. + * Add a fence to a slot, @obj must be locked with dma_resv_lock(), and + * dma_resv_reserve_fences() has been called. * * See also &dma_resv.fence for a discussion of the semantics. 
*/ -void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence) +void dma_resv_add_fence(struct dma_resv *obj, struct dma_fence *fence, + enum dma_resv_usage usage) { struct dma_resv_list *fobj; struct dma_fence *old; @@ -328,39 +350,41 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence) dma_resv_assert_held(obj); - fobj = dma_resv_shared_list(obj); - count = fobj->shared_count; + fobj = dma_resv_fences_list(obj); + count = fobj->num_fences; write_seqcount_begin(&obj->seq); for (i = 0; i < count; ++i) { + enum dma_resv_usage old_usage; - old = rcu_dereference_protected(fobj->shared[i], - dma_resv_held(obj)); - if (old->context == fence->context || + dma_resv_list_entry(fobj, i, dma_resv_held(obj), + &old, &old_usage); + if ((old->context == fence->context && old_usage >= usage) || dma_fence_is_signaled(old)) goto replace; } - BUG_ON(fobj->shared_count >= fobj->shared_max); + BUG_ON(fobj->num_fences >= fobj->max_fences); old = NULL; count++; replace: - RCU_INIT_POINTER(fobj->shared[i], fence); - /* pointer update must be visible before we extend the shared_count */ - smp_store_mb(fobj->shared_count, count); + dma_resv_list_set(fobj, i, fence, usage); + /* pointer update must be visible before we extend the num_fences */ + smp_store_mb(fobj->num_fences, count); write_seqcount_end(&obj->seq); dma_fence_put(old); } -EXPORT_SYMBOL(dma_resv_add_shared_fence); +EXPORT_SYMBOL(dma_resv_add_fence); /** * dma_resv_replace_fences - replace fences in the dma_resv obj * @obj: the reservation object * @context: the context of the fences to replace * @replacement: the new fence to use instead + * @usage: how the fence is used * * Replace fences with a specified context with a new fence. Only valid if the * operation represented by the original fences is completed or has no longer @@ -368,64 +392,30 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence); * completes. */ void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, - struct dma_fence *replacement) + struct dma_fence *replacement, + enum dma_resv_usage usage) { struct dma_resv_list *list; - struct dma_fence *old; unsigned int i; dma_resv_assert_held(obj); + list = dma_resv_fences_list(obj); write_seqcount_begin(&obj->seq); + for (i = 0; list && i < list->num_fences; ++i) { + struct dma_fence *old; - old = dma_resv_excl_fence(obj); - if (old->context == context) { - RCU_INIT_POINTER(obj->fence_excl, dma_fence_get(replacement)); - dma_fence_put(old); - } - - list = dma_resv_shared_list(obj); - for (i = 0; list && i < list->shared_count; ++i) { - old = rcu_dereference_protected(list->shared[i], - dma_resv_held(obj)); + dma_resv_list_entry(list, i, dma_resv_held(obj), &old, NULL); if (old->context != context) continue; - rcu_assign_pointer(list->shared[i], dma_fence_get(replacement)); + dma_resv_list_set(list, i, replacement, usage); dma_fence_put(old); } - write_seqcount_end(&obj->seq); } EXPORT_SYMBOL(dma_resv_replace_fences); -/** - * dma_resv_add_excl_fence - Add an exclusive fence. - * @obj: the reservation object - * @fence: the exclusive fence to add - * - * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock(). - * Note that this function replaces all fences attached to @obj, see also - * &dma_resv.fence_excl for a discussion of the semantics. 
- */ -void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence) -{ - struct dma_fence *old_fence = dma_resv_excl_fence(obj); - - dma_resv_assert_held(obj); - - dma_fence_get(fence); - - write_seqcount_begin(&obj->seq); - /* write_seqcount_begin provides the necessary memory barrier */ - RCU_INIT_POINTER(obj->fence_excl, fence); - dma_resv_list_prune(dma_resv_shared_list(obj), obj); - write_seqcount_end(&obj->seq); - - dma_fence_put(old_fence); -} -EXPORT_SYMBOL(dma_resv_add_excl_fence); - /** * dma_resv_prune - remove signaled fences * @obj: The dma_resv object to prune @@ -437,10 +427,7 @@ void dma_resv_prune(struct dma_resv *obj) dma_resv_assert_held(obj); write_seqcount_begin(&obj->seq); - if (obj->fence_excl && dma_fence_is_signaled(obj->fence_excl)) - dma_fence_put(rcu_replace_pointer(obj->fence_excl, NULL, - dma_resv_held(obj))); - dma_resv_list_prune(dma_resv_shared_list(obj), obj); + dma_resv_list_prune(dma_resv_fences_list(obj), obj); write_seqcount_end(&obj->seq); } EXPORT_SYMBOL(dma_resv_prune_unlocked); @@ -471,15 +458,11 @@ EXPORT_SYMBOL(dma_resv_prune); static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor) { cursor->seq = read_seqcount_begin(&cursor->obj->seq); - cursor->index = -1; - cursor->shared_count = 0; - if (cursor->usage >= DMA_RESV_USAGE_READ) { - cursor->fences = dma_resv_shared_list(cursor->obj); - if (cursor->fences) - cursor->shared_count = cursor->fences->shared_count; - } else { - cursor->fences = NULL; - } + cursor->index = 0; + cursor->num_fences = 0; + cursor->fences = dma_resv_fences_list(cursor->obj); + if (cursor->fences) + cursor->num_fences = cursor->fences->num_fences; cursor->is_restarted = true; } @@ -493,31 +476,29 @@ static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor) */ static void dma_resv_iter_walk_unlocked(struct dma_resv_iter *cursor) { - struct dma_resv *obj = cursor->obj; + if (!cursor->fences) + return; do { /* Drop the reference from the previous round */ dma_fence_put(cursor->fence); - if (cursor->index == -1) { - cursor->fence = dma_resv_excl_fence(obj); - cursor->index++; - if (!cursor->fence) - continue; - - } else if (!cursor->fences || - cursor->index >= cursor->shared_count) { + if (cursor->index >= cursor->num_fences) { cursor->fence = NULL; break; - } else { - struct dma_resv_list *fences = cursor->fences; - unsigned int idx = cursor->index++; - - cursor->fence = rcu_dereference(fences->shared[idx]); } + + dma_resv_list_entry(cursor->fences, cursor->index++, + dma_resv_held(cursor->obj), + &cursor->fence, + &cursor->fence_usage); cursor->fence = dma_fence_get_rcu(cursor->fence); - if (!cursor->fence || !dma_fence_is_signaled(cursor->fence)) + if (!cursor->fence) + break; + + if (!dma_fence_is_signaled(cursor->fence) && + cursor->usage >= cursor->fence_usage) break; } while (true); } @@ -580,15 +561,9 @@ struct dma_fence *dma_resv_iter_first(struct dma_resv_iter *cursor) dma_resv_assert_held(cursor->obj); cursor->index = 0; - if (cursor->usage >= DMA_RESV_USAGE_READ) - cursor->fences = dma_resv_shared_list(cursor->obj); - else - cursor->fences = NULL; - - fence = dma_resv_excl_fence(cursor->obj); - if (!fence) - fence = dma_resv_iter_next(cursor); + cursor->fences = dma_resv_fences_list(cursor->obj); + fence = dma_resv_iter_next(cursor); cursor->is_restarted = true; return fence; } @@ -603,17 +578,18 @@ EXPORT_SYMBOL_GPL(dma_resv_iter_first); */ struct dma_fence *dma_resv_iter_next(struct dma_resv_iter *cursor) { - unsigned int idx; + struct dma_fence *fence; 
dma_resv_assert_held(cursor->obj); cursor->is_restarted = false; - if (!cursor->fences || cursor->index >= cursor->fences->shared_count) + if (!cursor->fences || cursor->index >= cursor->fences->num_fences) return NULL; - idx = cursor->index++; - return rcu_dereference_protected(cursor->fences->shared[idx], - dma_resv_held(cursor->obj)); + dma_resv_list_entry(cursor->fences, cursor->index++, + dma_resv_held(cursor->obj), + &fence, &cursor->fence_usage); + return fence; } EXPORT_SYMBOL_GPL(dma_resv_iter_next); @@ -628,57 +604,43 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src) { struct dma_resv_iter cursor; struct dma_resv_list *list; - struct dma_fence *f, *excl; + struct dma_fence *f; dma_resv_assert_held(dst); list = NULL; - excl = NULL; dma_resv_iter_begin(&cursor, src, DMA_RESV_USAGE_OTHER); dma_resv_for_each_fence_unlocked(&cursor, f) { if (dma_resv_iter_is_restarted(&cursor)) { dma_resv_list_free(list); - dma_fence_put(excl); - - if (cursor.shared_count) { - list = dma_resv_list_alloc(cursor.shared_count); - if (!list) { - dma_resv_iter_end(&cursor); - return -ENOMEM; - } - - list->shared_count = 0; - } else { - list = NULL; + list = dma_resv_list_alloc(cursor.num_fences); + if (!list) { + dma_resv_iter_end(&cursor); + return -ENOMEM; } - excl = NULL; + list->num_fences = 0; } dma_fence_get(f); - if (dma_resv_iter_is_exclusive(&cursor)) - excl = f; - else - RCU_INIT_POINTER(list->shared[list->shared_count++], f); + dma_resv_list_set(list, list->num_fences++, f, + dma_resv_iter_usage(&cursor)); } dma_resv_iter_end(&cursor); write_seqcount_begin(&dst->seq); - excl = rcu_replace_pointer(dst->fence_excl, excl, dma_resv_held(dst)); - list = rcu_replace_pointer(dst->fence, list, dma_resv_held(dst)); + list = rcu_replace_pointer(dst->fences, list, dma_resv_held(dst)); write_seqcount_end(&dst->seq); dma_resv_list_free(list); - dma_fence_put(excl); - return 0; } EXPORT_SYMBOL(dma_resv_copy_fences); /** - * dma_resv_get_fences - Get an object's shared and exclusive - * fences without update side lock held + * dma_resv_get_fences - Get an object's fences without the update side + * lock held * @obj: the reservation object * @usage: controls which fences to include * @@ -707,7 +669,7 @@ int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage, while (*num_fences) dma_fence_put((*fences)[--(*num_fences)]); - count = cursor.shared_count + 1; + count = cursor.num_fences + 1; /* Eventually re-allocate the array */ *fences = krealloc_array(*fences, count, @@ -777,8 +739,7 @@ int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage, EXPORT_SYMBOL_GPL(dma_resv_get_singleton); /** - * dma_resv_wait_timeout - Wait on reservation's objects - * shared and/or exclusive fences. + * dma_resv_wait_timeout - Wait on a reservation object's fences * @obj: the reservation object * @usage: controls which fences to include in the wait * @intr: if true, do interruptible wait @@ -851,13 +812,13 @@ EXPORT_SYMBOL_GPL(dma_resv_test_signaled); */ void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq) { + static const char *usage[] = { "kernel", "write", "read", "other" }; struct dma_resv_iter cursor; struct dma_fence *fence; dma_resv_for_each_fence(&cursor, obj, DMA_RESV_USAGE_OTHER, fence) { seq_printf(seq, "\t%s fence:", - dma_resv_iter_is_exclusive(&cursor) ?
- "Exclusive" : "Shared"); + usage[dma_resv_iter_usage(&cursor)]); dma_fence_describe(fence, seq); } } diff --git a/drivers/dma-buf/st-dma-resv.c b/drivers/dma-buf/st-dma-resv.c index a52c5fbea87a..bef5872de1af 100644 --- a/drivers/dma-buf/st-dma-resv.c +++ b/drivers/dma-buf/st-dma-resv.c @@ -58,8 +58,9 @@ static int sanitycheck(void *arg) return r; } -static int test_signaling(void *arg, enum dma_resv_usage usage) +static int test_signaling(void *arg) { + enum dma_resv_usage usage = (unsigned long)arg; struct dma_resv resv; struct dma_fence *f; int r; @@ -75,17 +76,13 @@ static int test_signaling(void *arg, enum dma_resv_usage usage) goto err_free; } - r = dma_resv_reserve_shared(&resv, 1); + r = dma_resv_reserve_fences(&resv, 1); if (r) { pr_err("Resv shared slot allocation failed\n"); goto err_unlock; } - if (usage >= DMA_RESV_USAGE_READ) - dma_resv_add_shared_fence(&resv, f); - else - dma_resv_add_excl_fence(&resv, f); - + dma_resv_add_fence(&resv, f, usage); if (dma_resv_test_signaled(&resv, usage)) { pr_err("Resv unexpectedly signaled\n"); r = -EINVAL; @@ -105,18 +102,9 @@ static int test_signaling(void *arg, enum dma_resv_usage usage) return r; } -static int test_excl_signaling(void *arg) -{ - return test_signaling(arg, DMA_RESV_USAGE_WRITE); -} - -static int test_shared_signaling(void *arg) -{ - return test_signaling(arg, DMA_RESV_USAGE_READ); -} - -static int test_for_each(void *arg, enum dma_resv_usage usage) +static int test_for_each(void *arg) { + enum dma_resv_usage usage = (unsigned long)arg; struct dma_resv_iter cursor; struct dma_fence *f, *fence; struct dma_resv resv; @@ -133,16 +121,13 @@ static int test_for_each(void *arg, enum dma_resv_usage usage) goto err_free; } - r = dma_resv_reserve_shared(&resv, 1); + r = dma_resv_reserve_fences(&resv, 1); if (r) { pr_err("Resv shared slot allocation failed\n"); goto err_unlock; } - if (usage >= DMA_RESV_USAGE_READ) - dma_resv_add_shared_fence(&resv, f); - else - dma_resv_add_excl_fence(&resv, f); + dma_resv_add_fence(&resv, f, usage); r = -ENOENT; dma_resv_for_each_fence(&cursor, &resv, usage, fence) { @@ -156,8 +141,7 @@ static int test_for_each(void *arg, enum dma_resv_usage usage) r = -EINVAL; goto err_unlock; } - if (dma_resv_iter_is_exclusive(&cursor) != - (usage >= DMA_RESV_USAGE_READ)) { + if (dma_resv_iter_usage(&cursor) != usage) { pr_err("Unexpected fence usage\n"); r = -EINVAL; goto err_unlock; @@ -177,18 +161,9 @@ static int test_for_each(void *arg, enum dma_resv_usage usage) return r; } -static int test_excl_for_each(void *arg) -{ - return test_for_each(arg, DMA_RESV_USAGE_WRITE); -} - -static int test_shared_for_each(void *arg) -{ - return test_for_each(arg, DMA_RESV_USAGE_READ); -} - -static int test_for_each_unlocked(void *arg, enum dma_resv_usage usage) +static int test_for_each_unlocked(void *arg) { + enum dma_resv_usage usage = (unsigned long)arg; struct dma_resv_iter cursor; struct dma_fence *f, *fence; struct dma_resv resv; @@ -205,17 +180,14 @@ static int test_for_each_unlocked(void *arg, enum dma_resv_usage usage) goto err_free; } - r = dma_resv_reserve_shared(&resv, 1); + r = dma_resv_reserve_fences(&resv, 1); if (r) { pr_err("Resv shared slot allocation failed\n"); dma_resv_unlock(&resv); goto err_free; } - if (usage >= DMA_RESV_USAGE_READ) - dma_resv_add_shared_fence(&resv, f); - else - dma_resv_add_excl_fence(&resv, f); + dma_resv_add_fence(&resv, f, usage); dma_resv_unlock(&resv); r = -ENOENT; @@ -235,8 +207,7 @@ static int test_for_each_unlocked(void *arg, enum dma_resv_usage usage) r = -EINVAL; goto 
err_iter_end; } - if (dma_resv_iter_is_exclusive(&cursor) != - (usage >= DMA_RESV_USAGE_READ)) { + if (dma_resv_iter_usage(&cursor) != usage) { pr_err("Unexpected fence usage\n"); r = -EINVAL; goto err_iter_end; @@ -262,18 +233,9 @@ static int test_for_each_unlocked(void *arg, enum dma_resv_usage usage) return r; } -static int test_excl_for_each_unlocked(void *arg) -{ - return test_for_each_unlocked(arg, DMA_RESV_USAGE_WRITE); -} - -static int test_shared_for_each_unlocked(void *arg) -{ - return test_for_each_unlocked(arg, DMA_RESV_USAGE_READ); -} - -static int test_get_fences(void *arg, enum dma_resv_usage usage) +static int test_get_fences(void *arg) { + enum dma_resv_usage usage = (unsigned long)arg; struct dma_fence *f, **fences = NULL; struct dma_resv resv; int r, i; @@ -289,17 +251,14 @@ static int test_get_fences(void *arg, enum dma_resv_usage usage) goto err_resv; } - r = dma_resv_reserve_shared(&resv, 1); + r = dma_resv_reserve_fences(&resv, 1); if (r) { pr_err("Resv shared slot allocation failed\n"); dma_resv_unlock(&resv); goto err_resv; } - if (usage >= DMA_RESV_USAGE_READ) - dma_resv_add_shared_fence(&resv, f); - else - dma_resv_add_excl_fence(&resv, f); + dma_resv_add_fence(&resv, f, usage); dma_resv_unlock(&resv); r = dma_resv_get_fences(&resv, usage, &i, &fences); @@ -324,30 +283,24 @@ static int test_get_fences(void *arg, enum dma_resv_usage usage) return r; } -static int test_excl_get_fences(void *arg) -{ - return test_get_fences(arg, DMA_RESV_USAGE_WRITE); -} - -static int test_shared_get_fences(void *arg) -{ - return test_get_fences(arg, DMA_RESV_USAGE_READ); -} - int dma_resv(void) { static const struct subtest tests[] = { SUBTEST(sanitycheck), - SUBTEST(test_excl_signaling), - SUBTEST(test_shared_signaling), - SUBTEST(test_excl_for_each), - SUBTEST(test_shared_for_each), - SUBTEST(test_excl_for_each_unlocked), - SUBTEST(test_shared_for_each_unlocked), - SUBTEST(test_excl_get_fences), - SUBTEST(test_shared_get_fences), + SUBTEST(test_signaling), + SUBTEST(test_for_each), + SUBTEST(test_for_each_unlocked), + SUBTEST(test_get_fences), }; + enum dma_resv_usage usage; + int r; spin_lock_init(&fence_lock); - return subtests(tests, NULL); + for (usage = DMA_RESV_USAGE_KERNEL; usage <= DMA_RESV_USAGE_OTHER; + ++usage) { + r = subtests(tests, (void *)(unsigned long)usage); + if (r) + return r; + } + return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index b558ef0f8c4a..0201a44ff630 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -246,7 +246,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo, */ replacement = dma_fence_get_stub(); dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context, - replacement); + replacement, DMA_RESV_USAGE_OTHER); dma_fence_put(replacement); return 0; } @@ -1207,7 +1207,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info, AMDGPU_FENCE_OWNER_KFD, false); if (ret) goto wait_pd_fail; - ret = dma_resv_reserve_shared(vm->root.bo->tbo.base.resv, 1); + ret = dma_resv_reserve_fences(vm->root.bo->tbo.base.resv, 1); if (ret) goto reserve_shared_fail; amdgpu_bo_fence(vm->root.bo, @@ -2454,7 +2454,7 @@ int amdgpu_amdkfd_add_gws_to_process(void *info, void *gws, struct kgd_mem **mem * Add process eviction fence to bo so they can * evict each other. 
*/ - ret = dma_resv_reserve_shared(gws_bo->tbo.base.resv, 1); + ret = dma_resv_reserve_fences(gws_bo->tbo.base.resv, 1); if (ret) goto reserve_shared_fail; amdgpu_bo_fence(gws_bo, &process_info->eviction_fence->base, true); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index af0a61ce2ec7..92091e800022 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -54,8 +54,8 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p, bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj)); p->uf_entry.priority = 0; p->uf_entry.tv.bo = &bo->tbo; - /* One for TTM and one for the CS job */ - p->uf_entry.tv.num_shared = 2; + /* One for TTM and two for the CS job */ + p->uf_entry.tv.num_shared = 3; drm_gem_object_put(gobj); @@ -1285,7 +1285,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, break; } dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1); - rcu_assign_pointer(resv->fence_excl, &chain->base); + dma_resv_add_fence(resv, &chain->base, DMA_RESV_USAGE_WRITE); e->chain = NULL; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index e99958290782..a40ede9bccd0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -1376,10 +1376,8 @@ void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence, return; } - if (shared) - dma_resv_add_shared_fence(resv, fence); - else - dma_resv_add_excl_fence(resv, fence); + dma_resv_add_fence(resv, fence, shared ? DMA_RESV_USAGE_READ : + DMA_RESV_USAGE_WRITE); } /** diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index 39d1736d13e8..6e631254dc7f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -2969,7 +2969,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm) if (r) goto error_free_root; - r = dma_resv_reserve_shared(root_bo->tbo.base.resv, 1); + r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1); if (r) goto error_unreserve; @@ -3412,7 +3412,7 @@ bool amdgpu_vm_handle_fault(struct amdgpu_device *adev, u32 pasid, value = 0; } - r = dma_resv_reserve_shared(root->tbo.base.resv, 1); + r = dma_resv_reserve_fences(root->tbo.base.resv, 1); if (r) { pr_debug("failed %d to reserve fence slot\n", r); goto error_unlock; diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c index 16137c4247bb..a30aa9bf41a4 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c @@ -524,7 +524,7 @@ svm_range_vram_node_new(struct amdgpu_device *adev, struct svm_range *prange, goto reserve_bo_failed; } - r = dma_resv_reserve_shared(bo->tbo.base.resv, 1); + r = dma_resv_reserve_fences(bo->tbo.base.resv, 1); if (r) { pr_debug("failed %d to reserve bo\n", r); amdgpu_bo_unreserve(bo); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 26cd93c8e9cc..2f1a57dce8d5 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -180,7 +180,7 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) struct dma_resv *robj = bo->obj->base.resv; enum dma_resv_usage usage; - ret = dma_resv_reserve_shared(robj, 1); + ret = dma_resv_reserve_fences(robj, 1); if (ret) return ret; @@ -204,14 +204,10 @@ static void submit_attach_object_fences(struct etnaviv_gem_submit *submit) for (i = 0; i < 
submit->nr_bos; i++) { struct drm_gem_object *obj = &submit->bos[i].obj->base; + bool write = submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE; - if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE) - dma_resv_add_excl_fence(obj->resv, - submit->out_fence); - else - dma_resv_add_shared_fence(obj->resv, - submit->out_fence); - + dma_resv_add_fence(obj->resv, submit->out_fence, write ? + DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ); submit_unlock_object(submit, i); } } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c index 14a1c0ad8c3c..e7ae94ee1b44 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c @@ -148,12 +148,13 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data, if (dma_resv_iter_is_restarted(&cursor)) args->busy = 0; - if (dma_resv_iter_is_exclusive(&cursor)) - /* Translate the exclusive fence to the READ *and* WRITE engine */ - args->busy |= busy_check_writer(fence); - else - /* Translate shared fences to READ set of engines */ - args->busy |= busy_check_reader(fence); + /* Translate read fences to READ set of engines */ + args->busy |= busy_check_reader(fence); + } + dma_resv_iter_begin(&cursor, obj->base.resv, DMA_RESV_USAGE_WRITE); + dma_resv_for_each_fence_unlocked(&cursor, fence) { + /* Translate the write fences to the READ *and* WRITE engine */ + args->busy |= busy_check_writer(fence); } dma_resv_iter_end(&cursor); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c index fc57ab914b60..b9281ca96ece 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c @@ -101,14 +101,15 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, clflush = NULL; if (!(flags & I915_CLFLUSH_SYNC) && - dma_resv_reserve_shared(obj->base.resv, 1) == 0) + dma_resv_reserve_fences(obj->base.resv, 1) == 0) clflush = clflush_work_create(obj); if (clflush) { i915_sw_fence_await_reservation(&clflush->base.chain, obj->base.resv, NULL, true, i915_fence_timeout(to_i915(obj->base.dev)), I915_FENCE_GFP); - dma_resv_add_excl_fence(obj->base.resv, &clflush->base.dma); + dma_resv_add_fence(obj->base.resv, &clflush->base.dma, + DMA_RESV_USAGE_KERNEL); dma_fence_work_commit(&clflush->base); } else if (obj->mm.pages) { __do_clflush(obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index fc0e1625847c..3eb3716a35f1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -989,7 +989,7 @@ static int eb_validate_vmas(struct i915_execbuffer *eb) } } - err = dma_resv_reserve_shared(vma->resv, 1); + err = dma_resv_reserve_fences(vma->resv, 1); if (err) return err; @@ -2162,7 +2162,7 @@ static int eb_parse(struct i915_execbuffer *eb) goto err_trampoline; } - err = dma_resv_reserve_shared(shadow->resv, 1); + err = dma_resv_reserve_fences(shadow->resv, 1); if (err) goto err_trampoline; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c index 2bf491fd5cdf..2b4cde8250b6 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c @@ -179,7 +179,7 @@ static int igt_lmem_pages_migrate(void *arg) i915_gem_object_is_lmem(obj), 0xdeadbeaf, &rq); if (rq) { - err = dma_resv_reserve_shared(obj->base.resv, 1); + err = dma_resv_reserve_fences(obj->base.resv, 1); if (!err) 
dma_resv_add_excl_fence(obj->base.resv, &rq->fence); diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 5ec87de63963..23d92b233b3b 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -1256,25 +1256,27 @@ int _i915_vma_move_to_active(struct i915_vma *vma, } if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { - err = dma_resv_reserve_shared(vma->resv, 1); + err = dma_resv_reserve_fences(vma->resv, 1); if (unlikely(err)) return err; } if (fence) { - dma_resv_add_excl_fence(vma->resv, fence); + dma_resv_add_fence(vma->resv, fence, + DMA_RESV_USAGE_WRITE); obj->write_domain = I915_GEM_DOMAIN_RENDER; obj->read_domains = 0; } } else { if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { - err = dma_resv_reserve_shared(vma->resv, 1); + err = dma_resv_reserve_fences(vma->resv, 1); if (unlikely(err)) return err; } if (fence) { - dma_resv_add_shared_fence(vma->resv, fence); + dma_resv_add_fence(vma->resv, fence, + DMA_RESV_USAGE_READ); obj->write_domain = 0; } } diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index b85af1672a7e..3e0edee1b238 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -895,7 +895,7 @@ static int igt_lmem_write_cpu(void *arg) i915_gem_object_lock(obj, NULL); - err = dma_resv_reserve_shared(obj->base.resv, 1); + err = dma_resv_reserve_fences(obj->base.resv, 1); if (err) { i915_gem_object_unlock(obj); goto out_put; diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 487581e2f716..a60d91b3c445 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -257,7 +257,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, { int err; - err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); + err = dma_resv_reserve_fences(lima_bo_resv(bo), 1); if (err) return err; @@ -365,7 +365,7 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit) if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE) dma_resv_add_excl_fence(lima_bo_resv(bos[i]), fence); else - dma_resv_add_shared_fence(lima_bo_resv(bos[i]), fence); + dma_resv_add_fence(lima_bo_resv(bos[i]), fence, + DMA_RESV_USAGE_READ); } drm_gem_unlock_reservations((struct drm_gem_object **)bos, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index e874d09b74ef..dcaaa7e2a342 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -325,7 +325,7 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) * strange place to call it. OTOH this is a * convenient can-fail point to hook it in.
*/ - ret = dma_resv_reserve_shared(obj->resv, 1); + ret = dma_resv_reserve_fences(obj->resv, 1); if (ret) return ret; @@ -397,7 +397,7 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit) if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) dma_resv_add_excl_fence(obj->resv, submit->user_fence); else if (submit->bos[i].flags & MSM_SUBMIT_BO_READ) - dma_resv_add_shared_fence(obj->resv, submit->user_fence); + dma_resv_add_fence(obj->resv, submit->user_fence, + DMA_RESV_USAGE_READ); } } diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 378182638c7a..13deb6c70ba6 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1308,10 +1308,11 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence, bool excl { struct dma_resv *resv = nvbo->bo.base.resv; - if (exclusive) - dma_resv_add_excl_fence(resv, &fence->base); - else if (fence) - dma_resv_add_shared_fence(resv, &fence->base); + if (!fence) + return; + + dma_resv_add_fence(resv, &fence->base, exclusive ? + DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ); } static void diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c index 26725e23c075..02a7b86b74a7 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fence.c +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c @@ -349,7 +349,7 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, struct nouveau_fence *f; int ret; - ret = dma_resv_reserve_shared(resv, 1); + ret = dma_resv_reserve_fences(resv, 1); if (ret) return ret; diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 89c3fe389476..cca50cbb7f0b 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -247,7 +247,7 @@ static int panfrost_acquire_object_fences(struct drm_gem_object **bos, int i, ret; for (i = 0; i < bo_count; i++) { - ret = dma_resv_reserve_shared(bos[i]->resv, 1); + ret = dma_resv_reserve_fences(bos[i]->resv, 1); if (ret) return ret; diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c index 469979cd0341..368d26da0d6a 100644 --- a/drivers/gpu/drm/qxl/qxl_release.c +++ b/drivers/gpu/drm/qxl/qxl_release.c @@ -200,7 +200,7 @@ static int qxl_release_validate_bo(struct qxl_bo *bo) return ret; } - ret = dma_resv_reserve_shared(bo->tbo.base.resv, 1); + ret = dma_resv_reserve_fences(bo->tbo.base.resv, 1); if (ret) return ret; @@ -429,7 +429,8 @@ void qxl_release_fence_buffer_objects(struct qxl_release *release) list_for_each_entry(entry, &release->bos, head) { bo = entry->bo; - dma_resv_add_shared_fence(bo->base.resv, &release->base); + dma_resv_add_fence(bo->base.resv, &release->base, + DMA_RESV_USAGE_READ); ttm_bo_move_to_lru_tail_unlocked(bo); dma_resv_unlock(bo->base.resv); } diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 56ede9d63b12..915194345e20 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -809,8 +809,6 @@ void radeon_bo_fence(struct radeon_bo *bo, struct radeon_fence *fence, { struct dma_resv *resv = bo->tbo.base.resv; - if (shared) - dma_resv_add_shared_fence(resv, &fence->base); - else - dma_resv_add_excl_fence(resv, &fence->base); + dma_resv_add_fence(resv, &fence->base, shared ?
+ DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE); } diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c index bb53016f3138..987cabbf1318 100644 --- a/drivers/gpu/drm/radeon/radeon_vm.c +++ b/drivers/gpu/drm/radeon/radeon_vm.c @@ -831,7 +831,7 @@ static int radeon_vm_update_ptes(struct radeon_device *rdev, int r; radeon_sync_resv(rdev, &ib->sync, pt->tbo.base.resv, true); - r = dma_resv_reserve_shared(pt->tbo.base.resv, 1); + r = dma_resv_reserve_fences(pt->tbo.base.resv, 1); if (r) return r; diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 3ca4882513c5..95e6633775f2 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -762,9 +762,9 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, return ret; } - dma_resv_add_shared_fence(bo->base.resv, fence); + dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL); - ret = dma_resv_reserve_shared(bo->base.resv, 1); + ret = dma_resv_reserve_fences(bo->base.resv, 1); if (unlikely(ret)) { dma_fence_put(fence); return ret; @@ -823,7 +823,7 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, bool type_found = false; int i, ret; - ret = dma_resv_reserve_shared(bo->base.resv, 1); + ret = dma_resv_reserve_fences(bo->base.resv, 1); if (unlikely(ret)) return ret; diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index ea9eabcc0a0c..b9cfb62c4b6e 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -243,7 +243,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, ret = dma_resv_trylock(&fbo->base.base._resv); WARN_ON(!ret); - ret = dma_resv_reserve_shared(&fbo->base.base._resv, 1); + ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1); if (ret) { kfree(fbo); return ret; @@ -503,7 +503,8 @@ static int ttm_bo_move_to_ghost(struct ttm_buffer_object *bo, if (ret) return ret; - dma_resv_add_excl_fence(&ghost_obj->base._resv, fence); + dma_resv_add_fence(&ghost_obj->base._resv, fence, + DMA_RESV_USAGE_KERNEL); /** * If we're not moving to fixed memory, the TTM object @@ -558,7 +559,7 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo, struct ttm_resource_manager *man = ttm_manager_type(bdev, new_mem->mem_type); int ret = 0; - dma_resv_add_excl_fence(bo->base.resv, fence); + dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL); if (!evict) ret = ttm_bo_move_to_ghost(bo, fence, man->use_tt); else if (!from->use_tt && pipeline) diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c index 5da922639d54..0eb995d25df1 100644 --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c @@ -103,7 +103,7 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, num_fences = min(entry->num_shared, 1u); if (!ret) { - ret = dma_resv_reserve_shared(bo->base.resv, + ret = dma_resv_reserve_fences(bo->base.resv, num_fences); if (!ret) continue; @@ -120,7 +120,7 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, } if (!ret) - ret = dma_resv_reserve_shared(bo->base.resv, + ret = dma_resv_reserve_fences(bo->base.resv, num_fences); if (unlikely(ret != 0)) { @@ -154,10 +154,8 @@ void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket, list_for_each_entry(entry, list, head) { struct ttm_buffer_object *bo = entry->bo; - if (entry->num_shared) - dma_resv_add_shared_fence(bo->base.resv, fence); - else - dma_resv_add_excl_fence(bo->base.resv, fence); + dma_resv_add_fence(bo->base.resv, 
fence, entry->num_shared ? + DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE); ttm_bo_move_to_lru_tail_unlocked(bo); dma_resv_unlock(bo->base.resv); } diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 1bea90e40ce1..e3a09e33822f 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -259,7 +259,7 @@ v3d_lock_bo_reservations(struct v3d_job *job, return ret; for (i = 0; i < job->bo_count; i++) { - ret = dma_resv_reserve_shared(job->bo[i]->resv, 1); + ret = dma_resv_reserve_fences(job->bo[i]->resv, 1); if (ret) goto fail; @@ -550,8 +550,8 @@ v3d_attach_fences_and_unlock_reservation(struct drm_file *file_priv, for (i = 0; i < job->bo_count; i++) { /* XXX: Use shared fences for read-only objects. */ - dma_resv_add_excl_fence(job->bo[i]->resv, - job->done_fence); + dma_resv_add_fence(job->bo[i]->resv, job->done_fence, + DMA_RESV_USAGE_WRITE); } drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx); diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c index 445d3bab89e0..e925e65fcd9d 100644 --- a/drivers/gpu/drm/vc4/vc4_gem.c +++ b/drivers/gpu/drm/vc4/vc4_gem.c @@ -543,7 +543,7 @@ vc4_update_bo_seqnos(struct vc4_exec_info *exec, uint64_t seqno) bo = to_vc4_bo(&exec->bo[i]->base); bo->seqno = seqno; - dma_resv_add_shared_fence(bo->base.base.resv, exec->fence); + dma_resv_add_fence(bo->base.base.resv, exec->fence, DMA_RESV_USAGE_READ); } list_for_each_entry(bo, &exec->unref_list, unref_head) { @@ -641,7 +641,7 @@ vc4_lock_bo_reservations(struct drm_device *dev, for (i = 0; i < exec->bo_count; i++) { bo = &exec->bo[i]->base; - ret = dma_resv_reserve_shared(bo->resv, 1); + ret = dma_resv_reserve_fences(bo->resv, 1); if (ret) { vc4_unlock_bo_reservations(dev, exec, acquire_ctx); return ret; diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c index 74ebadf4e592..c2a879734d40 100644 --- a/drivers/gpu/drm/vgem/vgem_fence.c +++ b/drivers/gpu/drm/vgem/vgem_fence.c @@ -160,13 +160,10 @@ int vgem_fence_attach_ioctl(struct drm_device *dev, /* Expose the fence via the dma-buf */ dma_resv_lock(resv, NULL); - ret = dma_resv_reserve_shared(resv, 1); - if (!ret) { - if (arg->flags & VGEM_FENCE_WRITE) - dma_resv_add_excl_fence(resv, fence); - else - dma_resv_add_shared_fence(resv, fence); - } + ret = dma_resv_reserve_fences(resv, 1); + if (!ret) + dma_resv_add_fence(resv, fence, arg->flags & VGEM_FENCE_WRITE ?
+ DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ); dma_resv_unlock(resv); /* Record the fence in our idr for later signaling */ diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index aec105cdd64c..a24ab63063e5 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -227,7 +227,7 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs) return ret; for (i = 0; i < objs->nents; ++i) { - ret = dma_resv_reserve_shared(objs->objs[i]->resv, 1); + ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1); if (ret) return ret; } @@ -250,7 +250,8 @@ void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs, int i; for (i = 0; i < objs->nents; i++) - dma_resv_add_excl_fence(objs->objs[i]->resv, fence); + dma_resv_add_fence(objs->objs[i]->resv, fence, + DMA_RESV_USAGE_WRITE); } void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index 72715e452fdd..b18fc58ba6ef 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -1062,9 +1062,10 @@ void vmw_bo_fence_single(struct ttm_buffer_object *bo, else dma_fence_get(&fence->base); - ret = dma_resv_reserve_shared(bo->base.resv, 1); + ret = dma_resv_reserve_fences(bo->base.resv, 1); if (!ret) - dma_resv_add_excl_fence(bo->base.resv, &fence->base); + dma_resv_add_fence(bo->base.resv, &fence->base, + DMA_RESV_USAGE_KERNEL); else /* Last resort fallback when we are OOM */ dma_fence_wait(&fence->base, false); diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 37552935bca6..92d3be0c41d8 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -132,8 +132,8 @@ static inline enum dma_resv_usage dma_resv_usage_rw(bool write) /** * struct dma_resv - a reservation object manages fences for a buffer * - * There are multiple uses for this, with sometimes slightly different rules in - * how the fence slots are used. + * This is a container for dma_fence objects which needs to handle multiple use + * cases. * * One use is to synchronize cross-driver access to a struct dma_buf, either for * dynamic buffer management or just to handle implicit synchronization between * @@ -163,59 +163,22 @@ struct dma_resv { * @seq: * * Sequence count for managing RCU read-side synchronization, allows - * read-only access to @fence_excl and @fence while ensuring we take a - * consistent snapshot. + * read-only access to @fences while ensuring we take a consistent + * snapshot. */ seqcount_ww_mutex_t seq; /** - * @fence_excl: + * @fences: * - * The exclusive fence, if there is one currently. + * Array of fences which were added to the dma_resv object * - * There are two ways to update this fence: - * - * - First by calling dma_resv_add_excl_fence(), which replaces all - * fences attached to the reservation object. To guarantee that no - * fences are lost, this new fence must signal only after all previous - * fences, both shared and exclusive, have signalled. In some cases it - * is convenient to achieve that by attaching a struct dma_fence_array - * with all the new and old fences. - * - * - Alternatively the fence can be set directly, which leaves the - * shared fences unchanged. To guarantee that no fences are lost, this - * new fence must signal only after the previous exclusive fence has - * signalled. Since the shared fences are staying intact, it is not - * necessary to maintain any ordering against those.
If semantically - * only a new access is added without actually treating the previous - * one as a dependency the exclusive fences can be strung together - * using struct dma_fence_chain. - * - * Note that actual semantics of what an exclusive or shared fence mean - * is defined by the user, for reservation objects shared across drivers - * see &dma_buf.resv. - */ - struct dma_fence __rcu *fence_excl; - - /** - * @fence: - * - * List of current shared fences. - * - * There are no ordering constraints of shared fences against the - * exclusive fence slot. If a waiter needs to wait for all access, it - * has to wait for both sets of fences to signal. - * - * A new fence is added by calling dma_resv_add_shared_fence(). Since - * this often needs to be done past the point of no return in command + * A new fence is added by calling dma_resv_add_fence(). Since this + * often needs to be done past the point of no return in command * submission it cannot fail, and therefore sufficient slots need to be - * reserved by calling dma_resv_reserve_shared(). - * - * Note that actual semantics of what an exclusive or shared fence mean - * is defined by the user, for reservation objects shared across drivers - * see &dma_buf.resv. + * reserved by calling dma_resv_reserve_fences(). */ - struct dma_resv_list __rcu *fence; + struct dma_resv_list __rcu *fences; }; /** @@ -233,6 +196,9 @@ struct dma_resv_iter { /** @fence: the currently handled fence */ struct dma_fence *fence; + /** @fence_usage: the usage of the current fence */ + enum dma_resv_usage fence_usage; + /** @seq: sequence number to check for modifications */ unsigned int seq; @@ -242,8 +208,8 @@ struct dma_resv_iter { /** @fences: the shared fences; private, *MUST* not dereference */ struct dma_resv_list *fences; - /** @shared_count: number of shared fences */ - unsigned int shared_count; + /** @num_fences: number of fences */ + unsigned int num_fences; /** @is_restarted: true if this is the first returned fence */ bool is_restarted; @@ -282,14 +248,15 @@ static inline void dma_resv_iter_end(struct dma_resv_iter *cursor) } /** - * dma_resv_iter_is_exclusive - test if the current fence is the exclusive one + * dma_resv_iter_usage - Return the usage of the current fence * @cursor: the cursor of the current position * - * Returns true if the currently returned fence is the exclusive one. + * Returns the usage of the currently processed fence. 
*/ -static inline bool dma_resv_iter_is_exclusive(struct dma_resv_iter *cursor) +static inline enum dma_resv_usage +dma_resv_iter_usage(struct dma_resv_iter *cursor) { - return cursor->index == 0; + return cursor->fence_usage; } /** @@ -340,9 +307,9 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor) #define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base) #ifdef CONFIG_DEBUG_MUTEXES -void dma_resv_reset_shared_max(struct dma_resv *obj); +void dma_resv_reset_max_fences(struct dma_resv *obj); #else -static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {} +static inline void dma_resv_reset_max_fences(struct dma_resv *obj) {} #endif /** @@ -488,17 +455,18 @@ static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj) */ static inline void dma_resv_unlock(struct dma_resv *obj) { - dma_resv_reset_shared_max(obj); + dma_resv_reset_max_fences(obj); ww_mutex_unlock(&obj->lock); } void dma_resv_init(struct dma_resv *obj); void dma_resv_fini(struct dma_resv *obj); -int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences); -void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence); +int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences); +void dma_resv_add_fence(struct dma_resv *obj, struct dma_fence *fence, + enum dma_resv_usage usage); void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, - struct dma_fence *fence); -void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); + struct dma_fence *fence, + enum dma_resv_usage usage); void dma_resv_prune(struct dma_resv *obj); void dma_resv_prune_unlocked(struct dma_resv *obj); int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage, From patchwork Tue Nov 23 14:21:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12634331 From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: sumit.semwal@linaro.org, daniel@ffwll.ch Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH 24/26] dma-buf: wait for map to complete for static attachments Date: Tue, 23 Nov 2021 15:21:09 +0100 Message-Id: <20211123142111.3885-25-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com> References: <20211123142111.3885-1-christian.koenig@amd.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org We have previously done that in the individual drivers but it is more defensive to move that into the common code. Dynamic attachments should wait for map operations to complete by themselves.
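For context, a minimal importer-side sketch of what this buys. The importer function is hypothetical; the dma-buf entry points are the ones touched above. An attachment created without importer ops is static, so the dma_resv_wait_timeout() added to __map_dma_buf() covers it and the importer needs no wait of its own for pending kernel moves:

#include <linux/dma-buf.h>

/* Hypothetical static importer; error handling trimmed to the essentials. */
static int example_import(struct device *dev, struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	/* No importer ops, therefore a static attachment. */
	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/*
	 * With this patch map_dma_buf implicitly waits for the
	 * DMA_RESV_USAGE_KERNEL fences (e.g. a pending move) before
	 * the sg_table is returned.
	 */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		return PTR_ERR(sgt);
	}

	/* ... program the device using sgt ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);
	return 0;
}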
Signed-off-by: Christian König --- drivers/dma-buf/dma-buf.c | 18 +++++++++++++++--- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 +------------- drivers/gpu/drm/nouveau/nouveau_prime.c | 17 +---------------- drivers/gpu/drm/radeon/radeon_prime.c | 16 +++------------- 4 files changed, 20 insertions(+), 45 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 528983d3ba64..d3dd602c4753 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -660,12 +660,24 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach, enum dma_data_direction direction) { struct sg_table *sg_table; + signed long ret; sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction); + if (IS_ERR_OR_NULL(sg_table)) + return sg_table; + + if (!dma_buf_attachment_is_dynamic(attach)) { + ret = dma_resv_wait_timeout(attach->dmabuf->resv, + DMA_RESV_USAGE_KERNEL, true, + MAX_SCHEDULE_TIMEOUT); + if (ret < 0) { + attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, + direction); + return ERR_PTR(ret); + } + } - if (!IS_ERR_OR_NULL(sg_table)) - mangle_sg_table(sg_table); - + mangle_sg_table(sg_table); return sg_table; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index ae6ab93c868b..57a7a603f987 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -105,21 +105,9 @@ static int amdgpu_dma_buf_pin(struct dma_buf_attachment *attach) { struct drm_gem_object *obj = attach->dmabuf->priv; struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - int r; /* pin buffer into GTT */ - r = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT); - if (r) - return r; - - if (bo->tbo.moving) { - r = dma_fence_wait(bo->tbo.moving, true); - if (r) { - amdgpu_bo_unpin(bo); - return r; - } - } - return 0; + return amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT); } /** diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index 60019d0532fc..347488685f74 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -93,22 +93,7 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj) if (ret) return -EINVAL; - ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); - if (ret) - goto error; - - if (nvbo->bo.moving) - ret = dma_fence_wait(nvbo->bo.moving, true); - - ttm_bo_unreserve(&nvbo->bo); - if (ret) - goto error; - - return ret; - -error: - nouveau_bo_unpin(nvbo); - return ret; + return 0; } void nouveau_gem_prime_unpin(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index 4a90807351e7..42a87948e28c 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -77,19 +77,9 @@ int radeon_gem_prime_pin(struct drm_gem_object *obj) /* pin buffer into GTT */ ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL); - if (unlikely(ret)) - goto error; - - if (bo->tbo.moving) { - ret = dma_fence_wait(bo->tbo.moving, false); - if (unlikely(ret)) { - radeon_bo_unpin(bo); - goto error; - } - } - - bo->prime_shared_count++; -error: + if (likely(ret == 0)) + bo->prime_shared_count++; + radeon_bo_unreserve(bo); return ret; } From patchwork Tue Nov 23 14:21:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12634325 From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: sumit.semwal@linaro.org, daniel@ffwll.ch Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH 25/26] amdgpu: remove DMA-buf fence workaround Date: Tue, 23 Nov 2021 15:21:10 +0100 Message-Id: <20211123142111.3885-26-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com> References: <20211123142111.3885-1-christian.koenig@amd.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Not needed any more now that this is handled inside the framework.
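To spell out the pattern the series converges on for driver code: reserve a fence slot first (which can fail), then add the fence with an explicit usage (which cannot). A minimal sketch under that assumption; example_attach_fence() is hypothetical, while dma_resv_reserve_fences() and dma_resv_add_fence() are as declared earlier in this series:

static int example_attach_fence(struct dma_resv *resv,
				struct dma_fence *fence, bool is_write)
{
	int ret;

	dma_resv_assert_held(resv);

	/* Reserve up front; adding must not fail past the point of no return. */
	ret = dma_resv_reserve_fences(resv, 1);
	if (ret)
		return ret;

	dma_resv_add_fence(resv, fence, is_write ?
			   DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ);
	return 0;
}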
Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h | 1 - drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 52 +++------------------ 2 files changed, 6 insertions(+), 47 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h index 044b41f0bfd9..529d52a204cf 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h @@ -34,7 +34,6 @@ struct amdgpu_fpriv; struct amdgpu_bo_list_entry { struct ttm_validate_buffer tv; struct amdgpu_bo_va *bo_va; - struct dma_fence_chain *chain; uint32_t priority; struct page **user_pages; bool user_invalidated; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 92091e800022..413606d10080 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -576,14 +576,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); e->bo_va = amdgpu_vm_bo_find(vm, bo); - - if (bo->tbo.base.dma_buf && !amdgpu_bo_explicit_sync(bo)) { - e->chain = dma_fence_chain_alloc(); - if (!e->chain) { - r = -ENOMEM; - goto error_validate; - } - } } amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, @@ -634,13 +626,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, } error_validate: - if (r) { - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - dma_fence_chain_free(e->chain); - e->chain = NULL; - } + if (r) ttm_eu_backoff_reservation(&p->ticket, &p->validated); - } out: return r; } @@ -680,17 +667,9 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, { unsigned i; - if (error && backoff) { - struct amdgpu_bo_list_entry *e; - - amdgpu_bo_list_for_each_entry(e, parser->bo_list) { - dma_fence_chain_free(e->chain); - e->chain = NULL; - } - + if (error && backoff) ttm_eu_backoff_reservation(&parser->ticket, &parser->validated); - } for (i = 0; i < parser->num_post_deps; i++) { drm_syncobj_put(parser->post_deps[i].syncobj); @@ -1265,29 +1244,10 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm); - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - struct dma_resv *resv = e->tv.bo->base.resv; - struct dma_fence_chain *chain = e->chain; - struct dma_resv_iter cursor; - struct dma_fence *fence; - - if (!chain) - continue; - - /* - * Work around dma_resv shortcommings by wrapping up the - * submission in a dma_fence_chain and add it as exclusive - * fence. 
- */ - dma_resv_for_each_fence(&cursor, resv, - DMA_RESV_USAGE_WRITE, - fence) { - break; - } - dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1); - dma_resv_add_fence(resv, &chain->base, DMA_RESV_USAGE_WRITE); - e->chain = NULL; - } + /* For now manually add the resulting fence as writer as well */ + amdgpu_bo_list_for_each_entry(e, p->bo_list) + dma_resv_add_fence(e->tv.bo->base.resv, p->fence, + DMA_RESV_USAGE_WRITE); ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence); mutex_unlock(&p->adev->notifier_lock); From patchwork Tue Nov 23 14:21:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12634329 From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: sumit.semwal@linaro.org, daniel@ffwll.ch Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH 26/26] drm/ttm: remove bo->moving Date: Tue, 23 Nov 2021 15:21:11 +0100 Message-Id: <20211123142111.3885-27-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com> References: <20211123142111.3885-1-christian.koenig@amd.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org This is now handled by the DMA-buf framework in the dma_resv obj. Signed-off-by: Christian König --- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 13 ++++--- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 7 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c | 11 +++--- drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 11 ++++-- drivers/gpu/drm/ttm/ttm_bo.c | 10 ++---- drivers/gpu/drm/ttm/ttm_bo_util.c | 7 ---- drivers/gpu/drm/ttm/ttm_bo_vm.c | 34 +++++++------------ drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 6 ---- include/drm/ttm/ttm_bo_api.h | 2 -- 9 files changed, 40 insertions(+), 61 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 0201a44ff630..a32035297995 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -2330,6 +2330,8 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) struct amdgpu_bo *bo = mem->bo; uint32_t domain = mem->domain; struct kfd_mem_attachment *attachment; + struct dma_resv_iter cursor; + struct dma_fence *fence; total_size += amdgpu_bo_size(bo); @@ -2344,10 +2346,13 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) goto validate_map_fail; } } - ret = amdgpu_sync_fence(&sync_obj, bo->tbo.moving); - if (ret) { - pr_debug("Memory eviction: Sync BO fence failed. Try again\n"); - goto validate_map_fail; + dma_resv_for_each_fence(&cursor, bo->tbo.base.resv, + DMA_RESV_USAGE_KERNEL, fence) { + ret = amdgpu_sync_fence(&sync_obj, fence); + if (ret) { + pr_debug("Memory eviction: Sync BO fence failed.
Try again\n"); + goto validate_map_fail; + } } list_for_each_entry(attachment, &mem->attachments, list) { if (!attachment->is_mapped) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index a40ede9bccd0..3881a503a7bf 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -608,9 +608,8 @@ int amdgpu_bo_create(struct amdgpu_device *adev, if (unlikely(r)) goto fail_unreserve; - amdgpu_bo_fence(bo, fence, false); - dma_fence_put(bo->tbo.moving); - bo->tbo.moving = dma_fence_get(fence); + dma_resv_add_fence(bo->tbo.base.resv, fence, + DMA_RESV_USAGE_KERNEL); dma_fence_put(fence); } if (!bp->resv) @@ -1290,7 +1289,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo) r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence); if (!WARN_ON(r)) { - amdgpu_bo_fence(abo, fence, false); + dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL); dma_fence_put(fence); } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c index e3fbf0f10add..31913ae86de6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c @@ -74,13 +74,12 @@ static int amdgpu_vm_cpu_update(struct amdgpu_vm_update_params *p, { unsigned int i; uint64_t value; - int r; + long r; - if (vmbo->bo.tbo.moving) { - r = dma_fence_wait(vmbo->bo.tbo.moving, true); - if (r) - return r; - } + r = dma_resv_wait_timeout(vmbo->bo.tbo.base.resv, DMA_RESV_USAGE_KERNEL, + true, MAX_SCHEDULE_TIMEOUT); + if (r < 0) + return r; pe += (unsigned long)amdgpu_bo_kptr(&vmbo->bo); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c index dbb551762805..bdb44cee19d3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c @@ -204,14 +204,19 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p, struct amdgpu_bo *bo = &vmbo->bo; enum amdgpu_ib_pool_type pool = p->immediate ? AMDGPU_IB_POOL_IMMEDIATE : AMDGPU_IB_POOL_DELAYED; + struct dma_resv_iter cursor; unsigned int i, ndw, nptes; + struct dma_fence *fence; uint64_t *pte; int r; /* Wait for PD/PT moves to be completed */ - r = amdgpu_sync_fence(&p->job->sync, bo->tbo.moving); - if (r) - return r; + dma_resv_for_each_fence(&cursor, bo->tbo.base.resv, + DMA_RESV_USAGE_KERNEL, fence) { + r = amdgpu_sync_fence(&p->job->sync, fence); + if (r) + return r; + } do { ndw = p->num_dw_left; diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 95e6633775f2..5db5c9ba166c 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -468,7 +468,6 @@ static void ttm_bo_release(struct kref *kref) dma_resv_unlock(bo->base.resv); atomic_dec(&ttm_glob.bo_count); - dma_fence_put(bo->moving); bo->destroy(bo); } @@ -737,9 +736,8 @@ int ttm_mem_evict_first(struct ttm_device *bdev, } /* - * Add the last move fence to the BO and reserve a new shared slot. We only use - * a shared slot to avoid unecessary sync and rely on the subsequent bo move to - * either stall or use an exclusive fence respectively set bo->moving. + * Add the last move fence to the BO as kernel dependency and reserve a new + * fence slot. 
*/ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, struct ttm_resource_manager *man, @@ -769,9 +767,6 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, dma_fence_put(fence); return ret; } - - dma_fence_put(bo->moving); - bo->moving = fence; return 0; } @@ -978,7 +973,6 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, bo->bdev = bdev; bo->type = type; bo->page_alignment = page_alignment; - bo->moving = NULL; bo->pin_count = 0; bo->sg = sg; if (resv) { diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index b9cfb62c4b6e..95de2691ee7c 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -229,7 +229,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, atomic_inc(&ttm_glob.bo_count); INIT_LIST_HEAD(&fbo->base.ddestroy); INIT_LIST_HEAD(&fbo->base.lru); - fbo->base.moving = NULL; drm_vma_node_reset(&fbo->base.base.vma_node); kref_init(&fbo->base.kref); @@ -496,9 +495,6 @@ static int ttm_bo_move_to_ghost(struct ttm_buffer_object *bo, * operation has completed. */ - dma_fence_put(bo->moving); - bo->moving = dma_fence_get(fence); - ret = ttm_buffer_object_transfer(bo, &ghost_obj); if (ret) return ret; @@ -543,9 +539,6 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo, spin_unlock(&from->move_lock); ttm_resource_free(bo, &bo->resource); - - dma_fence_put(bo->moving); - bo->moving = dma_fence_get(fence); } int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo, diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 08ba083a80d2..5b324f245265 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -46,17 +46,13 @@ static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, struct vm_fault *vmf) { - vm_fault_t ret = 0; - int err = 0; - - if (likely(!bo->moving)) - goto out_unlock; + long err = 0; /* * Quick non-stalling check for idle. */ - if (dma_fence_is_signaled(bo->moving)) - goto out_clear; + if (dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_KERNEL)) + return 0; /* * If possible, avoid waiting for GPU with mmap_lock @@ -64,34 +60,30 @@ static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, * is the first attempt. */ if (fault_flag_allow_retry_first(vmf->flags)) { - ret = VM_FAULT_RETRY; if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) - goto out_unlock; + return VM_FAULT_RETRY; ttm_bo_get(bo); mmap_read_unlock(vmf->vma->vm_mm); - (void) dma_fence_wait(bo->moving, true); + (void)dma_resv_wait_timeout(bo->base.resv, + DMA_RESV_USAGE_KERNEL, true, + MAX_SCHEDULE_TIMEOUT); dma_resv_unlock(bo->base.resv); ttm_bo_put(bo); - goto out_unlock; + return VM_FAULT_RETRY; } /* * Ordinary wait. */ - err = dma_fence_wait(bo->moving, true); - if (unlikely(err != 0)) { - ret = (err != -ERESTARTSYS) ? VM_FAULT_SIGBUS : + err = dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_KERNEL, true, + MAX_SCHEDULE_TIMEOUT); + if (unlikely(err < 0)) { + return (err != -ERESTARTSYS) ? 
VM_FAULT_SIGBUS : VM_FAULT_NOPAGE; - goto out_unlock; } -out_clear: - dma_fence_put(bo->moving); - bo->moving = NULL; - -out_unlock: - return ret; + return 0; } static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 9e3dcbb573e7..40cc2c13e963 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -1166,12 +1166,6 @@ int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start, *num_prefault = __KERNEL_DIV_ROUND_UP(last_cleaned - res_start, PAGE_SIZE); vmw_bo_fence_single(bo, NULL); - if (bo->moving) - dma_fence_put(bo->moving); - - return dma_resv_get_singleton(bo->base.resv, - DMA_RESV_USAGE_KERNEL, - &bo->moving); } return 0; diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index cd785cfa3123..9798eb097c13 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -98,7 +98,6 @@ struct ttm_tt; * @lru: List head for the lru list. * @ddestroy: List head for the delayed destroy list. * @swap: List head for swap LRU list. - * @moving: Fence set when BO is moving * @offset: The current GPU offset, which can have different meanings * depending on the memory type. For SYSTEM type memory, it should be 0. * @cur_placement: Hint of current placement. @@ -151,7 +150,6 @@ struct ttm_buffer_object { * Members protected by a bo reservation. */ - struct dma_fence *moving; unsigned priority; unsigned pin_count;
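To illustrate what replaces bo->moving on the driver side: pending moves and clears now live in the reservation object as DMA_RESV_USAGE_KERNEL fences instead of a single fence pointer. A minimal sketch; example_wait_for_move() is hypothetical, while the dma_resv call is the one this patch switches TTM and amdgpu over to:

/* Wait for any pending kernel move/clear on the BO (resv held). */
static int example_wait_for_move(struct ttm_buffer_object *bo)
{
	long ret;

	ret = dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_KERNEL,
				    true, MAX_SCHEDULE_TIMEOUT);
	/* > 0 means all kernel fences signaled; < 0 is an error. */
	return ret < 0 ? ret : 0;
}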