From patchwork Wed Sep 4 07:08:07 2024
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 13789846
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Matthew Brost, dri-devel@lists.freedesktop.org
Subject: [PATCH v4 1/2] drm/ttm: Move swapped objects off the manager's LRU list
Date: Wed, 4 Sep 2024 09:08:07 +0200
Message-ID: <20240904070808.95126-2-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: <20240904070808.95126-1-thomas.hellstrom@linux.intel.com>
References: <20240904070808.95126-1-thomas.hellstrom@linux.intel.com>
List-Id: Direct Rendering Infrastructure - Development

Resources of swapped objects remain on the TTM_PL_SYSTEM manager's
LRU list, which is bad for the LRU walk efficiency.
Rename the device-wide "pinned" list to "unevictable" and also move
resources of swapped-out objects to that list. An alternative would be
to create an "UNEVICTABLE" priority so that pinned and swapped objects
could stay on their respective manager's LRU list without affecting
the LRU walk efficiency.

v2:
- Remove a bogus WARN_ON (Christian König)
- Update ttm_resource_[add|del]_bulk_move() (Christian König)
- Fix TTM KUnit tests (Intel CI)
v3:
- Check for a non-NULL bo->resource in ttm_bo_populate().
v4:
- Don't move to the LRU tail during swapout until the resource is
  properly swapped or there was a swapout failure. (Intel CI)
- Add a newline after a checkpatch check.

Cc: Christian König
Cc: Matthew Brost
Cc:
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c  |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c    |  4 +-
 drivers/gpu/drm/ttm/tests/ttm_bo_test.c       |  4 +-
 drivers/gpu/drm/ttm/tests/ttm_resource_test.c |  6 +-
 drivers/gpu/drm/ttm/ttm_bo.c                  | 59 ++++++++++++++++++-
 drivers/gpu/drm/ttm/ttm_bo_util.c             |  6 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c               |  2 +-
 drivers/gpu/drm/ttm/ttm_device.c              |  4 +-
 drivers/gpu/drm/ttm/ttm_resource.c            | 15 +++--
 drivers/gpu/drm/ttm/ttm_tt.c                  |  3 +
 drivers/gpu/drm/xe/xe_bo.c                    |  4 +-
 include/drm/ttm/ttm_bo.h                      |  2 +
 include/drm/ttm/ttm_device.h                  |  5 +-
 include/drm/ttm/ttm_tt.h                      |  5 ++
 15 files changed, 96 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 5c72462d1f57..7de284766f82 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -808,7 +808,7 @@ static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
         }

         if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
-                ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
+                ret = ttm_bo_populate(bo, &ctx);
                 if (ret)
                         return ret;

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 03b00a03a634..041dab543b78 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -624,7 +624,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
         /* Populate ttm with pages if needed. Typically system memory.
          */
         if (ttm && (dst_man->use_tt || (ttm->page_flags & TTM_TT_FLAG_SWAPPED))) {
-                ret = ttm_tt_populate(bo->bdev, ttm, ctx);
+                ret = ttm_bo_populate(bo, ctx);
                 if (ret)
                         return ret;
         }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c
index ad649523d5e0..61596cecce4d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c
@@ -90,7 +90,7 @@ static int i915_ttm_backup(struct i915_gem_apply_to_region *apply,
                 goto out_no_lock;

         backup_bo = i915_gem_to_ttm(backup);
-        err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx);
+        err = ttm_bo_populate(backup_bo, &ctx);
         if (err)
                 goto out_no_populate;

@@ -189,7 +189,7 @@ static int i915_ttm_restore(struct i915_gem_apply_to_region *apply,
         if (!backup_bo->resource)
                 err = ttm_bo_validate(backup_bo, i915_ttm_sys_placement(), &ctx);
         if (!err)
-                err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx);
+                err = ttm_bo_populate(backup_bo, &ctx);
         if (!err) {
                 err = i915_gem_obj_copy_ttm(obj, backup, pm_apply->allow_gpu,
                                             false);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
index f0a7eb62116c..3139fd9128d8 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
@@ -308,11 +308,11 @@ static void ttm_bo_unreserve_pinned(struct kunit *test)
         err = ttm_resource_alloc(bo, place, &res2);
         KUNIT_ASSERT_EQ(test, err, 0);
         KUNIT_ASSERT_EQ(test,
-                        list_is_last(&res2->lru.link, &priv->ttm_dev->pinned), 1);
+                        list_is_last(&res2->lru.link, &priv->ttm_dev->unevictable), 1);

         ttm_bo_unreserve(bo);
         KUNIT_ASSERT_EQ(test,
-                        list_is_last(&res1->lru.link, &priv->ttm_dev->pinned), 1);
+                        list_is_last(&res1->lru.link, &priv->ttm_dev->unevictable), 1);

         ttm_resource_free(bo, &res1);
         ttm_resource_free(bo, &res2);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
index 22260e7aea58..a9f4b81921c3 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
@@ -164,18 +164,18 @@ static void ttm_resource_init_pinned(struct kunit *test)
         res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL);
         KUNIT_ASSERT_NOT_NULL(test, res);

-        KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->pinned));
+        KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->unevictable));

         dma_resv_lock(bo->base.resv, NULL);
         ttm_bo_pin(bo);
         ttm_resource_init(bo, place, res);
-        KUNIT_ASSERT_TRUE(test, list_is_singular(&bo->bdev->pinned));
+        KUNIT_ASSERT_TRUE(test, list_is_singular(&bo->bdev->unevictable));

         ttm_bo_unpin(bo);
         ttm_resource_fini(man, res);
         dma_resv_unlock(bo->base.resv);

-        KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->pinned));
+        KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->unevictable));
 }

 static void ttm_resource_fini_basic(struct kunit *test)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 320592435252..875b024913a0 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -139,7 +139,7 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
                 goto out_err;

         if (mem->mem_type != TTM_PL_SYSTEM) {
-                ret = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
+                ret = ttm_bo_populate(bo, ctx);
                 if (ret)
                         goto out_err;
         }
@@ -1128,9 +1128,20 @@ ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
         if (bo->bdev->funcs->swap_notify)
                 bo->bdev->funcs->swap_notify(bo);

-        if (ttm_tt_is_populated(bo->ttm))
+        if (ttm_tt_is_populated(bo->ttm)) {
+                spin_lock(&bo->bdev->lru_lock);
+                ttm_resource_del_bulk_move(bo->resource, bo);
+                spin_unlock(&bo->bdev->lru_lock);
+
                 ret = ttm_tt_swapout(bo->bdev, bo->ttm, swapout_walk->gfp_flags);

+                spin_lock(&bo->bdev->lru_lock);
+                if (ret)
+                        ttm_resource_add_bulk_move(bo->resource, bo);
+                ttm_resource_move_to_lru_tail(bo->resource);
+                spin_unlock(&bo->bdev->lru_lock);
+        }
+
 out:
         /* Consider -ENOMEM and -ENOSPC non-fatal. */
         if (ret == -ENOMEM || ret == -ENOSPC)
@@ -1180,3 +1191,47 @@ void ttm_bo_tt_destroy(struct ttm_buffer_object *bo)
         ttm_tt_destroy(bo->bdev, bo->ttm);
         bo->ttm = NULL;
 }
+
+/**
+ * ttm_bo_populate() - Ensure that a buffer object has backing pages
+ * @bo: The buffer object
+ * @ctx: The ttm_operation_ctx governing the operation.
+ *
+ * For buffer objects in a memory type whose manager uses
+ * struct ttm_tt for backing pages, ensure those backing pages
+ * are present and with valid content. The bo's resource is also
+ * placed on the correct LRU list if it was previously swapped
+ * out.
+ *
+ * Return: 0 if successful, negative error code on failure.
+ * Note: May return -EINTR or -ERESTARTSYS if @ctx::interruptible
+ * is set to true.
+ */
+int ttm_bo_populate(struct ttm_buffer_object *bo,
+                    struct ttm_operation_ctx *ctx)
+{
+        struct ttm_tt *tt = bo->ttm;
+        bool swapped;
+        int ret;
+
+        dma_resv_assert_held(bo->base.resv);
+
+        if (!tt)
+                return 0;
+
+        swapped = ttm_tt_is_swapped(tt);
+        ret = ttm_tt_populate(bo->bdev, tt, ctx);
+        if (ret)
+                return ret;
+
+        if (swapped && !ttm_tt_is_swapped(tt) && !bo->pin_count &&
+            bo->resource) {
+                spin_lock(&bo->bdev->lru_lock);
+                ttm_resource_add_bulk_move(bo->resource, bo);
+                ttm_resource_move_to_lru_tail(bo->resource);
+                spin_unlock(&bo->bdev->lru_lock);
+        }
+
+        return 0;
+}
+EXPORT_SYMBOL(ttm_bo_populate);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 3c07f4712d5c..d939925efa81 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -163,7 +163,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
         src_man = ttm_manager_type(bdev, src_mem->mem_type);
         if (ttm && ((ttm->page_flags & TTM_TT_FLAG_SWAPPED) ||
                     dst_man->use_tt)) {
-                ret = ttm_tt_populate(bdev, ttm, ctx);
+                ret = ttm_bo_populate(bo, ctx);
                 if (ret)
                         return ret;
         }
@@ -350,7 +350,7 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,

         BUG_ON(!ttm);

-        ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+        ret = ttm_bo_populate(bo, &ctx);
         if (ret)
                 return ret;

@@ -507,7 +507,7 @@ int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map)
                 pgprot_t prot;
                 void *vaddr;

-                ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+                ret = ttm_bo_populate(bo, &ctx);
                 if (ret)
                         return ret;

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 4212b8c91dd4..2c699ed1963a 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -224,7 +224,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
                 };

                 ttm = bo->ttm;
-                err = ttm_tt_populate(bdev, bo->ttm, &ctx);
+                err = ttm_bo_populate(bo, &ctx);
                 if (err) {
                         if (err == -EINTR || err == -ERESTARTSYS ||
                             err == -EAGAIN)
diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c
index e7cc4954c1bc..02e797fd1891 100644
--- a/drivers/gpu/drm/ttm/ttm_device.c
+++ b/drivers/gpu/drm/ttm/ttm_device.c
@@ -216,7 +216,7 @@ int ttm_device_init(struct ttm_device *bdev, const struct ttm_device_funcs *func
         bdev->vma_manager = vma_manager;
         spin_lock_init(&bdev->lru_lock);
-        INIT_LIST_HEAD(&bdev->pinned);
+        INIT_LIST_HEAD(&bdev->unevictable);
         bdev->dev_mapping = mapping;
         mutex_lock(&ttm_global_mutex);
         list_add_tail(&bdev->device_list, &glob->device_list);

@@ -283,7 +283,7 @@ void ttm_device_clear_dma_mappings(struct ttm_device *bdev)
         struct ttm_resource_manager *man;
         unsigned int i, j;

-        ttm_device_clear_lru_dma_mappings(bdev, &bdev->pinned);
+        ttm_device_clear_lru_dma_mappings(bdev, &bdev->unevictable);

         for (i = TTM_PL_SYSTEM; i < TTM_NUM_MEM_TYPES; ++i) {
                 man = ttm_manager_type(bdev, i);
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 6d764ba88aab..93b44043b428 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -30,6 +30,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -239,7 +240,8 @@ static void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
 void ttm_resource_add_bulk_move(struct ttm_resource *res,
                                 struct ttm_buffer_object *bo)
 {
-        if (bo->bulk_move && !bo->pin_count)
+        if (bo->bulk_move && !bo->pin_count &&
+            (!bo->ttm || !ttm_tt_is_swapped(bo->ttm)))
                 ttm_lru_bulk_move_add(bo->bulk_move, res);
 }

@@ -247,7 +249,8 @@ void ttm_resource_add_bulk_move(struct ttm_resource *res,
 void ttm_resource_del_bulk_move(struct ttm_resource *res,
                                 struct ttm_buffer_object *bo)
 {
-        if (bo->bulk_move && !bo->pin_count)
+        if (bo->bulk_move && !bo->pin_count &&
+            (!bo->ttm || !ttm_tt_is_swapped(bo->ttm)))
                 ttm_lru_bulk_move_del(bo->bulk_move, res);
 }

@@ -259,8 +262,8 @@ void ttm_resource_move_to_lru_tail(struct ttm_resource *res)

         lockdep_assert_held(&bo->bdev->lru_lock);

-        if (bo->pin_count) {
-                list_move_tail(&res->lru.link, &bdev->pinned);
+        if (bo->pin_count || (bo->ttm && ttm_tt_is_swapped(bo->ttm))) {
+                list_move_tail(&res->lru.link, &bdev->unevictable);

         } else if (bo->bulk_move) {
                 struct ttm_lru_bulk_move_pos *pos =
@@ -301,8 +304,8 @@ void ttm_resource_init(struct ttm_buffer_object *bo,

         man = ttm_manager_type(bo->bdev, place->mem_type);
         spin_lock(&bo->bdev->lru_lock);
-        if (bo->pin_count)
-                list_add_tail(&res->lru.link, &bo->bdev->pinned);
+        if (bo->pin_count || (bo->ttm && ttm_tt_is_swapped(bo->ttm)))
+                list_add_tail(&res->lru.link, &bo->bdev->unevictable);
         else
                 list_add_tail(&res->lru.link, &man->lru[bo->priority]);
         man->usage += res->size;
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 4b51b9023126..3baf215eca23 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -367,7 +367,10 @@ int ttm_tt_populate(struct ttm_device *bdev,
         }
         return ret;
 }
+
+#if IS_ENABLED(CONFIG_DRM_TTM_KUNIT_TEST)
 EXPORT_SYMBOL(ttm_tt_populate);
+#endif

 void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
 {
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index a8e4d46d9123..f34daae2cf2b 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -892,7 +892,7 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
                 }
         }

-        ret = ttm_tt_populate(bo->ttm.bdev, bo->ttm.ttm, &ctx);
+        ret = ttm_bo_populate(&bo->ttm, &ctx);
         if (ret)
                 goto err_res_free;

@@ -945,7 +945,7 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
         if (ret)
                 return ret;

-        ret = ttm_tt_populate(bo->ttm.bdev, bo->ttm.ttm, &ctx);
+        ret = ttm_bo_populate(&bo->ttm, &ctx);
         if (ret)
                 goto err_res_free;

diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
index 7b56d1ca36d7..5804408815be 100644
--- a/include/drm/ttm/ttm_bo.h
+++ b/include/drm/ttm/ttm_bo.h
@@ -462,5 +462,7 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo);
 pgprot_t ttm_io_prot(struct ttm_buffer_object *bo, struct ttm_resource *res,
                      pgprot_t tmp);
 void ttm_bo_tt_destroy(struct ttm_buffer_object *bo);
+int ttm_bo_populate(struct ttm_buffer_object *bo,
+                    struct ttm_operation_ctx *ctx);

 #endif
diff --git a/include/drm/ttm/ttm_device.h b/include/drm/ttm/ttm_device.h
index c22f30535c84..438358f72716 100644
--- a/include/drm/ttm/ttm_device.h
+++ b/include/drm/ttm/ttm_device.h
@@ -252,9 +252,10 @@ struct ttm_device {
         spinlock_t lru_lock;

         /**
-         * @pinned: Buffer objects which are pinned and so not on any LRU list.
+         * @unevictable Buffer objects which are pinned or swapped and as such
+         * not on an LRU list.
          */
-        struct list_head pinned;
+        struct list_head unevictable;

         /**
          * @dev_mapping: A pointer to the struct address_space for invalidating
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index 2b9d856ff388..991edafdb2dd 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -129,6 +129,11 @@ static inline bool ttm_tt_is_populated(struct ttm_tt *tt)
         return tt->page_flags & TTM_TT_FLAG_PRIV_POPULATED;
 }

+static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt)
+{
+        return tt->page_flags & TTM_TT_FLAG_SWAPPED;
+}
+
 /**
  * ttm_tt_create
  *
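[Editor's note: every populate call site above switches from ttm_tt_populate() to the new ttm_bo_populate() helper. Below is a minimal, hypothetical sketch of the resulting driver-side call pattern, based only on the signature and kerneldoc introduced by this patch; example_cpu_access_begin() is an invented name and not part of the series.]

#include <linux/dma-resv.h>
#include <drm/ttm/ttm_bo.h>

/* Hypothetical driver helper; not part of the series. */
static int example_cpu_access_begin(struct ttm_buffer_object *bo)
{
        struct ttm_operation_ctx ctx = {
                .interruptible = true,
                .no_wait_gpu = false,
        };

        /* The caller must hold the object's reservation lock. */
        dma_resv_assert_held(bo->base.resv);

        /*
         * Replaces the old ttm_tt_populate(bo->bdev, bo->ttm, &ctx) call.
         * Besides populating the ttm_tt, this moves a previously swapped
         * resource back onto its manager's LRU list. May return -EINTR or
         * -ERESTARTSYS since ctx.interruptible is true.
         */
        return ttm_bo_populate(bo, &ctx);
}

The design point is that only ttm_bo.c needs to know about the unevictable list; drivers keep calling a single populate helper and the LRU bookkeeping follows automatically.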
From patchwork Wed Sep 4 07:08:08 2024
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 13789847
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Matthew Brost, dri-devel@lists.freedesktop.org
Subject: [PATCH v4 2/2] drm/ttm: Move pinned objects off LRU lists when pinning
Date: Wed, 4 Sep 2024 09:08:08 +0200
Message-ID: <20240904070808.95126-3-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: <20240904070808.95126-1-thomas.hellstrom@linux.intel.com>
References: <20240904070808.95126-1-thomas.hellstrom@linux.intel.com>
List-Id: Direct Rendering Infrastructure - Development

The ttm_bo_pin() and ttm_bo_unpin() functions weren't moving their
resources off the LRU lists and onto the unevictable list. Make sure
that happens, so that pinned objects don't accidentally linger on the
LRU lists, and also make sure to move them back once they are unpinned.

v2:
- Removing from a bulk move must be done with the pin-count still zero.
v3:
- ttm_resource_move_to_lru_tail() must be done after pinning, with a
  non-NULL resource (Intel CI).

Cc: Christian König
Cc: Matthew Brost
Cc:
Signed-off-by: Thomas Hellström
Reviewed-by: Christian König #v1
---
 drivers/gpu/drm/ttm/ttm_bo.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 875b024913a0..4374d5eb1a98 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -594,7 +594,8 @@ void ttm_bo_pin(struct ttm_buffer_object *bo)
         spin_lock(&bo->bdev->lru_lock);
         if (bo->resource)
                 ttm_resource_del_bulk_move(bo->resource, bo);
-        ++bo->pin_count;
+        if (!bo->pin_count++ && bo->resource)
+                ttm_resource_move_to_lru_tail(bo->resource);
         spin_unlock(&bo->bdev->lru_lock);
 }
 EXPORT_SYMBOL(ttm_bo_pin);
@@ -613,9 +614,10 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo)
                 return;

         spin_lock(&bo->bdev->lru_lock);
-        --bo->pin_count;
-        if (bo->resource)
+        if (!--bo->pin_count && bo->resource) {
                 ttm_resource_add_bulk_move(bo->resource, bo);
+                ttm_resource_move_to_lru_tail(bo->resource);
+        }
         spin_unlock(&bo->bdev->lru_lock);
 }
 EXPORT_SYMBOL(ttm_bo_unpin);
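[Editor's note: a short, hypothetical sketch of the pin/unpin pattern this patch affects, modelled on the ttm_bo_test.c and ttm_resource_test.c call sites shown in patch 1; the example_* function names are invented for illustration.]

#include <linux/dma-resv.h>
#include <drm/ttm/ttm_bo.h>

/* Hypothetical helpers; not part of the series. */
static void example_pin_for_scanout(struct ttm_buffer_object *bo)
{
        dma_resv_lock(bo->base.resv, NULL);
        /* With this patch, the first pin also moves bo->resource to bdev->unevictable. */
        ttm_bo_pin(bo);
        dma_resv_unlock(bo->base.resv);
}

static void example_unpin_after_scanout(struct ttm_buffer_object *bo)
{
        dma_resv_lock(bo->base.resv, NULL);
        /* The last unpin returns the resource to its manager's LRU tail. */
        ttm_bo_unpin(bo);
        dma_resv_unlock(bo->base.resv);
}

With both patches applied, a resource sits on its manager's LRU only while it is neither pinned nor swapped out; otherwise it lives on the device-wide unevictable list and no longer slows down the LRU walk.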