From patchwork Tue May 21 07:16:19 2024 X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668940 Received: from unknown (HELO fedora..)
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:16:58 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 01/21] drm/ttm: Allow TTM LRU list nodes of different types Date: Tue, 21 May 2024 09:16:19 +0200 Message-ID: <20240521071639.77614-2-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" To be able to handle list unlocking while traversing the LRU list, we want the iterators not only to point to the next position of the list traversal, but to insert themselves as list nodes at that point to work around the fact that the next node might otherwise disappear from the list while the iterator is pointing to it. These list nodes need to be easily distinguishable from other list nodes so that others traversing the list can skip over them. So declare a struct ttm_lru_item, with a struct list_head member and a type enum. This will slightly increase the size of a struct ttm_resource. Changes in previous series: - Update enum ttm_lru_item_type documentation. v3: - Introduce ttm_lru_first_res_or_null() (Christian König, Thomas Hellström) Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström Reviewed-by: Matthew Brost Reviewed-by: Christian König --- drivers/gpu/drm/ttm/ttm_device.c | 4 +- drivers/gpu/drm/ttm/ttm_resource.c | 89 +++++++++++++++++++++++------- include/drm/ttm/ttm_resource.h | 54 +++++++++++++++++- 3 files changed, 125 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c index 434cf0258000..09411978a13a 100644 --- a/drivers/gpu/drm/ttm/ttm_device.c +++ b/drivers/gpu/drm/ttm/ttm_device.c @@ -274,14 +274,14 @@ static void ttm_device_clear_lru_dma_mappings(struct ttm_device *bdev, struct ttm_resource *res; spin_lock(&bdev->lru_lock); - while ((res = list_first_entry_or_null(list, typeof(*res), lru))) { + while ((res = ttm_lru_first_res_or_null(list))) { struct ttm_buffer_object *bo = res->bo; /* Take ref against racing releases once lru_lock is unlocked */ if (!ttm_bo_get_unless_zero(bo)) continue; - list_del_init(&res->lru); + list_del_init(&bo->resource->lru.link); spin_unlock(&bdev->lru_lock); if (bo->ttm) diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c index 4a66b851b67d..db9a7a3717c4 100644 --- a/drivers/gpu/drm/ttm/ttm_resource.c +++ b/drivers/gpu/drm/ttm/ttm_resource.c @@ -70,8 +70,8 @@ void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk) dma_resv_assert_held(pos->last->bo->base.resv); man = ttm_manager_type(pos->first->bo->bdev, i); - list_bulk_move_tail(&man->lru[j], &pos->first->lru, - &pos->last->lru); + list_bulk_move_tail(&man->lru[j], &pos->first->lru.link, + &pos->last->lru.link); } } } @@ -84,14 +84,38 @@ ttm_lru_bulk_move_pos(struct ttm_lru_bulk_move *bulk, struct ttm_resource *res) return 
&bulk->pos[res->mem_type][res->bo->priority]; } +/* Return the previous resource on the list (skip over non-resource list items) */ +static struct ttm_resource *ttm_lru_prev_res(struct ttm_resource *cur) +{ + struct ttm_lru_item *lru = &cur->lru; + + do { + lru = list_prev_entry(lru, link); + } while (!ttm_lru_item_is_res(lru)); + + return ttm_lru_item_to_res(lru); +} + +/* Return the next resource on the list (skip over non-resource list items) */ +static struct ttm_resource *ttm_lru_next_res(struct ttm_resource *cur) +{ + struct ttm_lru_item *lru = &cur->lru; + + do { + lru = list_next_entry(lru, link); + } while (!ttm_lru_item_is_res(lru)); + + return ttm_lru_item_to_res(lru); +} + /* Move the resource to the tail of the bulk move range */ static void ttm_lru_bulk_move_pos_tail(struct ttm_lru_bulk_move_pos *pos, struct ttm_resource *res) { if (pos->last != res) { if (pos->first == res) - pos->first = list_next_entry(res, lru); - list_move(&res->lru, &pos->last->lru); + pos->first = ttm_lru_next_res(res); + list_move(&res->lru.link, &pos->last->lru.link); pos->last = res; } } @@ -122,11 +146,11 @@ static void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk, pos->first = NULL; pos->last = NULL; } else if (pos->first == res) { - pos->first = list_next_entry(res, lru); + pos->first = ttm_lru_next_res(res); } else if (pos->last == res) { - pos->last = list_prev_entry(res, lru); + pos->last = ttm_lru_prev_res(res); } else { - list_move(&res->lru, &pos->last->lru); + list_move(&res->lru.link, &pos->last->lru.link); } } @@ -155,7 +179,7 @@ void ttm_resource_move_to_lru_tail(struct ttm_resource *res) lockdep_assert_held(&bo->bdev->lru_lock); if (bo->pin_count) { - list_move_tail(&res->lru, &bdev->pinned); + list_move_tail(&res->lru.link, &bdev->pinned); } else if (bo->bulk_move) { struct ttm_lru_bulk_move_pos *pos = @@ -166,7 +190,7 @@ void ttm_resource_move_to_lru_tail(struct ttm_resource *res) struct ttm_resource_manager *man; man = ttm_manager_type(bdev, res->mem_type); - list_move_tail(&res->lru, &man->lru[bo->priority]); + list_move_tail(&res->lru.link, &man->lru[bo->priority]); } } @@ -197,9 +221,9 @@ void ttm_resource_init(struct ttm_buffer_object *bo, man = ttm_manager_type(bo->bdev, place->mem_type); spin_lock(&bo->bdev->lru_lock); if (bo->pin_count) - list_add_tail(&res->lru, &bo->bdev->pinned); + list_add_tail(&res->lru.link, &bo->bdev->pinned); else - list_add_tail(&res->lru, &man->lru[bo->priority]); + list_add_tail(&res->lru.link, &man->lru[bo->priority]); man->usage += res->size; spin_unlock(&bo->bdev->lru_lock); } @@ -221,7 +245,7 @@ void ttm_resource_fini(struct ttm_resource_manager *man, struct ttm_device *bdev = man->bdev; spin_lock(&bdev->lru_lock); - list_del_init(&res->lru); + list_del_init(&res->lru.link); man->usage -= res->size; spin_unlock(&bdev->lru_lock); } @@ -472,14 +496,16 @@ struct ttm_resource * ttm_resource_manager_first(struct ttm_resource_manager *man, struct ttm_resource_cursor *cursor) { - struct ttm_resource *res; + struct ttm_lru_item *lru; lockdep_assert_held(&man->bdev->lru_lock); for (cursor->priority = 0; cursor->priority < TTM_MAX_BO_PRIORITY; ++cursor->priority) - list_for_each_entry(res, &man->lru[cursor->priority], lru) - return res; + list_for_each_entry(lru, &man->lru[cursor->priority], link) { + if (ttm_lru_item_is_res(lru)) + return ttm_lru_item_to_res(lru); + } return NULL; } @@ -498,15 +524,40 @@ ttm_resource_manager_next(struct ttm_resource_manager *man, struct ttm_resource_cursor *cursor, struct ttm_resource *res) { + struct 
ttm_lru_item *lru = &res->lru; + lockdep_assert_held(&man->bdev->lru_lock); - list_for_each_entry_continue(res, &man->lru[cursor->priority], lru) - return res; + list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) { + if (ttm_lru_item_is_res(lru)) + return ttm_lru_item_to_res(lru); + } for (++cursor->priority; cursor->priority < TTM_MAX_BO_PRIORITY; ++cursor->priority) - list_for_each_entry(res, &man->lru[cursor->priority], lru) - return res; + list_for_each_entry(lru, &man->lru[cursor->priority], link) { + if (ttm_lru_item_is_res(lru)) + ttm_lru_item_to_res(lru); + } + + return NULL; +} + +/** + * ttm_lru_first_res_or_null() - Return the first resource on an lru list + * @head: The list head of the lru list. + * + * Return: Pointer to the first resource on the lru list or NULL if + * there is none. + */ +struct ttm_resource *ttm_lru_first_res_or_null(struct list_head *head) +{ + struct ttm_lru_item *lru; + + list_for_each_entry(lru, head, link) { + if (ttm_lru_item_is_res(lru)) + return ttm_lru_item_to_res(lru); + } return NULL; } diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h index 69769355139f..1511d91e290d 100644 --- a/include/drm/ttm/ttm_resource.h +++ b/include/drm/ttm/ttm_resource.h @@ -49,6 +49,43 @@ struct io_mapping; struct sg_table; struct scatterlist; +/** + * enum ttm_lru_item_type - enumerate ttm_lru_item subclasses + */ +enum ttm_lru_item_type { + /** @TTM_LRU_RESOURCE: The resource subclass */ + TTM_LRU_RESOURCE, + /** @TTM_LRU_HITCH: The iterator hitch subclass */ + TTM_LRU_HITCH +}; + +/** + * struct ttm_lru_item - The TTM lru list node base class + * @link: The list link + * @type: The subclass type + */ +struct ttm_lru_item { + struct list_head link; + enum ttm_lru_item_type type; +}; + +/** + * ttm_lru_item_init() - initialize a struct ttm_lru_item + * @item: The item to initialize + * @type: The subclass type + */ +static inline void ttm_lru_item_init(struct ttm_lru_item *item, + enum ttm_lru_item_type type) +{ + item->type = type; + INIT_LIST_HEAD(&item->link); +} + +static inline bool ttm_lru_item_is_res(const struct ttm_lru_item *item) +{ + return item->type == TTM_LRU_RESOURCE; +} + struct ttm_resource_manager_func { /** * struct ttm_resource_manager_func member alloc @@ -217,9 +254,21 @@ struct ttm_resource { /** * @lru: Least recently used list, see &ttm_resource_manager.lru */ - struct list_head lru; + struct ttm_lru_item lru; }; +/** + * ttm_lru_item_to_res() - Downcast a struct ttm_lru_item to a struct ttm_resource + * @item: The struct ttm_lru_item to downcast + * + * Return: Pointer to the embedding struct ttm_resource + */ +static inline struct ttm_resource * +ttm_lru_item_to_res(struct ttm_lru_item *item) +{ + return container_of(item, struct ttm_resource, lru); +} + /** * struct ttm_resource_cursor * @@ -393,6 +442,9 @@ ttm_resource_manager_next(struct ttm_resource_manager *man, struct ttm_resource_cursor *cursor, struct ttm_resource *res); +struct ttm_resource * +ttm_lru_first_res_or_null(struct list_head *head); + /** * ttm_resource_manager_for_each_res - iterate over all resources * @man: the resource manager From patchwork Tue May 21 07:16:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668944 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org 
[131.252.210.177]) by smtp.lore.kernel.org (Postfix) with ESMTPS; Tue, 21 May 2024 07:17:39 +0000 (UTC) From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 02/21] drm/ttm: Slightly clean up LRU list iteration Date: Tue, 21 May 2024 09:16:20 +0200 Message-ID: <20240521071639.77614-3-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> To make the transition to using lru hitches easier, simplify the ttm_resource_manager_next() interface to only take the cursor and reuse ttm_resource_manager_next() functionality from ttm_resource_manager_first().
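
For illustration, a caller of the reworked interface might look like the following minimal sketch. The helper name count_lru_pages() is hypothetical; ttm_resource_manager_for_each_res(), the cursor and the lru_lock requirement come from the patch itself.

/*
 * Minimal sketch, not part of the patch: walk a manager's LRU with the
 * cursor-only iteration interface. count_lru_pages() is a made-up helper.
 */
#include <linux/pfn.h>
#include <linux/spinlock.h>
#include <drm/ttm/ttm_device.h>
#include <drm/ttm/ttm_resource.h>

static unsigned long count_lru_pages(struct ttm_device *bdev,
                                     struct ttm_resource_manager *man)
{
        struct ttm_resource_cursor cursor;
        struct ttm_resource *res;
        unsigned long pages = 0;

        /* The iteration must run under the device's LRU lock. */
        spin_lock(&bdev->lru_lock);
        ttm_resource_manager_for_each_res(man, &cursor, res)
                pages += PFN_UP(res->size);
        spin_unlock(&bdev->lru_lock);

        return pages;
}

Note that the hitch-based variant introduced later in the series additionally requires the caller to finalize the cursor with ttm_resource_cursor_fini() when done iterating.
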
Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström Reviewed-by: Matthew Brost Reviewed-by: Christian König --- drivers/gpu/drm/ttm/ttm_resource.c | 48 +++++++++++++----------------- include/drm/ttm/ttm_resource.h | 10 ++++--- 2 files changed, 27 insertions(+), 31 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c index db9a7a3717c4..8bfbddddc0e8 100644 --- a/drivers/gpu/drm/ttm/ttm_resource.c +++ b/drivers/gpu/drm/ttm/ttm_resource.c @@ -496,50 +496,44 @@ struct ttm_resource * ttm_resource_manager_first(struct ttm_resource_manager *man, struct ttm_resource_cursor *cursor) { - struct ttm_lru_item *lru; - lockdep_assert_held(&man->bdev->lru_lock); - for (cursor->priority = 0; cursor->priority < TTM_MAX_BO_PRIORITY; - ++cursor->priority) - list_for_each_entry(lru, &man->lru[cursor->priority], link) { - if (ttm_lru_item_is_res(lru)) - return ttm_lru_item_to_res(lru); - } - - return NULL; + cursor->priority = 0; + cursor->man = man; + cursor->cur = &man->lru[cursor->priority]; + return ttm_resource_manager_next(cursor); } /** * ttm_resource_manager_next * - * @man: resource manager to iterate over * @cursor: cursor to record the position - * @res: the current resource pointer * - * Returns the next resource from the resource manager. + * Return: the next resource from the resource manager. */ struct ttm_resource * -ttm_resource_manager_next(struct ttm_resource_manager *man, - struct ttm_resource_cursor *cursor, - struct ttm_resource *res) +ttm_resource_manager_next(struct ttm_resource_cursor *cursor) { - struct ttm_lru_item *lru = &res->lru; + struct ttm_resource_manager *man = cursor->man; + struct ttm_lru_item *lru; lockdep_assert_held(&man->bdev->lru_lock); - list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) { - if (ttm_lru_item_is_res(lru)) - return ttm_lru_item_to_res(lru); - } - - for (++cursor->priority; cursor->priority < TTM_MAX_BO_PRIORITY; - ++cursor->priority) - list_for_each_entry(lru, &man->lru[cursor->priority], link) { - if (ttm_lru_item_is_res(lru)) - ttm_lru_item_to_res(lru); + for (;;) { + lru = list_entry(cursor->cur, typeof(*lru), link); + list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) { + if (ttm_lru_item_is_res(lru)) { + cursor->cur = &lru->link; + return ttm_lru_item_to_res(lru); + } } + if (++cursor->priority >= TTM_MAX_BO_PRIORITY) + break; + + cursor->cur = &man->lru[cursor->priority]; + } + return NULL; } diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h index 1511d91e290d..7d81fd5b5b83 100644 --- a/include/drm/ttm/ttm_resource.h +++ b/include/drm/ttm/ttm_resource.h @@ -272,11 +272,15 @@ ttm_lru_item_to_res(struct ttm_lru_item *item) /** * struct ttm_resource_cursor * + * @man: The resource manager currently being iterated over. + * @cur: The list head the cursor currently points to. * @priority: the current priority * * Cursor to iterate over the resources in a manager. 
*/ struct ttm_resource_cursor { + struct ttm_resource_manager *man; + struct list_head *cur; unsigned int priority; }; @@ -438,9 +442,7 @@ struct ttm_resource * ttm_resource_manager_first(struct ttm_resource_manager *man, struct ttm_resource_cursor *cursor); struct ttm_resource * -ttm_resource_manager_next(struct ttm_resource_manager *man, - struct ttm_resource_cursor *cursor, - struct ttm_resource *res); +ttm_resource_manager_next(struct ttm_resource_cursor *cursor); struct ttm_resource * ttm_lru_first_res_or_null(struct list_head *head); @@ -455,7 +457,7 @@ ttm_lru_first_res_or_null(struct list_head *head); */ #define ttm_resource_manager_for_each_res(man, cursor, res) \ for (res = ttm_resource_manager_first(man, cursor); res; \ - res = ttm_resource_manager_next(man, cursor, res)) + res = ttm_resource_manager_next(cursor)) struct ttm_kmap_iter * ttm_kmap_iter_iomap_init(struct ttm_kmap_iter_iomap *iter_io, From patchwork Tue May 21 07:16:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668941 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7D44EC25B74 for ; Tue, 21 May 2024 07:17:32 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BE9C210E31F; Tue, 21 May 2024 07:17:30 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="CKdisOTq"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id E715410E34B; Tue, 21 May 2024 07:17:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275842; x=1747811842; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZvZ1BphXYciURwEKa+fmU+EBguAVfZYndYSE48drrxo=; b=CKdisOTq6wyRcQOw5D7VwvfEQrwMgf007v8Pemh2r5Ud0f2tzlTmilp8 nn8zFFWJ4nNhF8bCTmr4mfGqbfM5xqoRR4iEGfJWEea4QnGS0upYldx/2 lxFq2ytCHxNfLPbQF1oHN/FFDjfzeCmTYEQhbY+DEie/k9qI47YTgp7Fd AEYwDi+4HPeNmFyyllejOC0r8RsvG8D3b4m9FQbo7gewxE0a9+hga9xq/ ofMnJkiaBtUngY8g4+Y/E+RWhJ7e1zwz28la1CJ2FHHLGf+OrWLZ4PMD3 VjjWhjnNGLexQUOMJFypuYUVhpHZQAcRogNfxgHbOVltW1e0iI9vxJOJq A==; X-CSE-ConnectionGUID: EqpyVjouQ8e+P21bsJGkMg== X-CSE-MsgGUID: 4jvF7xV9TPeeYqGHv6Pc+A== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393447" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393447" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:03 -0700 X-CSE-ConnectionGUID: IGgZOZMJRh2IMHuGgHo/pQ== X-CSE-MsgGUID: uSNpczv+T5WdoctEQlv4GA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336668" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:02 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 03/21] drm/ttm: Use LRU hitches Date: Tue, 21 May 2024 09:16:21 +0200 Message-ID: <20240521071639.77614-4-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Have iterators insert themselves into the list they are iterating over using hitch list nodes. Since only the iterator owner can remove these list nodes from the list, it's safe to unlock the list and when continuing, use them as a starting point. Due to the way LRU bumping works in TTM, newly added items will not be missed, and bumped items will be iterated over a second time before reaching the end of the list. The exception is list with bulk move sublists. When bumping a sublist, a hitch that is part of that sublist will also be moved and we might miss items if restarting from it. This will be addressed in a later patch. Changes in previous series: - Updated ttm_resource_cursor_fini() documentation. v2: - Don't reorder ttm_resource_manager_first() and _next(). (Christian König). - Use list_add instead of list_move (Christian König) v3: - Split into two patches, one cleanup, one new functionality (Christian König) - use ttm_resource_cursor_fini_locked() instead of open-coding (Matthew Brost) Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström Reviewed-by: Matthew Brost --- drivers/gpu/drm/ttm/ttm_bo.c | 1 + drivers/gpu/drm/ttm/ttm_device.c | 9 +++-- drivers/gpu/drm/ttm/ttm_resource.c | 56 +++++++++++++++++++++++++----- include/drm/ttm/ttm_resource.h | 9 +++-- 4 files changed, 62 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 6396dece0db1..43eda720657f 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -621,6 +621,7 @@ int ttm_mem_evict_first(struct ttm_device *bdev, if (locked) dma_resv_unlock(res->bo->base.resv); } + ttm_resource_cursor_fini_locked(&cursor); if (!bo) { if (busy_bo && !ttm_bo_get_unless_zero(busy_bo)) diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c index 09411978a13a..f9e9b1ec8c8a 100644 --- a/drivers/gpu/drm/ttm/ttm_device.c +++ b/drivers/gpu/drm/ttm/ttm_device.c @@ -170,12 +170,17 @@ int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, num_pages = PFN_UP(bo->base.size); ret = ttm_bo_swapout(bo, ctx, gfp_flags); /* ttm_bo_swapout has dropped the lru_lock */ - if (!ret) + if (!ret) { + ttm_resource_cursor_fini(&cursor); return num_pages; - if (ret != -EBUSY) + } + if (ret != -EBUSY) { + ttm_resource_cursor_fini(&cursor); return ret; + } } } + ttm_resource_cursor_fini_locked(&cursor); spin_unlock(&bdev->lru_lock); return 0; } diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c 
index 8bfbddddc0e8..9c8b6499edfb 100644 --- a/drivers/gpu/drm/ttm/ttm_resource.c +++ b/drivers/gpu/drm/ttm/ttm_resource.c @@ -33,6 +33,37 @@ #include +/** + * ttm_resource_cursor_fini_locked() - Finalize the LRU list cursor usage + * @cursor: The struct ttm_resource_cursor to finalize. + * + * The function pulls the LRU list cursor off any lists it was previusly + * attached to. Needs to be called with the LRU lock held. The function + * can be called multiple times after eachother. + */ +void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor) +{ + lockdep_assert_held(&cursor->man->bdev->lru_lock); + list_del_init(&cursor->hitch.link); +} + +/** + * ttm_resource_cursor_fini() - Finalize the LRU list cursor usage + * @cursor: The struct ttm_resource_cursor to finalize. + * + * The function pulls the LRU list cursor off any lists it was previusly + * attached to. Needs to be called without the LRU list lock held. The + * function can be called multiple times after eachother. + */ +void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor) +{ + spinlock_t *lru_lock = &cursor->man->bdev->lru_lock; + + spin_lock(lru_lock); + ttm_resource_cursor_fini_locked(cursor); + spin_unlock(lru_lock); +} + /** * ttm_lru_bulk_move_init - initialize a bulk move structure * @bulk: the structure to init @@ -485,12 +516,15 @@ void ttm_resource_manager_debug(struct ttm_resource_manager *man, EXPORT_SYMBOL(ttm_resource_manager_debug); /** - * ttm_resource_manager_first - * + * ttm_resource_manager_first() - Start iterating over the resources + * of a resource manager * @man: resource manager to iterate over * @cursor: cursor to record the position * - * Returns the first resource from the resource manager. + * Initializes the cursor and starts iterating. When done iterating, + * the caller must explicitly call ttm_resource_cursor_fini(). + * + * Return: The first resource from the resource manager. */ struct ttm_resource * ttm_resource_manager_first(struct ttm_resource_manager *man, @@ -500,13 +534,15 @@ ttm_resource_manager_first(struct ttm_resource_manager *man, cursor->priority = 0; cursor->man = man; - cursor->cur = &man->lru[cursor->priority]; + ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH); + list_add(&cursor->hitch.link, &man->lru[cursor->priority]); + return ttm_resource_manager_next(cursor); } /** - * ttm_resource_manager_next - * + * ttm_resource_manager_next() - Continue iterating over the resource manager + * resources * @cursor: cursor to record the position * * Return: the next resource from the resource manager. 
@@ -520,10 +556,10 @@ ttm_resource_manager_next(struct ttm_resource_cursor *cursor) lockdep_assert_held(&man->bdev->lru_lock); for (;;) { - lru = list_entry(cursor->cur, typeof(*lru), link); + lru = &cursor->hitch; list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) { if (ttm_lru_item_is_res(lru)) { - cursor->cur = &lru->link; + list_move(&cursor->hitch.link, &lru->link); return ttm_lru_item_to_res(lru); } } @@ -531,9 +567,11 @@ ttm_resource_manager_next(struct ttm_resource_cursor *cursor) if (++cursor->priority >= TTM_MAX_BO_PRIORITY) break; - cursor->cur = &man->lru[cursor->priority]; + list_move(&cursor->hitch.link, &man->lru[cursor->priority]); } + ttm_resource_cursor_fini_locked(cursor); + return NULL; } diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h index 7d81fd5b5b83..8fac781f641e 100644 --- a/include/drm/ttm/ttm_resource.h +++ b/include/drm/ttm/ttm_resource.h @@ -273,17 +273,22 @@ ttm_lru_item_to_res(struct ttm_lru_item *item) * struct ttm_resource_cursor * * @man: The resource manager currently being iterated over. - * @cur: The list head the cursor currently points to. + * @hitch: A hitch list node inserted before the next resource + * to iterate over. * @priority: the current priority * * Cursor to iterate over the resources in a manager. */ struct ttm_resource_cursor { struct ttm_resource_manager *man; - struct list_head *cur; + struct ttm_lru_item hitch; unsigned int priority; }; +void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor); + +void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor); + /** * struct ttm_lru_bulk_move_pos * From patchwork Tue May 21 07:16:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668943 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 110BBC25B7C for ; Tue, 21 May 2024 07:17:36 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3146310E38A; Tue, 21 May 2024 07:17:33 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="O0fx5wqz"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id E7DA710E35B; Tue, 21 May 2024 07:17:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275842; x=1747811842; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=O4cOlVH2zZPfTp5odDBD2fqe2RvVqYX9td2gN7Yq0n4=; b=O0fx5wqzcI+a4K7kuNCFswCyQWFa95DnK5aNLQ9EEqflZ1gCvuodSFGP oi8OkjqQCw4k8LkTy3VvGy+b8k4DjWD+B8ldUQDga2vffFY1oyb38YUm1 V8HDN+ACYuxVgnv/iNlW43Pu1Mg509ukGq2LAmSKdwlP02Jya++5MIeco VMOPKlNeNAS2ZEHGm/63xOmbwjEP2oJ9OsIYRW+ZoqgfgfCKLOqWJqS4c G1fcPm+0DGkx7nEgcHIHfwYpwpHZ8vWmxV2afIJ6u0uJtPmRGjgcC2nNP oPotWL5ac0KbZhMbFjEgAdgNEctQhU282wwKq964EmEM1dWLNIka+K6pK A==; X-CSE-ConnectionGUID: fhlPbelhQ0mwwE0iQIC9qQ== X-CSE-MsgGUID: pgYv5mgzTPaxJwLW8/WJ6A== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393449" X-IronPort-AV: 
E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393449" From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 04/21] drm/ttm, drm/amdgpu, drm/xe: Consider hitch moves within bulk sublist moves Date: Tue, 21 May 2024 09:16:22 +0200 Message-ID: <20240521071639.77614-5-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> To address the problem with hitches moving when bulk move sublists are lru-bumped, register the list cursors with the ttm_lru_bulk_move structure when traversing its list, and when lru-bumping the list, move the cursor hitch to the tail. This also means it's mandatory for drivers to call ttm_lru_bulk_move_init() and ttm_lru_bulk_move_fini() when initializing and finalizing the bulk move structure, so add those calls to the amdgpu- and xe driver. Compared to v1 this is slightly more code but less fragile and hopefully easier to understand. Changes in previous series: - Completely rework the functionality - Avoid a NULL pointer dereference assigning manager->mem_type - Remove some leftover code causing build problems v2: - For hitch bulk tail moves, store the mem_type in the cursor instead of with the manager. v3: - Remove leftover mem_type member from change in v2.
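
As a sketch of what the now-mandatory init/fini pairing looks like from a driver's point of view, the fragment below mirrors the amdgpu/xe changes in this patch. struct my_vm and the function names are hypothetical; ttm_lru_bulk_move_init() and ttm_lru_bulk_move_fini() are the TTM calls this patch requires.

/*
 * Sketch, not from the patch: a driver "VM" object owning a bulk move.
 * The bulk move must be initialized before any BO uses it and finalized
 * (with the ttm_device) before the structure is freed.
 */
#include <drm/ttm/ttm_device.h>
#include <drm/ttm/ttm_resource.h>

struct my_vm {
        struct ttm_lru_bulk_move lru_bulk_move;
        /* ... other driver state ... */
};

static void my_vm_init(struct ttm_device *bdev, struct my_vm *vm)
{
        /* Must be called before the bulk move is used for any BO. */
        ttm_lru_bulk_move_init(&vm->lru_bulk_move);
}

static void my_vm_fini(struct ttm_device *bdev, struct my_vm *vm)
{
        /*
         * Drops any cursors still attached to the bulk move and sanity
         * checks that no resources remain; mandatory before freeing.
         */
        ttm_lru_bulk_move_fini(bdev, &vm->lru_bulk_move);
}
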
Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 ++ drivers/gpu/drm/ttm/ttm_resource.c | 89 ++++++++++++++++++++++++++ drivers/gpu/drm/xe/xe_vm.c | 4 ++ include/drm/ttm/ttm_resource.h | 56 ++++++++++------ 4 files changed, 132 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index 4e2391c83d7c..6293f3b54b4a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -2422,6 +2422,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, if (r) return r; + ttm_lru_bulk_move_init(&vm->lru_bulk_move); + vm->is_compute_context = false; vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode & @@ -2486,6 +2488,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, error_free_delayed: dma_fence_put(vm->last_tlb_flush); dma_fence_put(vm->last_unlocked); + ttm_lru_bulk_move_fini(&adev->mman.bdev, &vm->lru_bulk_move); amdgpu_vm_fini_entities(vm); return r; @@ -2642,6 +2645,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm) } } + ttm_lru_bulk_move_fini(&adev->mman.bdev, &vm->lru_bulk_move); } /** diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c index 9c8b6499edfb..a03090683e79 100644 --- a/drivers/gpu/drm/ttm/ttm_resource.c +++ b/drivers/gpu/drm/ttm/ttm_resource.c @@ -33,6 +33,49 @@ #include +/* Detach the cursor from the bulk move list*/ +static void +ttm_resource_cursor_clear_bulk(struct ttm_resource_cursor *cursor) +{ + cursor->bulk = NULL; + list_del_init(&cursor->bulk_link); +} + +/* Move the cursor to the end of the bulk move list it's in */ +static void ttm_resource_cursor_move_bulk_tail(struct ttm_lru_bulk_move *bulk, + struct ttm_resource_cursor *cursor) +{ + struct ttm_lru_bulk_move_pos *pos; + + if (WARN_ON_ONCE(bulk != cursor->bulk)) { + list_del_init(&cursor->bulk_link); + return; + } + + pos = &bulk->pos[cursor->mem_type][cursor->priority]; + if (pos) + list_move(&cursor->hitch.link, &pos->last->lru.link); + ttm_resource_cursor_clear_bulk(cursor); +} + +/* Move all cursors attached to a bulk move to its end */ +static void ttm_bulk_move_adjust_cursors(struct ttm_lru_bulk_move *bulk) +{ + struct ttm_resource_cursor *cursor, *next; + + list_for_each_entry_safe(cursor, next, &bulk->cursor_list, bulk_link) + ttm_resource_cursor_move_bulk_tail(bulk, cursor); +} + +/* Remove a cursor from an empty bulk move list */ +static void ttm_bulk_move_drop_cursors(struct ttm_lru_bulk_move *bulk) +{ + struct ttm_resource_cursor *cursor, *next; + + list_for_each_entry_safe(cursor, next, &bulk->cursor_list, bulk_link) + ttm_resource_cursor_clear_bulk(cursor); +} + /** * ttm_resource_cursor_fini_locked() - Finalize the LRU list cursor usage * @cursor: The struct ttm_resource_cursor to finalize. 
@@ -45,6 +88,7 @@ void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor) { lockdep_assert_held(&cursor->man->bdev->lru_lock); list_del_init(&cursor->hitch.link); + ttm_resource_cursor_clear_bulk(cursor); } /** @@ -73,9 +117,27 @@ void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor) void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk) { memset(bulk, 0, sizeof(*bulk)); + INIT_LIST_HEAD(&bulk->cursor_list); } EXPORT_SYMBOL(ttm_lru_bulk_move_init); +/** + * ttm_lru_bulk_move_fini - finalize a bulk move structure + * @bdev: The struct ttm_device + * @bulk: the structure to finalize + * + * Sanity checks that bulk moves don't have any + * resources left and hence no cursors attached. + */ +void ttm_lru_bulk_move_fini(struct ttm_device *bdev, + struct ttm_lru_bulk_move *bulk) +{ + spin_lock(&bdev->lru_lock); + ttm_bulk_move_drop_cursors(bulk); + spin_unlock(&bdev->lru_lock); +} +EXPORT_SYMBOL(ttm_lru_bulk_move_fini); + /** * ttm_lru_bulk_move_tail - bulk move range of resources to the LRU tail. * @@ -88,6 +150,7 @@ void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk) { unsigned i, j; + ttm_bulk_move_adjust_cursors(bulk); for (i = 0; i < TTM_NUM_MEM_TYPES; ++i) { for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) { struct ttm_lru_bulk_move_pos *pos = &bulk->pos[i][j]; @@ -515,6 +578,29 @@ void ttm_resource_manager_debug(struct ttm_resource_manager *man, } EXPORT_SYMBOL(ttm_resource_manager_debug); +static void +ttm_resource_cursor_check_bulk(struct ttm_resource_cursor *cursor, + struct ttm_lru_item *next_lru) +{ + struct ttm_resource *next = ttm_lru_item_to_res(next_lru); + struct ttm_lru_bulk_move *bulk = NULL; + struct ttm_buffer_object *bo = next->bo; + + lockdep_assert_held(&cursor->man->bdev->lru_lock); + if (bo && bo->resource == next) + bulk = bo->bulk_move; + + if (cursor->bulk != bulk) { + if (bulk) { + list_move_tail(&cursor->bulk_link, &bulk->cursor_list); + cursor->mem_type = next->mem_type; + } else { + list_del_init(&cursor->bulk_link); + } + cursor->bulk = bulk; + } +} + /** * ttm_resource_manager_first() - Start iterating over the resources * of a resource manager @@ -535,6 +621,7 @@ ttm_resource_manager_first(struct ttm_resource_manager *man, cursor->priority = 0; cursor->man = man; ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH); + INIT_LIST_HEAD(&cursor->bulk_link); list_add(&cursor->hitch.link, &man->lru[cursor->priority]); return ttm_resource_manager_next(cursor); @@ -559,6 +646,7 @@ ttm_resource_manager_next(struct ttm_resource_cursor *cursor) lru = &cursor->hitch; list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) { if (ttm_lru_item_is_res(lru)) { + ttm_resource_cursor_check_bulk(cursor, lru); list_move(&cursor->hitch.link, &lru->link); return ttm_lru_item_to_res(lru); } @@ -568,6 +656,7 @@ ttm_resource_manager_next(struct ttm_resource_cursor *cursor) break; list_move(&cursor->hitch.link, &man->lru[cursor->priority]); + ttm_resource_cursor_clear_bulk(cursor); } ttm_resource_cursor_fini_locked(cursor); diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c index c5b1694b292f..e2ec148c9c33 100644 --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -1339,6 +1339,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) INIT_WORK(&vm->destroy_work, vm_destroy_work_func); + ttm_lru_bulk_move_init(&vm->lru_bulk_move); + INIT_LIST_HEAD(&vm->preempt.exec_queues); vm->preempt.min_run_period_ms = 10; /* FIXME: Wire up to uAPI */ @@ -1456,6 +1458,7 @@ struct xe_vm *xe_vm_create(struct xe_device 
*xe, u32 flags) mutex_destroy(&vm->snap_mutex); for_each_tile(tile, xe, id) xe_range_fence_tree_fini(&vm->rftree[id]); + ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); kfree(vm); if (!(flags & XE_VM_FLAG_MIGRATION)) xe_pm_runtime_put(xe); @@ -1599,6 +1602,7 @@ static void vm_destroy_work_func(struct work_struct *w) XE_WARN_ON(vm->pt_root[id]); trace_xe_vm_free(vm); + ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); kfree(vm); } diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h index 8fac781f641e..571abb4861a6 100644 --- a/include/drm/ttm/ttm_resource.h +++ b/include/drm/ttm/ttm_resource.h @@ -269,26 +269,6 @@ ttm_lru_item_to_res(struct ttm_lru_item *item) return container_of(item, struct ttm_resource, lru); } -/** - * struct ttm_resource_cursor - * - * @man: The resource manager currently being iterated over. - * @hitch: A hitch list node inserted before the next resource - * to iterate over. - * @priority: the current priority - * - * Cursor to iterate over the resources in a manager. - */ -struct ttm_resource_cursor { - struct ttm_resource_manager *man; - struct ttm_lru_item hitch; - unsigned int priority; -}; - -void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor); - -void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor); - /** * struct ttm_lru_bulk_move_pos * @@ -304,8 +284,9 @@ struct ttm_lru_bulk_move_pos { /** * struct ttm_lru_bulk_move - * * @pos: first/last lru entry for resources in the each domain/priority + * @cursor_list: The list of cursors currently traversing any of + * the sublists of @pos. Protected by the ttm device's lru_lock. * * Container for the current bulk move state. Should be used with * ttm_lru_bulk_move_init() and ttm_bo_set_bulk_move(). @@ -315,8 +296,39 @@ struct ttm_lru_bulk_move_pos { */ struct ttm_lru_bulk_move { struct ttm_lru_bulk_move_pos pos[TTM_NUM_MEM_TYPES][TTM_MAX_BO_PRIORITY]; + struct list_head cursor_list; }; +/** + * struct ttm_resource_cursor + * @man: The resource manager currently being iterated over + * @hitch: A hitch list node inserted before the next resource + * to iterate over. + * @bulk_link: A list link for the list of cursors traversing the + * bulk sublist of @bulk. Protected by the ttm device's lru_lock. + * @bulk: Pointer to struct ttm_lru_bulk_move whose subrange @hitch is + * inserted to. NULL if none. Never dereference this pointer since + * the struct ttm_lru_bulk_move object pointed to might have been + * freed. The pointer is only for comparison. + * @mem_type: The memory type of the LRU list being traversed. + * This field is valid iff @bulk != NULL. + * @priority: the current priority + * + * Cursor to iterate over the resources in a manager. + */ +struct ttm_resource_cursor { + struct ttm_resource_manager *man; + struct ttm_lru_item hitch; + struct list_head bulk_link; + struct ttm_lru_bulk_move *bulk; + unsigned int mem_type; + unsigned int priority; +}; + +void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor); + +void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor); + /** * struct ttm_kmap_iter_iomap - Specialization for a struct io_mapping + * struct sg_table backed struct ttm_resource. 
@@ -405,6 +417,8 @@ ttm_resource_manager_cleanup(struct ttm_resource_manager *man) void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk); void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk); +void ttm_lru_bulk_move_fini(struct ttm_device *bdev, + struct ttm_lru_bulk_move *bulk); void ttm_resource_add_bulk_move(struct ttm_resource *res, struct ttm_buffer_object *bo); From patchwork Tue May 21 07:16:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668955 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 078D7C25B74 for ; Tue, 21 May 2024 07:18:10 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A612110E442; Tue, 21 May 2024 07:18:07 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="EReYwhGc"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id E839B10E36B; Tue, 21 May 2024 07:17:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275842; x=1747811842; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=x+27eBShM9N8y9seRdQ7YZp/ZaEfV5hWmImaaFr9wFo=; b=EReYwhGcPSd/AIfQHcxyYlYz1lpVPDhV5+uShpZJdGq/VHG4wCSP0faA h6ri+KUl170gohuMcPqgivIomsQpIzZYwb2GjjC/HRdsTBK7ydZ9oLEtm xcFZfv7DK1dqvYxwEXvefBfpGkDrTdQ9uT67e1K7ow+RL3Wvo5K2IPRDt Psbb1cY90XaodBdhbM6Wl9rzzfk4EO9jgEAL/n5/Q0bXfYNE3osP2sDK9 Nr8BjRt1pkcrVnbQcg6eEjDINvkY7acDYvuhNDm3ZtTlsAXwkJ7Se6Nt/ s6KxBNMza8vFTfZ69faOLjX3gIz1SppRs/gCM+WYNC8DH7kq059D731eY w==; X-CSE-ConnectionGUID: fzFuLkRGSQq355YRVya7cw== X-CSE-MsgGUID: pDE2Fiv4QeOKUFm+0u6ieg== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393451" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393451" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:07 -0700 X-CSE-ConnectionGUID: JqeEgu++T3WfB6g2x/z+sg== X-CSE-MsgGUID: lenJZEykRoq4azRu+p578g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336682" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:06 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 05/21] drm/ttm: Provide a generic LRU walker helper Date: Tue, 21 May 2024 09:16:23 +0200 Message-ID: <20240521071639.77614-6-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Provide a generic LRU walker in TTM, in the spirit of drm_gem_lru_scan() but building on the restartable TTM LRU functionality. The LRU walker optionally supports locking objects as part of a ww mutex locking transaction, to mimic to some extent the current functionality in ttm. However any -EDEADLK return is converted to -ENOMEM, so that the driver will need to back off and possibly retry without being able to keep the ticket. v3: - Move the helper to core ttm. - Remove the drm_exec usage from it for now, it will be reintroduced later in the series. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_bo_util.c | 145 ++++++++++++++++++++++++++++++ include/drm/ttm/ttm_bo.h | 32 +++++++ 2 files changed, 177 insertions(+) diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 0b3f4267130c..be200c06cc79 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -768,3 +768,148 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo) ttm_tt_destroy(bo->bdev, ttm); return ret; } + +static bool ttm_lru_walk_trylock(struct ttm_lru_walk *walk, + struct ttm_buffer_object *bo, + bool *needs_unlock) +{ + struct ttm_operation_ctx *ctx = walk->ctx; + + *needs_unlock = false; + + if (dma_resv_trylock(bo->base.resv)) { + *needs_unlock = true; + return true; + } + + if (bo->base.resv == ctx->resv && ctx->allow_res_evict) { + dma_resv_assert_held(bo->base.resv); + return true; + } + + return false; +} + +static int ttm_lru_walk_ticketlock(struct ttm_lru_walk *walk, + struct ttm_buffer_object *bo, + bool *needs_unlock) +{ + struct dma_resv *resv = bo->base.resv; + int ret; + + if (walk->ctx->interruptible) + ret = dma_resv_lock_interruptible(resv, walk->ticket); + else + ret = dma_resv_lock(resv, walk->ticket); + + if (ret == -EDEADLK) + ret = -ENOSPC; + + if (!ret) { + *needs_unlock = true; + /* Only a single ticketlock per loop */ + walk->ticket = NULL; + } + + return ret; +} + +static void ttm_lru_walk_unlock(struct ttm_buffer_object *bo, bool locked) +{ + if (locked) + dma_resv_unlock(bo->base.resv); +} + +/** + * ttm_lru_walk_for_evict() - Perform a LRU list walk, with actions taken on + * valid items. + * @walk: describe the walks and actions taken + * @bdev: The TTM device. + * @man: The struct ttm_resource manager whose LRU lists we're walking. + * @target: The end condition for the walk. 
+ * + * The LRU lists of @man are walk, and for each struct ttm_resource encountered, + * the corresponding ttm_buffer_object is locked and taken a reference on, and + * the LRU lock is dropped. the LRU lock may be dropped before locking and, in + * that case, it's verified that the item actually remains on the LRU list after + * the lock, and that the buffer object didn't switch resource in between. + * + * With a locked object, the actions indicated by @walk->process_bo are + * performed, and after that, the bo is unlocked, the refcount dropped and the + * next struct ttm_resource is processed. Here, the walker relies on + * TTM's restartable LRU list implementation. + * + * Typically @walk->process_bo() would return the number of pages evicted, + * swapped or shrunken, so that when the total exceeds @target, or when the + * LRU list has been walked in full, iteration is terminated. It's also terminated + * on error. Note that the definition of @target is done by the caller, it + * could have a different meaning than the number of pages. + * + * Note that the way dma_resv individualization is done, locking needs to be done + * either with the LRU lock held (trylocking only) or with a reference on the + * object. + * + * Return: The progress made towards target or negative error code on error. + */ +long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, + struct ttm_resource_manager *man, long target) +{ + struct ttm_resource_cursor cursor; + struct ttm_resource *res; + long sofar = 0; + long lret; + + spin_lock(&bdev->lru_lock); + ttm_resource_manager_for_each_res(man, &cursor, res) { + struct ttm_buffer_object *bo = res->bo; + bool bo_needs_unlock = false; + bool bo_locked = false; + int mem_type; + + if (!bo || bo->resource != res) + continue; + + if (ttm_lru_walk_trylock(walk, bo, &bo_needs_unlock)) + bo_locked = true; + else if ((!walk->ticket) || walk->ctx->no_wait_gpu || + walk->trylock_only) + continue; + + if (!ttm_bo_get_unless_zero(bo)) { + ttm_lru_walk_unlock(bo, bo_needs_unlock); + continue; + } + + mem_type = res->mem_type; + spin_unlock(&bdev->lru_lock); + + lret = 0; + if (!bo_locked && walk->ticket) + lret = ttm_lru_walk_ticketlock(walk, bo, &bo_needs_unlock); + + /* + * Note that in between the release of the lru lock and the + * ticketlock, the bo may have switched resource, + * and also memory type, since the resource may have been + * freed and allocated again with a different memory type. + * In that case, just skip it. + */ + if (!lret && bo->resource == res && res->mem_type == mem_type) + lret = walk->ops->process_bo(walk, bo); + + ttm_lru_walk_unlock(bo, bo_needs_unlock); + ttm_bo_put(bo); + if (lret == -EBUSY) + lret = 0; + sofar = (lret < 0) ? lret : sofar + lret; + if (sofar < 0 || sofar >= target) + goto out; + + cond_resched(); + spin_lock(&bdev->lru_lock); + } + spin_unlock(&bdev->lru_lock); +out: + ttm_resource_cursor_fini(&cursor); + return sofar; +} diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index 6ccf96c91f3a..8b032298d66e 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -190,6 +190,38 @@ struct ttm_operation_ctx { uint64_t bytes_moved; }; +struct ttm_lru_walk; + +/** struct ttm_lru_walk_ops - Operations for a LRU walk. */ +struct ttm_lru_walk_ops { + /** + * process_bo - Process this bo. + * @walk: struct ttm_lru_walk describing the walk. + * @bo: A locked and referenced buffer object. + * + * Return: Negative error code on error, Number of processed pages on + * success. 
0 also indicates success. + */ + long (*process_bo)(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo); +}; + +/** + * struct ttm_lru_walk - Structure describing a LRU walk. + */ +struct ttm_lru_walk { + /** @ops: Pointer to the ops structure. */ + const struct ttm_lru_walk_ops *ops; + /** @ctx: Pointer to the struct ttm_operation_ctx. */ + struct ttm_operation_ctx *ctx; + /** @ticket: The struct ww_acquire_ctx if any. */ + struct ww_acquire_ctx *ticket; + /** @tryock_only: Only use trylock for locking. */ + bool trylock_only; +}; + +long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, + struct ttm_resource_manager *man, long target); + /** * ttm_bo_get - reference a struct ttm_buffer_object * From patchwork Tue May 21 07:16:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668942 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E63A1C25B78 for ; Tue, 21 May 2024 07:17:33 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 487C710E34B; Tue, 21 May 2024 07:17:32 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="hqhCy+ER"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id EF6E810E31F; Tue, 21 May 2024 07:17:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275844; x=1747811844; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6VSOZXNkBNqbEa7t5l8unBWmFDdTavYig+ZUUf1FznY=; b=hqhCy+ER7ZCFEEhcyBFANFqlDSriiuOKbFszM+Im0+aJJ6vJNNLOfulk E0/xCp1IV34eGWatWtHj/hu39jVS8PeHBx9y6JQAOB/ZtKkv7Mrtu+71F wM94k4F3LBDx2W4aocpi47wK7ej6C5aW6XslWcz4U45YuH9GkVe8iVNcn SCP2hMl0bOtiRuDlMoFzuiZJnr2h6IU2MAvcWfgbh9JLZ6fLAONYNKuFz NDWb3eXzcqw8T5HrLCjNjISQV8OgTRypzKKlHe0OAWKW2o0bV8sH0/q1B eILxHvTPSsQIS+2lyK9sM4oEcOk0hOqd2X4McG5sFDsrF6VtXzf17URGm Q==; X-CSE-ConnectionGUID: 7cBwEOveQhCEZq3Pcn0ufQ== X-CSE-MsgGUID: njKiNzo6QVyeibqydRnBKg== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393455" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393455" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:10 -0700 X-CSE-ConnectionGUID: Tz90/t/7RRC512SoX8Jb3g== X-CSE-MsgGUID: cZDWESzkQXSFLPjHL2mH8Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336695" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:08 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 06/21] drm/ttm: Use the LRU walker helper for swapping Date: Tue, 21 May 2024 09:16:24 +0200 Message-ID: <20240521071639.77614-7-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Rework the TTM swapping to use the LRU walker helper. This helps fixing up the ttm_bo_swapout() interface to be consistent about not requiring any locking. For now mimic the current behaviour of using trylock only. We could be using ticket-locks here but defer that until it's deemed necessary. The TTM swapout functionality is a bit weird anyway since it alternates between memory types without exhausting TTM_PL_SYSTEM first. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_bo.c | 112 +++++++++++++++++++++---------- drivers/gpu/drm/ttm/ttm_device.c | 30 ++------- include/drm/ttm/ttm_bo.h | 5 +- 3 files changed, 83 insertions(+), 64 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 43eda720657f..63a91b77f7da 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1118,11 +1118,23 @@ int ttm_bo_wait_ctx(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx) } EXPORT_SYMBOL(ttm_bo_wait_ctx); -int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, - gfp_t gfp_flags) +/** + * struct ttm_bo_swapout_walk - Parameters for the swapout walk + */ +struct ttm_bo_swapout_walk { + /** @walk: The walk base parameters. */ + struct ttm_lru_walk walk; + /** @gfp_flags: The gfp flags to use for ttm_tt_swapout() */ + gfp_t gfp_flags; +}; + +static long +ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo) { - struct ttm_place place; - bool locked; + struct ttm_place place = {.mem_type = bo->resource->mem_type}; + struct ttm_bo_swapout_walk *swapout_walk = + container_of(walk, typeof(*swapout_walk), walk); + struct ttm_operation_ctx *ctx = walk->ctx; long ret; /* @@ -1131,28 +1143,29 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, * The driver may use the fact that we're moving from SYSTEM * as an indication that we're about to swap out. 
*/ - memset(&place, 0, sizeof(place)); - place.mem_type = bo->resource->mem_type; - if (!ttm_bo_evict_swapout_allowable(bo, ctx, &place, &locked, NULL)) - return -EBUSY; + if (!bo->bdev->funcs->eviction_valuable(bo, &place)) { + ret = -EBUSY; + goto out; + } if (!bo->ttm || !ttm_tt_is_populated(bo->ttm) || bo->ttm->page_flags & TTM_TT_FLAG_EXTERNAL || - bo->ttm->page_flags & TTM_TT_FLAG_SWAPPED || - !ttm_bo_get_unless_zero(bo)) { - if (locked) - dma_resv_unlock(bo->base.resv); - return -EBUSY; + bo->ttm->page_flags & TTM_TT_FLAG_SWAPPED) { + ret = -EBUSY; + goto out; } if (bo->deleted) { - ret = ttm_bo_cleanup_refs(bo, false, false, locked); - ttm_bo_put(bo); - return ret == -EBUSY ? -ENOSPC : ret; - } + pgoff_t num_pages = bo->ttm->num_pages; - /* TODO: Cleanup the locking */ - spin_unlock(&bo->bdev->lru_lock); + ret = ttm_bo_wait_ctx(bo, ctx); + if (ret) + goto out; + + ttm_bo_cleanup_memtype_use(bo); + ret = num_pages; + goto out; + } /* * Move to system cached @@ -1164,12 +1177,13 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, memset(&hop, 0, sizeof(hop)); place.mem_type = TTM_PL_SYSTEM; ret = ttm_resource_alloc(bo, &place, &evict_mem); - if (unlikely(ret)) + if (ret) goto out; ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop); - if (unlikely(ret != 0)) { - WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n"); + if (ret) { + WARN(ret == -EMULTIHOP, + "Unexpected multihop in swapout - likely driver bug.\n"); ttm_resource_free(bo, &evict_mem); goto out; } @@ -1179,30 +1193,54 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, * Make sure BO is idle. */ ret = ttm_bo_wait_ctx(bo, ctx); - if (unlikely(ret != 0)) + if (ret) goto out; ttm_bo_unmap_virtual(bo); - - /* - * Swap out. Buffer will be swapped in again as soon as - * anyone tries to access a ttm page. - */ if (bo->bdev->funcs->swap_notify) bo->bdev->funcs->swap_notify(bo); if (ttm_tt_is_populated(bo->ttm)) - ret = ttm_tt_swapout(bo->bdev, bo->ttm, gfp_flags); + ret = ttm_tt_swapout(bo->bdev, bo->ttm, swapout_walk->gfp_flags); out: + /* Consider some error codes fatal. Others may continue the walk. */ + if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS || + ret == -EAGAIN || ret > 0) + return ret; - /* - * Unreserve without putting on LRU to avoid swapping out an - * already swapped buffer. - */ - if (locked) - dma_resv_unlock(bo->base.resv); - ttm_bo_put(bo); - return ret == -EBUSY ? -ENOSPC : ret; + return 0; +} + +const struct ttm_lru_walk_ops ttm_swap_ops = { + .process_bo = ttm_bo_swapout_cb, +}; + +/** + * ttm_bo_swapout() - Swap out buffer objects on the LRU list to shmem. + * @bdev: The ttm device. + * @ctx: The ttm_operation_ctx governing the swapout operation. + * @man: The resource manager whose resources / buffer objects are + * goint to be swapped out. + * @gfp_flags: The gfp flags used for shmem page allocations. + * @target: The desired number of pages to swap out. + * + * Return: The number of pages actually swapped out, or negative error code + * on error. 
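 *
 * A minimal calling sketch (declarations omitted); it mirrors how
 * ttm_device_swapout() drives this function later in this patch, where a
 * @target of 1 makes the walk stop after the first successful swapout:
 *
 *	for (i = TTM_PL_SYSTEM; i < TTM_NUM_MEM_TYPES; ++i) {
 *		man = ttm_manager_type(bdev, i);
 *		if (!man || !man->use_tt)
 *			continue;
 *
 *		lret = ttm_bo_swapout(bdev, ctx, man, gfp_flags, 1);
 *		if (lret)
 *			return lret;	/* pages swapped out (> 0) or error (< 0) */
 *	}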
+ */ +long ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, + struct ttm_resource_manager *man, gfp_t gfp_flags, + pgoff_t target) +{ + struct ttm_bo_swapout_walk swapout_walk = { + .walk = { + .ops = &ttm_swap_ops, + .ctx = ctx, + .trylock_only = true, + }, + .gfp_flags = gfp_flags, + }; + + return ttm_lru_walk_for_evict(&swapout_walk.walk, bdev, man, target); } void ttm_bo_tt_destroy(struct ttm_buffer_object *bo) diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c index f9e9b1ec8c8a..ee575d8a54c0 100644 --- a/drivers/gpu/drm/ttm/ttm_device.c +++ b/drivers/gpu/drm/ttm/ttm_device.c @@ -148,40 +148,20 @@ int ttm_global_swapout(struct ttm_operation_ctx *ctx, gfp_t gfp_flags) int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, gfp_t gfp_flags) { - struct ttm_resource_cursor cursor; struct ttm_resource_manager *man; - struct ttm_resource *res; unsigned i; - int ret; + long lret; - spin_lock(&bdev->lru_lock); for (i = TTM_PL_SYSTEM; i < TTM_NUM_MEM_TYPES; ++i) { man = ttm_manager_type(bdev, i); if (!man || !man->use_tt) continue; - ttm_resource_manager_for_each_res(man, &cursor, res) { - struct ttm_buffer_object *bo = res->bo; - uint32_t num_pages; - - if (!bo || bo->resource != res) - continue; - - num_pages = PFN_UP(bo->base.size); - ret = ttm_bo_swapout(bo, ctx, gfp_flags); - /* ttm_bo_swapout has dropped the lru_lock */ - if (!ret) { - ttm_resource_cursor_fini(&cursor); - return num_pages; - } - if (ret != -EBUSY) { - ttm_resource_cursor_fini(&cursor); - return ret; - } - } + lret = ttm_bo_swapout(bdev, ctx, man, gfp_flags, 1); + /* Can be both positive (num_pages) and negative (error) */ + if (lret) + return lret; } - ttm_resource_cursor_fini_locked(&cursor); - spin_unlock(&bdev->lru_lock); return 0; } EXPORT_SYMBOL(ttm_device_swapout); diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index 8b032298d66e..472a55b69afb 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -410,8 +410,9 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map); void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct iosys_map *map); int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo); -int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, - gfp_t gfp_flags); +long ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, + struct ttm_resource_manager *man, gfp_t gfp_flags, + pgoff_t target); void ttm_bo_pin(struct ttm_buffer_object *bo); void ttm_bo_unpin(struct ttm_buffer_object *bo); int ttm_mem_evict_first(struct ttm_device *bdev, From patchwork Tue May 21 07:16:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668945 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 16156C25B75 for ; Tue, 21 May 2024 07:17:43 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 303B210E32B; Tue, 21 May 2024 07:17:41 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; 
unprotected) header.d=intel.com header.i=@intel.com header.b="Xgrxmtxj"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id F036910E32B; Tue, 21 May 2024 07:17:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275844; x=1747811844; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vGSBfs33EAo6r3NtiWL+vXUkkmhO4gG6wMOVlbZF7YQ=; b=XgrxmtxjAo59XPyBlhv4XtJU3xJZa/qEAGD42Vh2eH3KexbiRzl+iKmU I0sR8uCl9Ogicb7EoeraknGyvTMwrkHfCvbUE7c//MiiMCWNAOhExLfsv ryDr1LHOrIuH3xMVOdlb6B6MneXt+jC/QcKSC9SG75AV6jvlw2FJ7LmWk kP86yFRFPZDR4gXLOtGQlKsKlzjrownGAjarCG9viOxXow3/XrQWLb8xi YRiu0aG5HwNVLaPgF60t9YxXif2vkEZRHus7POF+ZlVT4/ZxKq+7s6vc7 QvQqTfpd8FAKXiOwqqSRNMdFJQpTFIVbZtSdfqoaZgQCj4vHm8bnuDaKo A==; X-CSE-ConnectionGUID: 8qWbsmAITZqAkB7kXWQHKQ== X-CSE-MsgGUID: oO25zBY8SnanhLsqq2zaJg== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393457" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393457" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:11 -0700 X-CSE-ConnectionGUID: W8yhVacwRLGRC52aLYsBcw== X-CSE-MsgGUID: UJ+iWa+zR5OOlYsrp1kakg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336715" Received: from unknown (HELO fedora..) ([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:10 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 07/21] drm/ttm: Use the LRU walker for eviction Date: Tue, 21 May 2024 09:16:25 +0200 Message-ID: <20240521071639.77614-8-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Use the LRU walker for eviction. This helps removing a lot of code with weird locking semantics. The functionality is slightly changed so that when trylocked buffer objects are exhausted, we continue to interleave walks with ticket-locks while there is still progress made. The list walks are not restarted in-between evictions. Also provide a separate ttm_bo_evict_first() function for its single user. The context of that user allows sleeping dma_resv locks. 
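For reference, a condensed sketch of the walker pattern this patch builds on, using the ttm_lru_walk API added earlier in the series. The demo_* names are illustrative only and are not part of the patch; the real callback used for eviction is ttm_bo_evict_cb() below:

struct demo_evict_walk {
	struct ttm_lru_walk walk;
	/* walk-private state, e.g. the place we are trying to make room for */
};

static long demo_process_bo(struct ttm_lru_walk *walk,
			    struct ttm_buffer_object *bo)
{
	/* @bo arrives locked and referenced; return pages processed or < 0. */
	return 0;
}

static const struct ttm_lru_walk_ops demo_walk_ops = {
	.process_bo = demo_process_bo,
};

static long demo_walk_run(struct ttm_device *bdev,
			  struct ttm_resource_manager *man,
			  struct ttm_operation_ctx *ctx,
			  struct ww_acquire_ctx *ticket)
{
	struct demo_evict_walk dew = {
		.walk = {
			.ops = &demo_walk_ops,
			.ctx = ctx,
			.ticket = ticket,	/* without a ticket the walk only ever trylocks */
		},
	};

	/* Walk the LRU until one unit of progress is made or an error occurs. */
	return ttm_lru_walk_for_evict(&dew.walk, bdev, man, 1);
}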
Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_bo.c | 350 ++++++++++++----------------- drivers/gpu/drm/ttm/ttm_resource.c | 20 +- include/drm/ttm/ttm_bo.h | 8 +- 3 files changed, 145 insertions(+), 233 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 63a91b77f7da..316afe19a325 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -224,80 +224,6 @@ static void ttm_bo_flush_all_fences(struct ttm_buffer_object *bo) dma_resv_iter_end(&cursor); } -/** - * ttm_bo_cleanup_refs - * If bo idle, remove from lru lists, and unref. - * If not idle, block if possible. - * - * Must be called with lru_lock and reservation held, this function - * will drop the lru lock and optionally the reservation lock before returning. - * - * @bo: The buffer object to clean-up - * @interruptible: Any sleeps should occur interruptibly. - * @no_wait_gpu: Never wait for gpu. Return -EBUSY instead. - * @unlock_resv: Unlock the reservation lock as well. - */ - -static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo, - bool interruptible, bool no_wait_gpu, - bool unlock_resv) -{ - struct dma_resv *resv = &bo->base._resv; - int ret; - - if (dma_resv_test_signaled(resv, DMA_RESV_USAGE_BOOKKEEP)) - ret = 0; - else - ret = -EBUSY; - - if (ret && !no_wait_gpu) { - long lret; - - if (unlock_resv) - dma_resv_unlock(bo->base.resv); - spin_unlock(&bo->bdev->lru_lock); - - lret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_BOOKKEEP, - interruptible, - 30 * HZ); - - if (lret < 0) - return lret; - else if (lret == 0) - return -EBUSY; - - spin_lock(&bo->bdev->lru_lock); - if (unlock_resv && !dma_resv_trylock(bo->base.resv)) { - /* - * We raced, and lost, someone else holds the reservation now, - * and is probably busy in ttm_bo_cleanup_memtype_use. - * - * Even if it's not the case, because we finished waiting any - * delayed destruction would succeed, so just return success - * here. - */ - spin_unlock(&bo->bdev->lru_lock); - return 0; - } - ret = 0; - } - - if (ret) { - if (unlock_resv) - dma_resv_unlock(bo->base.resv); - spin_unlock(&bo->bdev->lru_lock); - return ret; - } - - spin_unlock(&bo->bdev->lru_lock); - ttm_bo_cleanup_memtype_use(bo); - - if (unlock_resv) - dma_resv_unlock(bo->base.resv); - - return 0; -} - /* * Block for the dma_resv object to become idle, lock the buffer and clean up * the resource and tt object. @@ -505,151 +431,154 @@ bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, } EXPORT_SYMBOL(ttm_bo_eviction_valuable); -/* - * Check the target bo is allowable to be evicted or swapout, including cases: - * - * a. if share same reservation object with ctx->resv, have assumption - * reservation objects should already be locked, so not lock again and - * return true directly when either the opreation allow_reserved_eviction - * or the target bo already is in delayed free list; +/** + * ttm_bo_evict_first() - Evict the first bo on the manager's LRU list. + * @bdev: The ttm device. + * @man: The manager whose bo to evict. + * @ctx: The TTM operation ctx governing the eviction. * - * b. Otherwise, trylock it. + * Return: 0 if successful or the resource disappeared. Negative error code on error. 
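 *
 * A typical caller simply loops until the LRU runs dry, as
 * ttm_resource_manager_evict_all() does later in this patch; -ENOENT then
 * means the manager's LRU list was empty rather than a hard failure:
 *
 *	do {
 *		ret = ttm_bo_evict_first(bdev, man, &ctx);
 *	} while (!ret);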
*/ -static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo, - struct ttm_operation_ctx *ctx, - const struct ttm_place *place, - bool *locked, bool *busy) +int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man, + struct ttm_operation_ctx *ctx) { - bool ret = false; + struct ttm_resource_cursor cursor; + struct ttm_buffer_object *bo; + struct ttm_resource *res; + unsigned int mem_type; + int ret = 0; - if (bo->pin_count) { - *locked = false; - if (busy) - *busy = false; - return false; + spin_lock(&bdev->lru_lock); + res = ttm_resource_manager_first(man, &cursor); + if (!res) { + ret = -ENOENT; + goto out_no_ref; } + bo = res->bo; + if (!ttm_bo_get_unless_zero(bo)) + goto out_no_ref; + mem_type = res->mem_type; + spin_unlock(&bdev->lru_lock); + ret = ttm_bo_reserve(bo, ctx->interruptible, ctx->no_wait_gpu, NULL); + if (ret) + goto out_no_lock; + if (bo->resource != res || res->mem_type != mem_type) + goto out_bad_res; - if (bo->base.resv == ctx->resv) { - dma_resv_assert_held(bo->base.resv); - if (ctx->allow_res_evict) - ret = true; - *locked = false; - if (busy) - *busy = false; + if (bo->deleted) { + ret = ttm_bo_wait_ctx(bo, ctx); + if (ret) + ttm_bo_cleanup_memtype_use(bo); } else { - ret = dma_resv_trylock(bo->base.resv); - *locked = ret; - if (busy) - *busy = !ret; - } - - if (ret && place && (bo->resource->mem_type != place->mem_type || - !bo->bdev->funcs->eviction_valuable(bo, place))) { - ret = false; - if (*locked) { - dma_resv_unlock(bo->base.resv); - *locked = false; - } + ret = ttm_bo_evict(bo, ctx); } - +out_bad_res: + dma_resv_unlock(bo->base.resv); +out_no_lock: + ttm_bo_put(bo); + ttm_resource_cursor_fini(&cursor); return ret; + +out_no_ref: + ttm_resource_cursor_fini_locked(&cursor); + spin_unlock(&bdev->lru_lock); + return -ENOENT; } /** - * ttm_mem_evict_wait_busy - wait for a busy BO to become available - * - * @busy_bo: BO which couldn't be locked with trylock - * @ctx: operation context - * @ticket: acquire ticket - * - * Try to lock a busy buffer object to avoid failing eviction. + * struct ttm_bo_evict_walk - Parameters for the evict walk. */ -static int ttm_mem_evict_wait_busy(struct ttm_buffer_object *busy_bo, - struct ttm_operation_ctx *ctx, - struct ww_acquire_ctx *ticket) -{ - int r; - - if (!busy_bo || !ticket) - return -EBUSY; - - if (ctx->interruptible) - r = dma_resv_lock_interruptible(busy_bo->base.resv, - ticket); - else - r = dma_resv_lock(busy_bo->base.resv, ticket); - - /* - * TODO: It would be better to keep the BO locked until allocation is at - * least tried one more time, but that would mean a much larger rework - * of TTM. - */ - if (!r) - dma_resv_unlock(busy_bo->base.resv); - - return r == -EDEADLK ? -EBUSY : r; -} +struct ttm_bo_evict_walk { + /** @walk: The walk base parameters. */ + struct ttm_lru_walk walk; + /** @place: The place passed to the resource allocation. */ + const struct ttm_place *place; + /** @evictor: The buffer object we're trying to make room for. */ + struct ttm_buffer_object *evictor; + /** @res: The allocated resource if any. */ + struct ttm_resource **res; + /** @evicted: The number of evicted pages. 
*/ + unsigned long evicted; +}; -int ttm_mem_evict_first(struct ttm_device *bdev, - struct ttm_resource_manager *man, - const struct ttm_place *place, - struct ttm_operation_ctx *ctx, - struct ww_acquire_ctx *ticket) +static long ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo) { - struct ttm_buffer_object *bo = NULL, *busy_bo = NULL; - struct ttm_resource_cursor cursor; - struct ttm_resource *res; - bool locked = false; - int ret; + struct ttm_bo_evict_walk *evict_walk = + container_of(walk, typeof(*evict_walk), walk); + long lret; - spin_lock(&bdev->lru_lock); - ttm_resource_manager_for_each_res(man, &cursor, res) { - bool busy; - - if (!ttm_bo_evict_swapout_allowable(res->bo, ctx, place, - &locked, &busy)) { - if (busy && !busy_bo && ticket != - dma_resv_locking_ctx(res->bo->base.resv)) - busy_bo = res->bo; - continue; - } + if (!bo->bdev->funcs->eviction_valuable(bo, evict_walk->place)) + return 0; - if (ttm_bo_get_unless_zero(res->bo)) { - bo = res->bo; - break; - } - if (locked) - dma_resv_unlock(res->bo->base.resv); + if (bo->deleted) { + lret = ttm_bo_wait_ctx(bo, walk->ctx); + if (!lret) + ttm_bo_cleanup_memtype_use(bo); + } else { + lret = ttm_bo_evict(bo, walk->ctx); } - ttm_resource_cursor_fini_locked(&cursor); - if (!bo) { - if (busy_bo && !ttm_bo_get_unless_zero(busy_bo)) - busy_bo = NULL; - spin_unlock(&bdev->lru_lock); - ret = ttm_mem_evict_wait_busy(busy_bo, ctx, ticket); - if (busy_bo) - ttm_bo_put(busy_bo); - return ret; - } + if (lret) + goto out; - if (bo->deleted) { - ret = ttm_bo_cleanup_refs(bo, ctx->interruptible, - ctx->no_wait_gpu, locked); - ttm_bo_put(bo); - return ret; - } + evict_walk->evicted++; + if (evict_walk->res) + lret = ttm_resource_alloc(evict_walk->evictor, evict_walk->place, + evict_walk->res); + if (lret == 0) + return 1; +out: + /* Errors that should terminate the walk. */ + if (lret == -ENOMEM || lret == -EINTR || lret == -ERESTARTSYS || + lret == -EAGAIN) + return lret; - spin_unlock(&bdev->lru_lock); + return 0; +} - ret = ttm_bo_evict(bo, ctx); - if (locked) - ttm_bo_unreserve(bo); - else - ttm_bo_move_to_lru_tail_unlocked(bo); +static const struct ttm_lru_walk_ops ttm_evict_walk_ops = { + .process_bo = ttm_bo_evict_cb, +}; - ttm_bo_put(bo); - return ret; +static int ttm_bo_evict_alloc(struct ttm_device *bdev, + struct ttm_resource_manager *man, + const struct ttm_place *place, + struct ttm_buffer_object *evictor, + struct ttm_operation_ctx *ctx, + struct ww_acquire_ctx *ticket, + struct ttm_resource **res) +{ + struct ttm_bo_evict_walk evict_walk = { + .walk = { + .ops = &ttm_evict_walk_ops, + .ctx = ctx, + .ticket = ticket, + }, + .place = place, + .evictor = evictor, + .res = res, + }; + long lret; + + evict_walk.walk.trylock_only = true; + lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1); + if (lret || !ticket) + goto out; + + /* If ticket-locking, repeat while making progress. 
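 * Each pass below re-arms the ticket and resets the progress counter;
 * the loop ends once a pass either satisfies the allocation (positive
 * return from the walk), hits a hard error, or fails to evict anything
 * further.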
*/ + evict_walk.walk.trylock_only = false; + do { + /* The walk may clear the evict_walk.walk.ticket field */ + evict_walk.walk.ticket = ticket; + evict_walk.evicted = 0; + lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1); + } while (!lret && evict_walk.evicted); +out: + if (lret < 0) + return lret; + if (lret == 0) + return -EBUSY; + return 0; } /** @@ -760,6 +689,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo, for (i = 0; i < placement->num_placement; ++i) { const struct ttm_place *place = &placement->placement[i]; struct ttm_resource_manager *man; + bool may_evict; man = ttm_manager_type(bdev, place->mem_type); if (!man || !ttm_resource_manager_used(man)) @@ -769,22 +699,21 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo, TTM_PL_FLAG_FALLBACK)) continue; - do { - ret = ttm_resource_alloc(bo, place, res); - if (unlikely(ret && ret != -ENOSPC)) + may_evict = (force_space && place->mem_type != TTM_PL_SYSTEM); + ret = ttm_resource_alloc(bo, place, res); + if (ret) { + if (ret != -ENOSPC) return ret; - if (likely(!ret) || !force_space) - break; - - ret = ttm_mem_evict_first(bdev, man, place, ctx, - ticket); - if (unlikely(ret == -EBUSY)) - break; - if (unlikely(ret)) + if (!may_evict) + continue; + + ret = ttm_bo_evict_alloc(bdev, man, place, bo, ctx, + ticket, res); + if (ret == -EBUSY) + continue; + if (ret) return ret; - } while (1); - if (ret) - continue; + } ret = ttm_bo_add_move_fence(bo, man, ctx->no_wait_gpu); if (unlikely(ret)) { @@ -796,7 +725,6 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo, } return 0; } - return -ENOSPC; } diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c index a03090683e79..6d0c66fc36e3 100644 --- a/drivers/gpu/drm/ttm/ttm_resource.c +++ b/drivers/gpu/drm/ttm/ttm_resource.c @@ -508,24 +508,10 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev, }; struct dma_fence *fence; int ret; - unsigned i; - - /* - * Can't use standard list traversal since we're unlocking. 
- */ - spin_lock(&bdev->lru_lock); - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) { - while (!list_empty(&man->lru[i])) { - spin_unlock(&bdev->lru_lock); - ret = ttm_mem_evict_first(bdev, man, NULL, &ctx, - NULL); - if (ret) - return ret; - spin_lock(&bdev->lru_lock); - } - } - spin_unlock(&bdev->lru_lock); + do { + ret = ttm_bo_evict_first(bdev, man, &ctx); + } while (!ret); spin_lock(&man->move_lock); fence = dma_fence_get(man->move); diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index 472a55b69afb..148f49f625e4 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -415,11 +415,9 @@ long ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, pgoff_t target); void ttm_bo_pin(struct ttm_buffer_object *bo); void ttm_bo_unpin(struct ttm_buffer_object *bo); -int ttm_mem_evict_first(struct ttm_device *bdev, - struct ttm_resource_manager *man, - const struct ttm_place *place, - struct ttm_operation_ctx *ctx, - struct ww_acquire_ctx *ticket); +int ttm_bo_evict_first(struct ttm_device *bdev, + struct ttm_resource_manager *man, + struct ttm_operation_ctx *ctx); vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, struct vm_fault *vmf); vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, From patchwork Tue May 21 07:16:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668952 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DECADC25B75 for ; Tue, 21 May 2024 07:18:04 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1DCAA10E3E5; Tue, 21 May 2024 07:18:03 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="b8U2ELnq"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id F1A6810E35B; Tue, 21 May 2024 07:17:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275844; x=1747811844; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4LxpPN8VbzxoLLGRee8idTVA6cFjw1E/SGpjJRdHaE4=; b=b8U2ELnqoTRAQDNsqAfEHLdtnoq7c14f8A6fvyierk8ktdVXB9SHfqVS zDtIw7uGK0gctLxHzMND6hJZR0fesauSxcj+7Nxm/JlO8anQ3SHNb+C6T 2oad8RifsywwI+KUd+a36CIKNWlV3SqfcKaVsXguRK4WJxCJmjj7pq2cn SRhsxzb06LrlKwGRd+asjp1LIgifrBb3piSGmsAU/dPl8RfRW9dcRSsZ0 DKCH8CFY8OvddoWIbaPJjw1kNRnHrp9bIRBETTAwkfK67y8crlmBcnn+2 pNE/jYEON60W//qBirMCmu2zhYrdZZHzCGk0e2XQT6Hp9moMNPQ5v04zl g==; X-CSE-ConnectionGUID: FsQgBabLTIGPFRpWXXWZ1A== X-CSE-MsgGUID: gzIyMQDuRQ2o26W+ppCVKg== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393461" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393461" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:13 -0700 X-CSE-ConnectionGUID: kLOK1eQQT3K8HkipGSj81A== X-CSE-MsgGUID: 7Z0sRXhUT02LOirY3bn+rQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336721" 
Received: from unknown (HELO fedora..) ([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:12 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 08/21] drm/ttm: Add a virtual base class for graphics memory backup Date: Tue, 21 May 2024 09:16:26 +0200 Message-ID: <20240521071639.77614-9-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Initially intended for experimenting with different backup solutions (shmem vs direct swap cache insertion), abstract the backup destination using a virtual base class. Also provide a sample implementation for shmem. While when settling on a preferred backup solution, one could perhaps skip the abstraction, this functionality may actually come in handy for configurable dedicated graphics memory backup to fast nvme files or similar, whithout affecting swap-space. Could indeed be useful for VRAM backup on S4 and other cases. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/Makefile | 2 +- drivers/gpu/drm/ttm/ttm_backup_shmem.c | 137 +++++++++++++++++++++++++ include/drm/ttm/ttm_backup.h | 136 ++++++++++++++++++++++++ 3 files changed, 274 insertions(+), 1 deletion(-) create mode 100644 drivers/gpu/drm/ttm/ttm_backup_shmem.c create mode 100644 include/drm/ttm/ttm_backup.h diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile index dad298127226..5e980dd90e41 100644 --- a/drivers/gpu/drm/ttm/Makefile +++ b/drivers/gpu/drm/ttm/Makefile @@ -4,7 +4,7 @@ ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \ - ttm_device.o ttm_sys_manager.o + ttm_device.o ttm_sys_manager.o ttm_backup_shmem.o ttm-$(CONFIG_AGP) += ttm_agp_backend.o obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/ttm/ttm_backup_shmem.c b/drivers/gpu/drm/ttm/ttm_backup_shmem.c new file mode 100644 index 000000000000..79c2f552863a --- /dev/null +++ b/drivers/gpu/drm/ttm/ttm_backup_shmem.c @@ -0,0 +1,137 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2024 Intel Corporation + */ + +#include +#include + +/** + * struct ttm_backup_shmem - A shmem based ttm_backup subclass. 
+ * @backup: The base struct ttm_backup + * @filp: The associated shmem object + */ +struct ttm_backup_shmem { + struct ttm_backup backup; + struct file *filp; +}; + +static struct ttm_backup_shmem *to_backup_shmem(struct ttm_backup *backup) +{ + return container_of(backup, struct ttm_backup_shmem, backup); +} + +static void ttm_backup_shmem_drop(struct ttm_backup *backup, unsigned long handle) +{ + handle -= 1; + shmem_truncate_range(file_inode(to_backup_shmem(backup)->filp), handle, + handle + 1); +} + +static int ttm_backup_shmem_copy_page(struct ttm_backup *backup, struct page *dst, + unsigned long handle, bool killable) +{ + struct file *filp = to_backup_shmem(backup)->filp; + struct address_space *mapping = filp->f_mapping; + struct folio *from_folio; + + handle -= 1; + from_folio = shmem_read_folio(mapping, handle); + if (IS_ERR(from_folio)) + return PTR_ERR(from_folio); + + /* Note: Use drm_memcpy_from_wc? */ + copy_highpage(dst, folio_file_page(from_folio, handle)); + folio_put(from_folio); + + return 0; +} + +static unsigned long +ttm_backup_shmem_backup_page(struct ttm_backup *backup, struct page *page, + bool writeback, pgoff_t i, gfp_t page_gfp, + gfp_t alloc_gfp) +{ + struct file *filp = to_backup_shmem(backup)->filp; + struct address_space *mapping = filp->f_mapping; + unsigned long handle = 0; + struct folio *to_folio; + int ret; + + to_folio = shmem_read_folio_gfp(mapping, i, alloc_gfp); + if (IS_ERR(to_folio)) + return handle; + + folio_mark_accessed(to_folio); + folio_lock(to_folio); + folio_mark_dirty(to_folio); + copy_highpage(folio_file_page(to_folio, i), page); + handle = i + 1; + + if (writeback && !folio_mapped(to_folio) && folio_clear_dirty_for_io(to_folio)) { + struct writeback_control wbc = { + .sync_mode = WB_SYNC_NONE, + .nr_to_write = SWAP_CLUSTER_MAX, + .range_start = 0, + .range_end = LLONG_MAX, + .for_reclaim = 1, + }; + folio_set_reclaim(to_folio); + ret = mapping->a_ops->writepage(folio_page(to_folio, 0), &wbc); + if (!folio_test_writeback(to_folio)) + folio_clear_reclaim(to_folio); + /* If writepage succeeds, it unlocks the folio */ + if (ret) + folio_unlock(to_folio); + } else { + folio_unlock(to_folio); + } + + folio_put(to_folio); + + return handle; +} + +static void ttm_backup_shmem_fini(struct ttm_backup *backup) +{ + struct ttm_backup_shmem *sbackup = to_backup_shmem(backup); + + fput(sbackup->filp); + kfree(sbackup); +} + +static const struct ttm_backup_ops ttm_backup_shmem_ops = { + .drop = ttm_backup_shmem_drop, + .copy_backed_up_page = ttm_backup_shmem_copy_page, + .backup_page = ttm_backup_shmem_backup_page, + .fini = ttm_backup_shmem_fini, +}; + +/** + * ttm_backup_shmem_create() - Create a shmem-based struct backup. + * @size: The maximum size (in bytes) to back up. + * + * Create a backup utilizing shmem objects. + * + * Return: A pointer to a struct ttm_backup on success, + * an error pointer on error. 
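 *
 * A minimal usage sketch (the size calculation is illustrative); the backup
 * object is released again through its ops->fini() callback:
 *
 *	struct ttm_backup *backup;
 *
 *	backup = ttm_backup_shmem_create((loff_t)num_pages << PAGE_SHIFT);
 *	if (IS_ERR(backup))
 *		return PTR_ERR(backup);
 *	...
 *	backup->ops->fini(backup);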
+ */ +struct ttm_backup *ttm_backup_shmem_create(loff_t size) +{ + struct ttm_backup_shmem *sbackup = + kzalloc(sizeof(*sbackup), GFP_KERNEL | __GFP_ACCOUNT); + + if (!sbackup) + return ERR_PTR(-ENOMEM); + + sbackup->filp = shmem_file_setup("ttm shmem backup", size, 0); + if (IS_ERR(sbackup->filp)) { + kfree(sbackup); + return ERR_CAST(sbackup->filp); + } + + sbackup->backup.ops = &ttm_backup_shmem_ops; + + return &sbackup->backup; +} +EXPORT_SYMBOL_GPL(ttm_backup_shmem_create); diff --git a/include/drm/ttm/ttm_backup.h b/include/drm/ttm/ttm_backup.h new file mode 100644 index 000000000000..88e8b97a6fdc --- /dev/null +++ b/include/drm/ttm/ttm_backup.h @@ -0,0 +1,136 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2024 Intel Corporation + */ + +#ifndef _TTM_BACKUP_H_ +#define _TTM_BACKUP_H_ + +#include +#include + +struct ttm_backup; + +/** + * ttm_backup_handle_to_page_ptr() - Convert handle to struct page pointer + * @handle: The handle to convert. + * + * Converts an opaque handle received from the + * struct ttm_backup_ops::backup_page() function to an (invalid) + * struct page pointer suitable for a struct page array. + * + * Return: An (invalid) struct page pointer. + */ +static inline struct page * +ttm_backup_handle_to_page_ptr(unsigned long handle) +{ + return (struct page *)(handle << 1 | 1); +} + +/** + * ttm_backup_page_ptr_is_handle() - Whether a struct page pointer is a handle + * @page: The struct page pointer to check. + * + * Return: true if the struct page pointer is a handle returned from + * ttm_backup_handle_to_page_ptr(). False otherwise. + */ +static inline bool ttm_backup_page_ptr_is_handle(const struct page *page) +{ + return (unsigned long)page & 1; +} + +/** + * ttm_backup_page_ptr_to_handle() - Convert a struct page pointer to a handle + * @page: The struct page pointer to convert + * + * Return: The handle that was previously used in + * ttm_backup_handle_to_page_ptr() to obtain a struct page pointer, suitable + * for use as argument in the struct ttm_backup_ops drop() or + * copy_backed_up_page() functions. + */ +static inline unsigned long +ttm_backup_page_ptr_to_handle(const struct page *page) +{ + WARN_ON(!ttm_backup_page_ptr_is_handle(page)); + return (unsigned long)page >> 1; +} + +/** struct ttm_backup_ops - The struct ttm_backup backend operations */ +struct ttm_backup_ops { + /** + * drop - release memory associated with a handle + * @backup: The struct backup pointer used to obtain the handle + * @handle: The handle obtained from the @backup_page function. + */ + void (*drop)(struct ttm_backup *backup, unsigned long handle); + + /** + * copy_backed_up_page - Copy the contents of a previously backed + * up page + * @backup: The struct backup pointer used to back up the page. + * @dst: The struct page to copy into. + * @handle: The handle returned when the page was backed up. + * @intr: Try to perform waits interruptible or at least killable. + * + * Return: 0 on success, negative error code on failure, notably + * -EINTR if @intr was set to true and a signal is pending. + */ + int (*copy_backed_up_page)(struct ttm_backup *backup, struct page *dst, + unsigned long handle, bool intr); + + /** + * backup_page - Backup a page + * @backup: The struct backup pointer to use. + * @page: The page to back up. + * @writeback: Whether to perform immediate writeback of the page. + * This may have performance implications. + * @i: A unique integer for each page and each struct backup.
+ * This is a hint allowing the backup backend to avoid managing + * its address space separately. + * @page_gfp: The gfp value used when the page was allocated. + * This is used for accounting purposes. + * @alloc_gfp: The gpf to be used when the backend needs to allocaete + * memory. + * + * Return: A handle on success. 0 on failure. + * (This is following the swp_entry_t convention). + * + * Note: This function could be extended to back up a folio and + * backends would then split the folio internally if needed. + * Drawback is that the caller would then have to keep track of + */ + unsigned long (*backup_page)(struct ttm_backup *backup, struct page *page, + bool writeback, pgoff_t i, gfp_t page_gfp, + gfp_t alloc_gfp); + /** + * fini - Free the struct backup resources after last use. + * @backup: Pointer to the struct backup whose resources to free. + * + * After a call to @fini, it's illegal to use the @backup pointer. + */ + void (*fini)(struct ttm_backup *backup); +}; + +/** + * struct ttm_backup - Abstract a backup backend. + * @ops: The operations as described above. + * + * The struct ttm_backup is intended to be subclassed by the + * backend implementation. + */ +struct ttm_backup { + const struct ttm_backup_ops *ops; +}; + +/** + * ttm_backup_shmem_create() - Create a shmem-based struct backup. + * @size: The maximum size (in bytes) to back up. + * + * Create a backup utilizing shmem objects. + * + * Return: A pointer to a struct ttm_backup on success, + * an error pointer on error. + */ +struct ttm_backup *ttm_backup_shmem_create(loff_t size); + +#endif From patchwork Tue May 21 07:16:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668946 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C3722C25B7C for ; Tue, 21 May 2024 07:17:44 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DD8B910E35B; Tue, 21 May 2024 07:17:42 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="H0jYpNzl"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 6D98B10E34B; Tue, 21 May 2024 07:17:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275844; x=1747811844; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=shzvdxL5VuuXKcxdKvktYBp2kKvSxNvl4kDiMi8lkf8=; b=H0jYpNzlWxK/+A2mWhQUzGF3sdaLXvtkF6jdVYqKrd8uqMVZ/6fgTK6g HyqGeJ/iBOUsk6vciatd7f94cmNZ43nX/Rpv6tCHnyd3+YIVrnB3FLxZa KmS6C1UV3Jh55bRNEElkVWnYRXgnKNs2eLesWrFflOvE50Cv6Qwg23Tag Wrr4qOwiN7ISMY9gvusXIicTXV3GVxA7QQX93h0SR5DJSO8nzSVVGa1yt zzn5PJDWP4jzQajQeN5a8SMomJGMxMExBSm6vtF8n5etkhlZG6Wnbf9Eo axkncLA6Y2dOfj+3Odb5iFndZ/CT6Nf10gIDjNd2tJl+Eb01MDZhzl3NB A==; X-CSE-ConnectionGUID: ckG3kWGFT7uaZfL9TG9pnw== X-CSE-MsgGUID: RUczMnSLQpiK5jfM7MjMgw== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393463" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393463" 
Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:15 -0700 X-CSE-ConnectionGUID: p9OSkkgWQ/iPRy2NJwpz6Q== X-CSE-MsgGUID: lpvGmNI8RfW+asLthAZUww== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336734" Received: from unknown (HELO fedora..) ([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:14 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 09/21] drm/ttm/pool: Provide a helper to shrink pages Date: Tue, 21 May 2024 09:16:27 +0200 Message-ID: <20240521071639.77614-10-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Provide a helper to shrink ttm_tt page-vectors on a per-page basis. A ttm_backup backend could then in theory get away with allocating a single temporary page for each struct ttm_tt. This is accomplished by splitting larger pages before trying to back them up. In the future we could allow ttm_backup to handle backing up large pages as well, but currently there's no benefit in doing that, since the shmem backup backend would have to split those anyway to avoid allocating too much temporary memory, and if the backend instead inserts pages into the swap-cache, those are split on reclaim by the core. Due to potential backup- and recover errors, allow partially swapped out struct ttm_tt's, although mark them as swapped out stopping them from being swapped out a second time. More details in the ttm_pool.c DOC section. v2: - A couple of cleanups and error fixes in ttm_pool_back_up_tt. - s/back_up/backup/ - Add a writeback parameter to the exported interface. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_pool.c | 397 +++++++++++++++++++++++++++++++-- drivers/gpu/drm/ttm/ttm_tt.c | 37 +++ include/drm/ttm/ttm_pool.h | 5 + include/drm/ttm/ttm_tt.h | 20 ++ 4 files changed, 446 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 6e1fd6985ffc..38e50cf81b0a 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -41,6 +41,7 @@ #include #endif +#include #include #include #include @@ -58,6 +59,32 @@ struct ttm_pool_dma { unsigned long vaddr; }; +/** + * struct ttm_pool_tt_restore - State representing restore from backup + * @alloced_pages: Total number of already allocated pages for the ttm_tt. + * @restored_pages: Number of (sub) pages restored from swap for this + * chunk of 1 << @order pages. + * @first_page: The ttm page ptr representing for @old_pages[0]. + * @caching_divide: Page pointer where subsequent pages are cached. + * @old_pages: Backup copy of page pointers that were replaced by the new + * page allocation. + * @pool: The pool used for page allocation while restoring. 
+ * @order: The order of the last page allocated while restoring. + * + * Recovery from backup might fail when we've recovered less than the + * full ttm_tt. In order not to loose any data (yet), keep information + * around that allows us to restart a failed ttm backup recovery. + */ +struct ttm_pool_tt_restore { + pgoff_t alloced_pages; + pgoff_t restored_pages; + struct page **first_page; + struct page **caching_divide; + struct ttm_pool *pool; + unsigned int order; + struct page *old_pages[]; +}; + static unsigned long page_pool_size; MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool"); @@ -354,11 +381,102 @@ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p) return p->private; } +/* + * To be able to insert single pages into backup directly, + * we need to split multi-order page allocations and make them look + * like single-page allocations. + */ +static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p) +{ + unsigned int order = ttm_pool_page_order(pool, p); + pgoff_t nr; + + if (!order) + return; + + split_page(p, order); + nr = 1UL << order; + while (nr--) + (p++)->private = 0; +} + +/** + * DOC: Partial backup and restoration of a struct ttm_tt. + * + * Swapout using ttm_backup::ops::backup_page() and swapin using + * ttm_backup::ops::copy_backed_up_page() may fail. + * The former most likely due to lack of swap-space or memory, the latter due + * to lack of memory or because of signal interruption during waits. + * + * Backupfailure is easily handled by using a ttm_tt pages vector that holds + * both swap entries and page pointers. This has to be taken into account when + * restoring such a ttm_tt from backup, and when freeing it while backed up. + * When restoring, for simplicity, new pages are actually allocated from the + * pool and the contents of any old pages are copied in and then the old pages + * are released. + * + * For restoration failures, the struct ttm_pool_tt_restore holds sufficient state + * to be able to resume an interrupted restore, and that structure is freed once + * the restoration is complete. If the struct ttm_tt is destroyed while there + * is a valid struct ttm_pool_tt_restore attached, that is also properly taken + * care of. + */ + +static bool ttm_pool_restore_valid(const struct ttm_pool_tt_restore *restore) +{ + return restore && restore->restored_pages < (1 << restore->order); +} + +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore, + struct ttm_backup *backup, + struct ttm_operation_ctx *ctx) +{ + unsigned int i, nr = 1 << restore->order; + int ret = 0; + + if (!ttm_pool_restore_valid(restore)) + return 0; + + for (i = restore->restored_pages; i < nr; ++i) { + struct page *p = restore->old_pages[i]; + + if (ttm_backup_page_ptr_is_handle(p)) { + unsigned long handle = ttm_backup_page_ptr_to_handle(p); + + if (handle == 0) + continue; + + ret = backup->ops->copy_backed_up_page + (backup, restore->first_page[i], + handle, ctx->interruptible); + if (ret) + break; + + backup->ops->drop(backup, handle); + } else if (p) { + /* + * We could probably avoid splitting the old page + * using clever logic, but ATM we don't care. 
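 * Instead, the old page is split into order-0 pages if it was a
 * multi-order allocation, the contents of this chunk are copied into
 * the freshly allocated page, and the chunk is freed; any remaining
 * chunks are picked up by later iterations through old_pages[].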
+ */ + ttm_pool_split_for_swap(restore->pool, p); + copy_highpage(restore->first_page[i], p); + __free_pages(p, 0); + } + + restore->restored_pages++; + restore->old_pages[i] = NULL; + cond_resched(); + } + + return ret; +} + /* Called when we got a page, either from a pool or newly allocated */ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order, struct page *p, dma_addr_t **dma_addr, unsigned long *num_pages, - struct page ***pages) + struct page ***pages, + struct ttm_pool_tt_restore *restore) { unsigned int i; int r; @@ -369,6 +487,16 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order, return r; } + if (restore) { + memcpy(restore->old_pages, *pages, + (1 << order) * sizeof(*restore->old_pages)); + memset(*pages, 0, (1 << order) * sizeof(**pages)); + restore->order = order; + restore->restored_pages = 0; + restore->first_page = *pages; + restore->alloced_pages += 1UL << order; + } + *num_pages -= 1 << order; for (i = 1 << order; i; --i, ++(*pages), ++p) **pages = p; @@ -394,22 +522,39 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt, pgoff_t start_page, pgoff_t end_page) { struct page **pages = &tt->pages[start_page]; + struct ttm_backup *backup = tt->backup; unsigned int order; pgoff_t i, nr; for (i = start_page; i < end_page; i += nr, pages += nr) { struct ttm_pool_type *pt = NULL; + struct page *p = *pages; + + if (ttm_backup_page_ptr_is_handle(p)) { + unsigned long handle = ttm_backup_page_ptr_to_handle(p); + + nr = 1; + if (handle != 0) + backup->ops->drop(backup, handle); + continue; + } + + if (pool) { + order = ttm_pool_page_order(pool, p); + nr = (1UL << order); + if (tt->dma_address) + ttm_pool_unmap(pool, tt->dma_address[i], nr); - order = ttm_pool_page_order(pool, *pages); - nr = (1UL << order); - if (tt->dma_address) - ttm_pool_unmap(pool, tt->dma_address[i], nr); + pt = ttm_pool_select_type(pool, caching, order); + } else { + order = p->private; + nr = (1UL << order); + } - pt = ttm_pool_select_type(pool, caching, order); if (pt) - ttm_pool_type_give(pt, *pages); + ttm_pool_type_give(pt, p); else - ttm_pool_free_page(pool, caching, order, *pages); + ttm_pool_free_page(pool, caching, order, p); } } @@ -453,9 +598,37 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, else gfp_flags |= GFP_HIGHUSER; - for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages)); - num_pages; - order = min_t(unsigned int, order, __fls(num_pages))) { + order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages)); + + if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) { + if (!tt->restore) { + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; + + if (ctx->gfp_retry_mayfail) + gfp |= __GFP_RETRY_MAYFAIL; + + tt->restore = + kvzalloc(struct_size(tt->restore, old_pages, + (size_t)1 << order), gfp); + /* RFC: Possibly loop on -ENOMEM and reduce order. 
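 * (Such a fallback is not implemented here; it would amount to retrying
 * the kvzalloc() with a smaller struct_size() while stepping the order
 * down until the allocation succeeds or the order reaches zero.)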
*/ + if (!tt->restore) + return -ENOMEM; + } else if (ttm_pool_restore_valid(tt->restore)) { + struct ttm_pool_tt_restore *restore = tt->restore; + + num_pages -= restore->alloced_pages; + order = min_t(unsigned int, order, __fls(num_pages)); + pages += restore->alloced_pages; + r = ttm_pool_restore_tt(restore, tt->backup, ctx); + if (r) + return r; + caching = restore->caching_divide; + } + + tt->restore->pool = pool; + } + + for (; num_pages; order = min_t(unsigned int, order, __fls(num_pages))) { struct ttm_pool_type *pt; page_caching = tt->caching; @@ -472,11 +645,19 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, r = ttm_pool_page_allocated(pool, order, p, &dma_addr, &num_pages, - &pages); + &pages, + tt->restore); if (r) goto error_free_page; caching = pages; + if (ttm_pool_restore_valid(tt->restore)) { + r = ttm_pool_restore_tt(tt->restore, tt->backup, + ctx); + if (r) + goto error_free_all; + } + if (num_pages < (1 << order)) break; @@ -496,9 +677,17 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, caching = pages; } r = ttm_pool_page_allocated(pool, order, p, &dma_addr, - &num_pages, &pages); + &num_pages, &pages, + tt->restore); if (r) goto error_free_page; + + if (ttm_pool_restore_valid(tt->restore)) { + r = ttm_pool_restore_tt(tt->restore, tt->backup, ctx); + if (r) + goto error_free_all; + } + if (PageHighMem(p)) caching = pages; } @@ -517,12 +706,26 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, if (r) goto error_free_all; + if (tt->restore) { + kvfree(tt->restore); + tt->restore = NULL; + } + + if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) + tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP | + TTM_TT_FLAG_SWAPPED); + return 0; error_free_page: ttm_pool_free_page(pool, page_caching, order, p); error_free_all: + if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) { + tt->restore->caching_divide = caching; + return r; + } + num_pages = tt->num_pages - num_pages; caching_divide = caching - tt->pages; ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide); @@ -549,6 +752,174 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt) } EXPORT_SYMBOL(ttm_pool_free); +/** + * ttm_pool_release_backed_up() - Release content of a swapped-out struct ttm_tt + * @tt: The struct ttm_tt. + * + * Release handles with associated content or any remaining pages of + * a backed-up struct ttm_tt. + */ +void ttm_pool_release_backed_up(struct ttm_tt *tt) +{ + struct ttm_backup *backup = tt->backup; + struct ttm_pool_tt_restore *restore; + pgoff_t i, start_page = 0; + unsigned long handle; + + if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)) + return; + + restore = tt->restore; + + if (ttm_pool_restore_valid(restore)) { + pgoff_t nr = 1UL << restore->order; + + for (i = restore->restored_pages; i < nr; ++i) { + struct page *p = restore->old_pages[i]; + + if (ttm_backup_page_ptr_is_handle(p)) { + handle = ttm_backup_page_ptr_to_handle(p); + if (handle == 0) + continue; + + backup->ops->drop(backup, handle); + } else if (p) { + ttm_pool_split_for_swap(restore->pool, p); + __free_pages(p, 0); + } + } + } + + if (restore) { + pgoff_t mid = restore->caching_divide - tt->pages; + + start_page = restore->alloced_pages; + /* Pages that might be dma-mapped and non-cached */ + ttm_pool_free_range(restore->pool, tt, tt->caching, + 0, mid); + /* Pages that might be dma-mapped but cached */ + ttm_pool_free_range(restore->pool, tt, ttm_cached, + mid, restore->alloced_pages); + } + + /* Shrunken pages. Cached and not dma-mapped. 
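 * Passing a NULL pool below makes ttm_pool_free_range() drop backup
 * handles via the backup backend and free any remaining plain pages
 * straight to the system instead of returning them to a pool.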
*/ + ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages); + + if (restore) { + kvfree(restore); + tt->restore = NULL; + } + + tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP | TTM_TT_FLAG_SWAPPED); +} + +/** + * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt + * @pool: The pool used when allocating the struct ttm_tt. + * @ttm: The struct ttm_tt. + * @purge: Don't back up but release pages directly to system. + * @writeback: If !@purge, Try to write out directly to the + * underlying persistent media. + * + * Back up or purge a struct ttm_tt. If @purge is true, then + * all pages will be freed directly to the system rather than to the pool + * they were allocated from, making the function behave similarly to + * ttm_pool_free(). If @purge is false the pages will be backed up instead, + * exchanged for handles. + * A subsequent call to ttm_pool_alloc() will then read back the content and + * a subsequent call to ttm_pool_release_shrunken() will drop it. + * If backup of a page fails for whatever reason, @ttm will still be + * partially backed up, retaining those pages for which backup fails. + * + * Return: Number of pages actually backed up or freed, or negative + * error code on error. + */ +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm, bool purge, + bool writeback) +{ + struct ttm_backup *backup = ttm->backup; + struct page *page; + unsigned long handle; + gfp_t alloc_gfp; + gfp_t gfp; + int ret = 0; + pgoff_t shrunken = 0; + pgoff_t i, num_pages; + + if ((!get_nr_swap_pages() && !purge) || + pool->use_dma_alloc || + (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)) + return -EBUSY; + +#ifdef CONFIG_X86 + /* Anything returned to the system needs to be cached. */ + if (ttm->caching != ttm_cached) + set_pages_array_wb(ttm->pages, ttm->num_pages); +#endif + + if (ttm->dma_address || purge) { + for (i = 0; i < ttm->num_pages; i += num_pages) { + unsigned int order; + + page = ttm->pages[i]; + if (unlikely(!page)) { + num_pages = 1; + continue; + } + + order = ttm_pool_page_order(pool, page); + num_pages = 1UL << order; + if (ttm->dma_address) + ttm_pool_unmap(pool, ttm->dma_address[i], + num_pages); + if (purge) { + shrunken += num_pages; + page->private = 0; + __free_pages(page, order); + memset(ttm->pages + i, 0, + num_pages * sizeof(*ttm->pages)); + } + } + } + + if (purge) + return shrunken; + + if (pool->use_dma32) + gfp = GFP_DMA32; + else + gfp = GFP_HIGHUSER; + + alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL; + + for (i = 0; i < ttm->num_pages; ++i) { + page = ttm->pages[i]; + if (unlikely(!page)) + continue; + + ttm_pool_split_for_swap(pool, page); + + handle = backup->ops->backup_page(backup, page, writeback, i, + gfp, alloc_gfp); + if (handle) { + ttm->pages[i] = ttm_backup_handle_to_page_ptr(handle); + put_page(page); + shrunken++; + } else { + /* We allow partially shrunken tts */ + ret = -ENOMEM; + break; + } + cond_resched(); + } + + if (shrunken) + ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP | + TTM_TT_FLAG_SWAPPED); + + return shrunken ? 
shrunken : ret; +} + /** * ttm_pool_init - Initialize a pool * diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 7b00ddf0ce49..bc994b8e7e73 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -40,6 +40,7 @@ #include #include #include +#include #include #include @@ -158,6 +159,7 @@ static void ttm_tt_init_fields(struct ttm_tt *ttm, ttm->swap_storage = NULL; ttm->sg = bo->sg; ttm->caching = caching; + ttm->restore = NULL; } int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo, @@ -182,6 +184,12 @@ void ttm_tt_fini(struct ttm_tt *ttm) fput(ttm->swap_storage); ttm->swap_storage = NULL; + ttm_pool_release_backed_up(ttm); + if (ttm->backup) { + ttm->backup->ops->fini(ttm->backup); + ttm->backup = NULL; + } + if (ttm->pages) kvfree(ttm->pages); else @@ -252,6 +260,35 @@ int ttm_tt_swapin(struct ttm_tt *ttm) return ret; } +/** + * ttm_tt_backup() - Helper to back up a struct ttm_tt. + * @bdev: The TTM device. + * @tt: The struct ttm_tt. + * @purge: Don't back up but release pages directly to system, + * bypassing any pooling. + * @writeback: If !@purge, try to write out directly to the + * underlying persistent media. + * + * Helper for a TTM driver to use from the bo_shrink() method to shrink + * a struct ttm_tt, after it has done the necessary unbinding. This function + * will update the page accounting and call ttm_pool_shrink_tt to free pages + * or move them to the swap cache. + * + * Return: Number of pages freed or swapped out, or negative error code on + * error. + */ +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt, bool purge, + bool writeback) +{ + long ret = ttm_pool_backup_tt(&bdev->pool, tt, purge, writeback); + + if (ret > 0) + tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED; + + return ret; +} +EXPORT_SYMBOL(ttm_tt_backup); + /** * ttm_tt_swapout - swap out tt object * diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h index 160d954a261e..4e4db369952b 100644 --- a/include/drm/ttm/ttm_pool.h +++ b/include/drm/ttm/ttm_pool.h @@ -89,6 +89,11 @@ void ttm_pool_fini(struct ttm_pool *pool); int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m); +void ttm_pool_release_backed_up(struct ttm_tt *tt); + +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm, + bool purge, bool writeback); + int ttm_pool_mgr_init(unsigned long num_pages); void ttm_pool_mgr_fini(void); diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h index 2b9d856ff388..6b990f1e7dd0 100644 --- a/include/drm/ttm/ttm_tt.h +++ b/include/drm/ttm/ttm_tt.h @@ -32,11 +32,13 @@ #include #include +struct ttm_backup; struct ttm_device; struct ttm_tt; struct ttm_resource; struct ttm_buffer_object; struct ttm_operation_ctx; +struct ttm_pool_tt_restore; /** * struct ttm_tt - This is a structure holding the pages, caching- and aperture @@ -85,6 +87,9 @@ struct ttm_tt { * fault handling abuses the DMA api a bit and dma_map_attrs can't be * used to assure pgprot always matches. * + * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This is set if the + * struct ttm_tt has been (possibly partially) backed up. + * * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT USE. This is * set by TTM after ttm_tt_populate() has successfully returned, and is * then unset when TTM calls ttm_tt_unpopulate(). 
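As a rough illustration of how a driver is expected to consume the new helper, here is a hedged sketch of a shrink path built around ttm_tt_backup(). The demo_* names and the unbind step are placeholders; the actual driver-side use is not part of this patch:

static long demo_bo_shrink(struct ttm_buffer_object *bo, bool purge,
			   bool writeback)
{
	struct ttm_tt *tt = bo->ttm;

	if (!tt || !ttm_tt_is_populated(tt))
		return 0;

	/* The driver must unbind the tt from the GPU before backing it up. */
	demo_unbind_tt(bo);

	/* > 0: pages backed up or freed, < 0: error (e.g. -EBUSY, -ENOMEM). */
	return ttm_tt_backup(bo->bdev, tt, purge, writeback);
}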
@@ -96,6 +101,7 @@ struct ttm_tt { #define TTM_TT_FLAG_DECRYPTED BIT(4) #define TTM_TT_FLAG_PRIV_POPULATED BIT(5) +#define TTM_TT_FLAG_PRIV_BACKED_UP BIT(6) uint32_t page_flags; /** @num_pages: Number of pages in the page array. */ uint32_t num_pages; @@ -105,11 +111,21 @@ struct ttm_tt { dma_addr_t *dma_address; /** @swap_storage: Pointer to shmem struct file for swap storage. */ struct file *swap_storage; + /** + * @backup: Pointer to backup struct for backed up tts. + * RFC: Could possibly be unified with @swap_storage. + */ + struct ttm_backup *backup; /** * @caching: The current caching state of the pages, see enum * ttm_caching. */ enum ttm_caching caching; + /** + * @restore: Partial restoration from backup state. + * RFC: Incorporate in struct ttm_backup? + */ + struct ttm_pool_tt_restore *restore; }; /** @@ -230,6 +246,10 @@ void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages); struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct ttm_kmap_iter_tt *iter_tt, struct ttm_tt *tt); unsigned long ttm_tt_pages_limit(void); + +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt, bool purge, + bool writeback); + #if IS_ENABLED(CONFIG_AGP) #include From patchwork Tue May 21 07:16:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668958 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B2460C25B75 for ; Tue, 21 May 2024 07:18:16 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 45EED10E475; Tue, 21 May 2024 07:18:15 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="FZkPUfOF"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id AEC4C10E38A; Tue, 21 May 2024 07:17:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275845; x=1747811845; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qbdxuEcuw3/ePQkGChibOQLpeUyo2Cc7dwj1217/7Q8=; b=FZkPUfOFCHyrbp0BGSoJRphSH0WIRmnifQjmJMYiK/0Vb8YFXy604FCH rdGWh3s94jyayCOFWVFThOVlYSkk50V6FACRgj7C3dOhiNe1uuYprN7Mn ZZ15yFE0+aOUoyLdnJ21S75OMfUws7+T5TCpHoe1mhaYI7t6+4nZ1Z/Ei Vy0gguheYrUOh0sTZeE5cyeJikBFJN/qTbHrgs1d6uFXY/JEPkJmI4rxd VTdiNVrZiuiPpeIBVpeFIwIneubpljRS+zA5ZVdZiArvlEZKVySMU+fhY /Avuf4ZWNg7Eb4TfcG12FHOvZVeBKRetsMwfevKch2/8yVaTcUe0yI1V+ w==; X-CSE-ConnectionGUID: t5XHwEiiQyS4VQdooZYuLA== X-CSE-MsgGUID: SqVmQ+scQByAJgslS0e/8w== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393478" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393478" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:24 -0700 X-CSE-ConnectionGUID: S9JRKM5oRBi8jkhsAePjyg== X-CSE-MsgGUID: vMZq9RZiSyOa0zewtTKiVQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336746" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:16 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 10/21] drm/ttm: Use fault-injection to test error paths Date: Tue, 21 May 2024 09:16:28 +0200 Message-ID: <20240521071639.77614-11-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Use fault-injection to test partial TTM swapout and interrupted swapin. Return -EINTR for swapin to test the callers ability to handle and restart the swapin, and on swapout perform a partial swapout to test that the swapin and release_shrunken functionality. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/Kconfig | 10 ++++++++++ drivers/gpu/drm/ttm/ttm_pool.c | 17 ++++++++++++++++- 2 files changed, 26 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 026444eeb5c6..f041ef44228d 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -258,6 +258,16 @@ config DRM_GPUVM GPU-VM representation providing helpers to manage a GPUs virtual address space +config DRM_TTM_BACKUP_FAULT_INJECT + bool "Enable fault injection during TTM backup" + depends on DRM_TTM + default n + help + Inject recoverable failures during TTM backup and recovery of + backed-up objects. For DRM driver developers only. + + If in doubt, choose N. + config DRM_BUDDY tristate depends on DRM diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 38e50cf81b0a..d32a1f2e5e50 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -431,6 +431,7 @@ static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore, struct ttm_backup *backup, struct ttm_operation_ctx *ctx) { + static unsigned long __maybe_unused swappedin; unsigned int i, nr = 1 << restore->order; int ret = 0; @@ -446,6 +447,13 @@ static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore, if (handle == 0) continue; + if (IS_ENABLED(CONFIG_DRM_TTM_BACKUP_FAULT_INJECT) && + ctx->interruptible && + ++swappedin % 100 == 0) { + ret = -EINTR; + break; + } + ret = backup->ops->copy_backed_up_page (backup, restore->first_page[i], handle, ctx->interruptible); @@ -892,7 +900,14 @@ long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm, bool purge, alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL; - for (i = 0; i < ttm->num_pages; ++i) { + num_pages = ttm->num_pages; + + /* Pretend doing fault injection by shrinking only half of the pages. 
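+	 * This deliberately leaves the tail of the tt un-backed-up, so the
+	 * call returns a positive value smaller than ttm->num_pages and the
+	 * callers' partial-backup handling gets exercised.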
*/ + + if (IS_ENABLED(CONFIG_DRM_TTM_BACKUP_FAULT_INJECT)) + num_pages = DIV_ROUND_UP(num_pages, 2); + + for (i = 0; i < num_pages; ++i) { page = ttm->pages[i]; if (unlikely(!page)) continue; From patchwork Tue May 21 07:16:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668947 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BC252C25B74 for ; Tue, 21 May 2024 07:17:45 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6F5DB10E3C9; Tue, 21 May 2024 07:17:44 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="HI8s5xtb"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id AE69410E36B; Tue, 21 May 2024 07:17:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275845; x=1747811845; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=kHViRZv0e+W3+ev/yG2qWyGA5tzeP14jFCoL6wksSdU=; b=HI8s5xtbZBqQ9ZMxs8sHZkL3QYOkCXn2u2gEonC0UKONXD9ViKffTAS/ 1qJt6xjRtlc0C1hav3HZZ6th953uGlpZW8DjG/IxtpMbOB/TCkCEohKsM +aPB/0Lj7n2zwmYh4iNVCBei/QAx2v3et3vFHZJ0LsGbReRBLr9V05l7e +un7+z6A6Ovt8w613BDSlVyppxKqjWEagsw9S3oejldO6FUEw6+6nNUR+ p9dDdnFc/fIIVJtSWp2xVlEET8ct/IeylTBAktc4UWfjbSLWamrqNwFJ9 h4x5qkVOFFghY9c02tatjOqL+CXH6zMSwv5aSeX9b0XUgGhkm3NDfON4t Q==; X-CSE-ConnectionGUID: i7Vv6SVxSrq6eH/JZSczKQ== X-CSE-MsgGUID: 5pmkM76IThKdKbpHeboaFQ== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393481" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393481" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:24 -0700 X-CSE-ConnectionGUID: Aa8QtZBHRdKY3B04OsGrlg== X-CSE-MsgGUID: KWE7mxjUSm+CCf0XuOMIdQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336755" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:18 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [PATCH v3 11/21] drm/ttm, drm/xe: Add a shrinker for xe bos Date: Tue, 21 May 2024 09:16:29 +0200 Message-ID: <20240521071639.77614-12-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Rather than relying on the TTM watermark accounting add a shrinker for xe_bos in TT or system memory. Leverage the newly added TTM per-page shrinking and shmem backup support. Although xe doesn't fully support WONTNEED (purgeable) bos yet, introduce and add shrinker support for purgeable ttm_tts. v2: - Cleanups bugfixes and a KUNIT shrinker test. - Add writeback support, and activate if kswapd. v3: - Move the try_shrink() helper to core TTM. - Minor cleanups. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_bo_util.c | 67 ++++++++ drivers/gpu/drm/xe/Makefile | 1 + drivers/gpu/drm/xe/tests/xe_bo.c | 118 ++++++++++++++ drivers/gpu/drm/xe/tests/xe_bo_test.c | 1 + drivers/gpu/drm/xe/tests/xe_bo_test.h | 1 + drivers/gpu/drm/xe/xe_bo.c | 128 +++++++++++++-- drivers/gpu/drm/xe/xe_bo.h | 4 + drivers/gpu/drm/xe/xe_device.c | 8 + drivers/gpu/drm/xe/xe_device_types.h | 2 + drivers/gpu/drm/xe/xe_shrinker.c | 224 ++++++++++++++++++++++++++ drivers/gpu/drm/xe/xe_shrinker.h | 18 +++ include/drm/ttm/ttm_bo.h | 3 + 12 files changed, 559 insertions(+), 16 deletions(-) create mode 100644 drivers/gpu/drm/xe/xe_shrinker.c create mode 100644 drivers/gpu/drm/xe/xe_shrinker.h diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index be200c06cc79..f6460024077d 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -913,3 +913,70 @@ long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, ttm_resource_cursor_fini(&cursor); return sofar; } +EXPORT_SYMBOL(ttm_lru_walk_for_evict); + +/** + * ttm_bo_try_shrink - LRU walk helper to shrink a ttm buffer object. + * @walk: The struct xe_ttm_lru_walk that describes the walk. + * @bo: The buffer object. + * @purge: Whether to attempt to purge the bo content since it's no + * longer needed. + * @writeback: If !@purge, attempt to write out to persistent storage. + * + * The function uses the ttm_tt_back_up functionality to back up or + * purge a struct ttm_tt. If the bo is not in system, it's first + * moved there. + * + * Return: The number of pages shrunken or purged, or + * negative error code on failure. 
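+ * A return of 0 means the bo was skipped rather than failed (for example
+ * it was not populated or could not be moved to system memory), while
+ * restartable errors such as -EINTR or -ERESTARTSYS are propagated so
+ * that the LRU walk can back off and retry.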
+ */ +long ttm_bo_try_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo, + bool purge, bool writeback) +{ + static const struct ttm_place sys_placement_flags = { + .fpfn = 0, + .lpfn = 0, + .mem_type = TTM_PL_SYSTEM, + .flags = 0, + }; + static struct ttm_placement sys_placement = { + .num_placement = 1, + .placement = &sys_placement_flags, + }; + struct ttm_operation_ctx *ctx = walk->ctx; + struct ttm_tt *tt = bo->ttm; + long lret; + + dma_resv_assert_held(bo->base.resv); + + if (!tt || !ttm_tt_is_populated(tt)) + return 0; + + if (bo->resource->mem_type != TTM_PL_SYSTEM) { + int ret = ttm_bo_validate(bo, &sys_placement, ctx); + + if (ret) { + if (ret == -EINTR || ret == -EDEADLK || + ret == -ERESTARTSYS) + return ret; + return 0; + } + } + + lret = ttm_bo_wait_ctx(bo, ctx); + if (lret < 0) { + if (lret == -ERESTARTSYS) + return lret; + return 0; + } + + if (bo->deleted) + lret = ttm_tt_backup(bo->bdev, tt, true, writeback); + else + lret = ttm_tt_backup(bo->bdev, tt, purge, writeback); + if (lret < 0 && lret != -EINTR) + return 0; + + return lret; +} +EXPORT_SYMBOL(ttm_bo_try_shrink); diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile index c9f067b8f54d..83d57d530a35 100644 --- a/drivers/gpu/drm/xe/Makefile +++ b/drivers/gpu/drm/xe/Makefile @@ -130,6 +130,7 @@ xe-y += xe_bb.o \ xe_ring_ops.o \ xe_sa.o \ xe_sched_job.o \ + xe_shrinker.o \ xe_step.o \ xe_sync.o \ xe_tile.o \ diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c index 9f3c02826464..7576d362020f 100644 --- a/drivers/gpu/drm/xe/tests/xe_bo.c +++ b/drivers/gpu/drm/xe/tests/xe_bo.c @@ -6,6 +6,8 @@ #include #include +#include + #include "tests/xe_bo_test.h" #include "tests/xe_pci_test.h" #include "tests/xe_test.h" @@ -350,3 +352,119 @@ void xe_bo_evict_kunit(struct kunit *test) xe_call_for_each_device(evict_test_run_device); } EXPORT_SYMBOL_IF_KUNIT(xe_bo_evict_kunit); + +struct xe_bo_link { + struct list_head link; + struct xe_bo *bo; +}; + +#define XE_BO_SHRINK_SIZE ((unsigned long)SZ_64M) + +/* + * Try to create system bos corresponding to twice the amount + * of available system memory to test shrinker functionality. + * If no swap space is available to accommodate the + * memory overcommit, mark bos purgeable. + */ +static int shrink_test_run_device(struct xe_device *xe) +{ + struct kunit *test = xe_cur_kunit(); + LIST_HEAD(bos); + struct xe_bo_link *link, *next; + struct sysinfo si; + size_t total, alloced; + unsigned int interrupted = 0, successful = 0; + + si_meminfo(&si); + total = si.freeram * si.mem_unit; + + kunit_info(test, "Free ram is %lu bytes. Will allocate twice of that.\n", + total); + + total <<= 1; + for (alloced = 0; alloced < total ; alloced += XE_BO_SHRINK_SIZE) { + struct xe_bo *bo; + unsigned int mem_type; + + link = kzalloc(sizeof(*link), GFP_KERNEL); + if (!link) { + KUNIT_FAIL(test, "Unexpeced link allocation failure\n"); + break; + } + + INIT_LIST_HEAD(&link->link); + + /* We can create bos using WC caching here. But it is slower. 
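+		 * WB caching is used instead to keep the CPU side of the test
+		 * fast; the caching mode itself is not what is being tested here.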
*/ + bo = xe_bo_create_user(xe, NULL, NULL, XE_BO_SHRINK_SIZE, + DRM_XE_GEM_CPU_CACHING_WB, + ttm_bo_type_device, + XE_BO_FLAG_SYSTEM); + if (IS_ERR(bo)) { + if (bo != ERR_PTR(-ENOMEM) && bo != ERR_PTR(-ENOSPC) && + bo != ERR_PTR(-EINTR) && bo != ERR_PTR(-ERESTARTSYS)) + KUNIT_FAIL(test, "Error creating bo: %pe\n", bo); + kfree(link); + break; + } + link->bo = bo; + list_add_tail(&link->link, &bos); + xe_bo_lock(bo, false); + + /* + * If we're low on swap entries, we can't shrink unless the bo + * is marked purgeable. + */ + if (get_nr_swap_pages() < (XE_BO_SHRINK_SIZE >> PAGE_SHIFT) * 128) { + struct xe_ttm_tt *xe_tt = + container_of(bo->ttm.ttm, typeof(*xe_tt), ttm); + long num_pages = xe_tt->ttm.num_pages; + + xe_tt->purgeable = true; + xe_shrinker_mod_pages(xe->mem.shrinker, -num_pages, + num_pages); + } + + mem_type = bo->ttm.resource->mem_type; + xe_bo_unlock(bo); + if (mem_type != XE_PL_TT) + KUNIT_FAIL(test, "Bo in incorrect memory type: %u\n", + bo->ttm.resource->mem_type); + cond_resched(); + if (signal_pending(current)) + break; + } + + /* Read back and destroy bos */ + list_for_each_entry_safe_reverse(link, next, &bos, link) { + static struct ttm_operation_ctx ctx = {.interruptible = true}; + struct xe_bo *bo = link->bo; + int ret; + + if (!signal_pending(current)) { + xe_bo_lock(bo, NULL); + ret = ttm_bo_validate(&bo->ttm, &tt_placement, &ctx); + xe_bo_unlock(bo); + if (ret && ret != -EINTR) + KUNIT_FAIL(test, "Validation failed: %pe\n", + ERR_PTR(ret)); + else if (ret) + interrupted++; + else + successful++; + } + xe_bo_put(link->bo); + list_del(&link->link); + kfree(link); + cond_resched(); + } + kunit_info(test, "Readbacks interrupted: %u successful: %u\n", + interrupted, successful); + + return 0; +} + +void xe_bo_shrink_kunit(struct kunit *test) +{ + xe_call_for_each_device(shrink_test_run_device); +} +EXPORT_SYMBOL_IF_KUNIT(xe_bo_shrink_kunit); diff --git a/drivers/gpu/drm/xe/tests/xe_bo_test.c b/drivers/gpu/drm/xe/tests/xe_bo_test.c index a324cde77db8..317fa923e287 100644 --- a/drivers/gpu/drm/xe/tests/xe_bo_test.c +++ b/drivers/gpu/drm/xe/tests/xe_bo_test.c @@ -10,6 +10,7 @@ static struct kunit_case xe_bo_tests[] = { KUNIT_CASE(xe_ccs_migrate_kunit), KUNIT_CASE(xe_bo_evict_kunit), + KUNIT_CASE_SLOW(xe_bo_shrink_kunit), {} }; diff --git a/drivers/gpu/drm/xe/tests/xe_bo_test.h b/drivers/gpu/drm/xe/tests/xe_bo_test.h index 0113ab45066a..7f44d14a45c5 100644 --- a/drivers/gpu/drm/xe/tests/xe_bo_test.h +++ b/drivers/gpu/drm/xe/tests/xe_bo_test.h @@ -10,5 +10,6 @@ struct kunit; void xe_ccs_migrate_kunit(struct kunit *test); void xe_bo_evict_kunit(struct kunit *test); +void xe_bo_shrink_kunit(struct kunit *test); #endif diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c index 03f7fe7acf8c..9a0ca2cab7b6 100644 --- a/drivers/gpu/drm/xe/xe_bo.c +++ b/drivers/gpu/drm/xe/xe_bo.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -25,6 +26,7 @@ #include "xe_pm.h" #include "xe_preempt_fence.h" #include "xe_res_cursor.h" +#include "xe_shrinker.h" #include "xe_trace.h" #include "xe_ttm_stolen_mgr.h" #include "xe_vm.h" @@ -278,11 +280,15 @@ static void xe_evict_flags(struct ttm_buffer_object *tbo, } } +/* struct xe_ttm_tt - Subclassed ttm_tt for xe */ struct xe_ttm_tt { struct ttm_tt ttm; - struct device *dev; + /** @xe - The xe device */ + struct xe_device *xe; struct sg_table sgt; struct sg_table *sg; + /** @purgeable - Whether the bo is purgeable (WONTNEED) */ + bool purgeable; }; static int xe_tt_map_sg(struct ttm_tt *tt) @@ -291,7 
+297,8 @@ static int xe_tt_map_sg(struct ttm_tt *tt) unsigned long num_pages = tt->num_pages; int ret; - XE_WARN_ON(tt->page_flags & TTM_TT_FLAG_EXTERNAL); + XE_WARN_ON((tt->page_flags & TTM_TT_FLAG_EXTERNAL) && + !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE)); if (xe_tt->sg) return 0; @@ -299,13 +306,13 @@ static int xe_tt_map_sg(struct ttm_tt *tt) ret = sg_alloc_table_from_pages_segment(&xe_tt->sgt, tt->pages, num_pages, 0, (u64)num_pages << PAGE_SHIFT, - xe_sg_segment_size(xe_tt->dev), + xe_sg_segment_size(xe_tt->xe->drm.dev), GFP_KERNEL); if (ret) return ret; xe_tt->sg = &xe_tt->sgt; - ret = dma_map_sgtable(xe_tt->dev, xe_tt->sg, DMA_BIDIRECTIONAL, + ret = dma_map_sgtable(xe_tt->xe->drm.dev, xe_tt->sg, DMA_BIDIRECTIONAL, DMA_ATTR_SKIP_CPU_SYNC); if (ret) { sg_free_table(xe_tt->sg); @@ -321,7 +328,7 @@ static void xe_tt_unmap_sg(struct ttm_tt *tt) struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm); if (xe_tt->sg) { - dma_unmap_sgtable(xe_tt->dev, xe_tt->sg, + dma_unmap_sgtable(xe_tt->xe->drm.dev, xe_tt->sg, DMA_BIDIRECTIONAL, 0); sg_free_table(xe_tt->sg); xe_tt->sg = NULL; @@ -336,21 +343,41 @@ struct sg_table *xe_bo_sg(struct xe_bo *bo) return xe_tt->sg; } +/* + * Account ttm pages against the device shrinker's shrinkable and + * purgeable counts. + */ +static void xe_ttm_tt_account(struct ttm_tt *tt, bool add) +{ + struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm); + long num_pages = tt->num_pages; + + if (!add) + num_pages = -num_pages; + + if (xe_tt->purgeable) + xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, 0, num_pages); + else + xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, num_pages, 0); +} + static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo, u32 page_flags) { struct xe_bo *bo = ttm_to_xe_bo(ttm_bo); struct xe_device *xe = xe_bo_device(bo); - struct xe_ttm_tt *tt; + struct xe_ttm_tt *xe_tt; + struct ttm_tt *tt; unsigned long extra_pages; enum ttm_caching caching; int err; - tt = kzalloc(sizeof(*tt), GFP_KERNEL); - if (!tt) + xe_tt = kzalloc(sizeof(*xe_tt), GFP_KERNEL); + if (!xe_tt) return NULL; - tt->dev = xe->drm.dev; + tt = &xe_tt->ttm; + xe_tt->xe = xe; extra_pages = 0; if (xe_bo_needs_ccs_pages(bo)) @@ -378,42 +405,101 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo, (xe->info.graphics_verx100 >= 1270 && bo->flags & XE_BO_FLAG_PAGETABLE)) caching = ttm_write_combined; - err = ttm_tt_init(&tt->ttm, &bo->ttm, page_flags, caching, extra_pages); + if (ttm_bo->type != ttm_bo_type_sg) + page_flags |= TTM_TT_FLAG_EXTERNAL | TTM_TT_FLAG_EXTERNAL_MAPPABLE; + + err = ttm_tt_init(tt, &bo->ttm, page_flags, caching, extra_pages); if (err) { - kfree(tt); + kfree(xe_tt); + return NULL; + } + + tt->backup = ttm_backup_shmem_create(tt->num_pages << PAGE_SHIFT); + if (IS_ERR(tt->backup)) { + ttm_tt_fini(tt); + kfree(xe_tt); return NULL; } - return &tt->ttm; + return tt; } static int xe_ttm_tt_populate(struct ttm_device *ttm_dev, struct ttm_tt *tt, struct ttm_operation_ctx *ctx) { + struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm); int err; /* * dma-bufs are not populated with pages, and the dma- * addresses are set up when moved to XE_PL_TT. 
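+	 * tts that additionally carry TTM_TT_FLAG_EXTERNAL_MAPPABLE are an
+	 * exception: they are pool-allocated below so that they can later be
+	 * shrunken via the pool backup path.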
*/ - if (tt->page_flags & TTM_TT_FLAG_EXTERNAL) + if ((tt->page_flags & TTM_TT_FLAG_EXTERNAL) && + !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE)) return 0; err = ttm_pool_alloc(&ttm_dev->pool, tt, ctx); if (err) return err; - return err; + xe_tt->purgeable = false; + xe_ttm_tt_account(tt, true); + + return 0; } static void xe_ttm_tt_unpopulate(struct ttm_device *ttm_dev, struct ttm_tt *tt) { - if (tt->page_flags & TTM_TT_FLAG_EXTERNAL) + if ((tt->page_flags & TTM_TT_FLAG_EXTERNAL) && + !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE)) return; xe_tt_unmap_sg(tt); - return ttm_pool_free(&ttm_dev->pool, tt); + ttm_pool_free(&ttm_dev->pool, tt); + xe_ttm_tt_account(tt, false); +} + +/** + * xe_bo_shrink() - Try to shrink an xe bo. + * @walk: - The walk parameters + * @bo: The TTM buffer object + * @purge: Only consider purgeable bos. + * @writeback: Try to write back to persistent storage. + * + * Try to shrink- or purge a bo, and if it succeeds, unmap dma. + * Note that we need to be able to handle also non xe bos + * (ghost bos), but only if the struct ttm_tt is embedded in + * a struct xe_ttm_tt. + * + * Return: The number of pages shrunken or purged, or negative error + * code on failure. + */ +long xe_bo_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo, + bool purge, bool writeback) +{ + struct ttm_tt *tt = bo->ttm; + struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm); + struct ttm_place place = {.mem_type = bo->resource->mem_type}; + struct xe_device *xe = xe_tt->xe; + long lret; + + if (!tt || !ttm_tt_is_populated(tt) || + !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE) || + (purge && !xe_tt->purgeable)) + return 0L; + + if (!ttm_bo_eviction_valuable(bo, &place)) + return 0L; + + lret = ttm_bo_try_shrink(walk, bo, xe_tt->purgeable, writeback); + if (lret > 0) { + xe_assert(xe, !ttm_tt_is_populated(tt)); + + xe_ttm_tt_account(tt, false); + } + + return lret; } static void xe_ttm_tt_destroy(struct ttm_device *ttm_dev, struct ttm_tt *tt) @@ -1229,6 +1315,7 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo, struct ttm_operation_ctx ctx = { .interruptible = true, .no_wait_gpu = false, + .gfp_retry_mayfail = true, }; struct ttm_placement *placement; uint32_t alignment; @@ -1672,6 +1759,8 @@ int xe_bo_pin_external(struct xe_bo *bo) } ttm_bo_pin(&bo->ttm); + if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm)) + xe_ttm_tt_account(bo->ttm.ttm, false); /* * FIXME: If we always use the reserve / unreserve functions for locking @@ -1730,6 +1819,8 @@ int xe_bo_pin(struct xe_bo *bo) } ttm_bo_pin(&bo->ttm); + if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm)) + xe_ttm_tt_account(bo->ttm.ttm, false); /* * FIXME: If we always use the reserve / unreserve functions for locking @@ -1765,6 +1856,9 @@ void xe_bo_unpin_external(struct xe_bo *bo) } ttm_bo_unpin(&bo->ttm); + if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm)) + xe_ttm_tt_account(bo->ttm.ttm, true); + /* * FIXME: If we always use the reserve / unreserve functions for locking @@ -1794,6 +1888,8 @@ void xe_bo_unpin(struct xe_bo *bo) } ttm_bo_unpin(&bo->ttm); + if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm)) + xe_ttm_tt_account(bo->ttm.ttm, true); } /** diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h index 6de894c728f5..220e71086e65 100644 --- a/drivers/gpu/drm/xe/xe_bo.h +++ b/drivers/gpu/drm/xe/xe_bo.h @@ -63,6 +63,7 @@ #define XE_BO_PROPS_INVALID (-1) struct sg_table; +struct xe_ttm_lru_walk; struct xe_bo *xe_bo_alloc(void); void xe_bo_free(struct xe_bo 
*bo); @@ -315,6 +316,9 @@ static inline unsigned int xe_sg_segment_size(struct device *dev) #define i915_gem_object_flush_if_display(obj) ((void)(obj)) +long xe_bo_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo, + bool purge, bool writeback); + #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST) /** * xe_bo_is_mem_type - Whether the bo currently resides in the given diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c index 8da90934c900..7080558adb80 100644 --- a/drivers/gpu/drm/xe/xe_device.c +++ b/drivers/gpu/drm/xe/xe_device.c @@ -42,6 +42,7 @@ #include "xe_pcode.h" #include "xe_pm.h" #include "xe_query.h" +#include "xe_shrinker.h" #include "xe_sriov.h" #include "xe_tile.h" #include "xe_ttm_stolen_mgr.h" @@ -239,6 +240,9 @@ static void xe_device_destroy(struct drm_device *dev, void *dummy) if (xe->unordered_wq) destroy_workqueue(xe->unordered_wq); + if (!IS_ERR_OR_NULL(xe->mem.shrinker)) + xe_shrinker_destroy(xe->mem.shrinker); + ttm_device_fini(&xe->ttm); } @@ -268,6 +272,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev, if (err) goto err; + xe->mem.shrinker = xe_shrinker_create(xe); + if (IS_ERR(xe->mem.shrinker)) + return ERR_CAST(xe->mem.shrinker); + xe->info.devid = pdev->device; xe->info.revid = pdev->revision; xe->info.force_execlist = xe_modparam.force_execlist; diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h index 5c5e36de452a..fc4f4d17a89f 100644 --- a/drivers/gpu/drm/xe/xe_device_types.h +++ b/drivers/gpu/drm/xe/xe_device_types.h @@ -319,6 +319,8 @@ struct xe_device { struct xe_mem_region vram; /** @mem.sys_mgr: system TTM manager */ struct ttm_resource_manager sys_mgr; + /** @mem.sys_mgr: system memory shrinker. */ + struct xe_shrinker *shrinker; } mem; /** @sriov: device level virtualization data */ diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c new file mode 100644 index 000000000000..4913cba7700b --- /dev/null +++ b/drivers/gpu/drm/xe/xe_shrinker.c @@ -0,0 +1,224 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2024 Intel Corporation + */ + +#include +#include + +#include +#include + +#include "xe_bo.h" +#include "xe_shrinker.h" + +/** + * struct xe_shrinker - per-device shrinker + * @xe: Back pointer to the device. + * @lock: Lock protecting accounting. + * @shrinkable_pages: Number of pages that are currently shrinkable. + * @purgeable_pages: Number of pages that are currently purgeable. + * @shrink: Pointer to the mm shrinker. + */ +struct xe_shrinker { + struct xe_device *xe; + rwlock_t lock; + long shrinkable_pages; + long purgeable_pages; + struct shrinker *shrink; +}; + +/** + * struct xe_shrink_lru_walk - lru_walk subclass for shrinker + * @walk: The embedded base class. + * @xe: Pointer to the xe device. + * @purge: Purgeable only request from the srinker. + * @writeback: Try to write back to persistent storage. + */ +struct xe_shrink_lru_walk { + struct ttm_lru_walk walk; + struct xe_device *xe; + bool purge; + bool writeback; +}; + +static struct xe_shrinker *to_xe_shrinker(struct shrinker *shrink) +{ + return shrink->private_data; +} + +static struct xe_shrink_lru_walk * +to_xe_shrink_lru_walk(struct ttm_lru_walk *walk) +{ + return container_of(walk, struct xe_shrink_lru_walk, walk); +} + +/** + * xe_shrinker_mod_pages() - Modify shrinker page accounting + * @shrinker: Pointer to the struct xe_shrinker. + * @shrinkable: Shrinkable pages delta. May be negative. + * @purgeable: Purgeable page delta. May be negative. 
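+ * Purgeable pages are accounted separately so that the scan callback can
+ * attempt a purge-only pass before falling back to backing pages up.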
+ * + * Modifies the shrinkable and purgeable pages accounting. + */ +void +xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgeable) +{ + write_lock(&shrinker->lock); + shrinker->shrinkable_pages += shrinkable; + shrinker->purgeable_pages += purgeable; + write_unlock(&shrinker->lock); +} + +static long xe_shrinker_process_bo(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo) +{ + struct xe_shrink_lru_walk *shrink_walk = to_xe_shrink_lru_walk(walk); + + return xe_bo_shrink(walk, bo, shrink_walk->purge, shrink_walk->writeback); +} + +static long xe_shrinker_walk(struct xe_shrink_lru_walk *shrink_walk, long target) +{ + struct xe_device *xe = shrink_walk->xe; + struct ttm_resource_manager *man; + unsigned int mem_type; + long sofar = 0; + long lret; + + for (mem_type = XE_PL_SYSTEM; mem_type <= XE_PL_TT; ++mem_type) { + man = ttm_manager_type(&xe->ttm, mem_type); + if (!man || !man->use_tt) + continue; + + lret = ttm_lru_walk_for_evict(&shrink_walk->walk, &xe->ttm, man, target); + if (lret < 0) + return lret; + + sofar += lret; + if (sofar >= target) + break; + } + + return sofar; +} + +static unsigned long +xe_shrinker_count(struct shrinker *shrink, struct shrink_control *sc) +{ + struct xe_shrinker *shrinker = to_xe_shrinker(shrink); + unsigned long num_pages; + + num_pages = get_nr_swap_pages(); + read_lock(&shrinker->lock); + num_pages = min_t(unsigned long, num_pages, shrinker->shrinkable_pages); + num_pages += shrinker->purgeable_pages; + read_unlock(&shrinker->lock); + + return num_pages ? num_pages : SHRINK_EMPTY; +} + +static const struct ttm_lru_walk_ops xe_shrink_ops = { + .process_bo = xe_shrinker_process_bo, +}; + +static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc) +{ + struct xe_shrinker *shrinker = to_xe_shrinker(shrink); + bool is_kswapd = current_is_kswapd(); + struct ttm_operation_ctx ctx = { + .interruptible = false, + .no_wait_gpu = !is_kswapd, + }; + unsigned long nr_to_scan, freed = 0; + struct xe_shrink_lru_walk shrink_walk = { + .walk = { + .ops = &xe_shrink_ops, + .ctx = &ctx, + .trylock_only = true, + }, + .xe = shrinker->xe, + .purge = true, + .writeback = is_kswapd, + }; + bool purgeable; + long ret; + + sc->nr_scanned = 0; + nr_to_scan = sc->nr_to_scan; + + read_lock(&shrinker->lock); + purgeable = !!shrinker->purgeable_pages; + read_unlock(&shrinker->lock); + + while (purgeable && freed < nr_to_scan) { + ret = xe_shrinker_walk(&shrink_walk, nr_to_scan); + if (ret <= 0) + break; + + freed += ret; + } + + sc->nr_scanned = freed; + if (freed < nr_to_scan) + nr_to_scan -= freed; + else + nr_to_scan = 0; + if (!nr_to_scan) + return freed ? freed : SHRINK_STOP; + + shrink_walk.purge = false; + nr_to_scan = sc->nr_to_scan; + while (freed < nr_to_scan) { + ret = xe_shrinker_walk(&shrink_walk, nr_to_scan); + if (ret <= 0) + break; + + freed += ret; + } + + sc->nr_scanned = freed; + + return freed ? freed : SHRINK_STOP; +} + +/** + * xe_shrinker_create() - Create an xe per-device shrinker + * @xe: Pointer to the xe device. + * + * Returns: A pointer to the created shrinker on success, + * Negative error code on failure. 
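+ *
+ * The shrinker is registered with the MM core via shrinker_register() and
+ * must be torn down again with xe_shrinker_destroy().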
+ */ +struct xe_shrinker *xe_shrinker_create(struct xe_device *xe) +{ + struct xe_shrinker *shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL); + + if (!shrinker) + return ERR_PTR(-ENOMEM); + + shrinker->shrink = shrinker_alloc(0, "xe system shrinker"); + if (!shrinker->shrink) { + kfree(shrinker); + return ERR_PTR(-ENOMEM); + } + + shrinker->xe = xe; + rwlock_init(&shrinker->lock); + shrinker->shrink->count_objects = xe_shrinker_count; + shrinker->shrink->scan_objects = xe_shrinker_scan; + shrinker->shrink->private_data = shrinker; + shrinker_register(shrinker->shrink); + + return shrinker; +} + +/** + * xe_shrinker_destroy() - Destroy an xe per-device shrinker + * @shrinker: Pointer to the shrinker to destroy. + */ +void xe_shrinker_destroy(struct xe_shrinker *shrinker) +{ + xe_assert(shrinker->xe, !shrinker->shrinkable_pages); + xe_assert(shrinker->xe, !shrinker->purgeable_pages); + shrinker_free(shrinker->shrink); + kfree(shrinker); +} diff --git a/drivers/gpu/drm/xe/xe_shrinker.h b/drivers/gpu/drm/xe/xe_shrinker.h new file mode 100644 index 000000000000..28a038f4fcbf --- /dev/null +++ b/drivers/gpu/drm/xe/xe_shrinker.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2024 Intel Corporation + */ + +#ifndef _XE_SHRINKER_H_ +#define _XE_SHRINKER_H_ + +struct xe_shrinker; +struct xe_device; + +void xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgeable); + +struct xe_shrinker *xe_shrinker_create(struct xe_device *xe); + +void xe_shrinker_destroy(struct xe_shrinker *shrinker); + +#endif diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index 148f49f625e4..deaedfb060ed 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -222,6 +222,9 @@ struct ttm_lru_walk { long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, struct ttm_resource_manager *man, long target); +long ttm_bo_try_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo, + bool purge, bool writeback); + /** * ttm_bo_get - reference a struct ttm_buffer_object * From patchwork Tue May 21 07:16:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668948 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 598D0C25B74 for ; Tue, 21 May 2024 07:17:50 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9EB6510E3F4; Tue, 21 May 2024 07:17:47 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="UTUqp1xM"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0795310E3AB; Tue, 21 May 2024 07:17:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275845; x=1747811845; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=PLYRnoYqgg1L6orz8LmegrmBhEJAKmkEgady1KTsHxc=; b=UTUqp1xMekI9332S/OTvs3caCZ9TocwmiYUIYoZRpW93RV77q/ejUdnd 
7JyU76Or2br3DiH1qyKQesAi8VslxECyACwM4KK6elj4xe/DCFrZb3tAe oCHGpxWfKOqGax9piFSl3ZOkiJCGlUv3GtOqYiRuLTcyEk4fASI4kvexb pxbajesJ5cgz1XvqHvEUMnnPzt5CA/DD5baPVPMoz0JE+JtXTIt7VHooC 5t3QR19bufkcAM6bTDhId5OcQzrp9ZEz215SE6RZR0cJoGCASELwjGIE3 RoVmof3cBCrpo10fyiPgnUw+47zDQEkdz49TOlIrgoVN38s5tXniqy2CZ A==; X-CSE-ConnectionGUID: UIU3Soo/Tdasp0lGOIwkKA== X-CSE-MsgGUID: suf+bEbxSqiBI3FWPMipUw== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393484" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393484" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:24 -0700 X-CSE-ConnectionGUID: CcyjFvGgQTuvTWYbgI77cg== X-CSE-MsgGUID: tIKzPYFfS4WPK/amzZ6b8g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336772" Received: from unknown (HELO fedora..) ([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:20 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Subject: [RFC PATCH v3 12/21] dma-buf/dma-resv: Introduce dma_resv_trylock_ctx() Date: Tue, 21 May 2024 09:16:30 +0200 Message-ID: <20240521071639.77614-13-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" For the drm_exec_trylock() functionality, there is a need to be able to trylock a dma-resv object as part of a drm_exec transaction. Therefore expose a variant of dma_resv_trylock that also takes a struct ww_acquire_ctx parameter. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Cc: Signed-off-by: Thomas Hellström --- include/linux/dma-resv.h | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 8d0e34dad446..68dae8f2a22c 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -405,6 +405,27 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj, return ww_mutex_lock_slow_interruptible(&obj->lock, ctx); } +/** + * dma_resv_trylock_ctx - trylock the reservation object + * @obj: the reservation object + * @ctx: The ww acquire context or NULL. + * + * Tries to lock the reservation object for exclusive access and modification. + * Note, that the lock is only against other writers, readers will run + * concurrently with a writer under RCU. The seqlock is used to notify readers + * if they overlap with a writer. The context parameter ensures that other + * ww transactions can perform deadlock backoff if necessary, and that + * subsequent attempts to dma_resv_lock() @obj for @ctx will return + * -EALREADY. + * + * Return: true if the lock was acquired, false otherwise. 
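+ * Unlike dma_resv_lock(), this never sleeps, so a false return simply
+ * means the lock could not be taken without blocking and the caller may
+ * retry or fall back to a sleeping lock.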
+ */ +static inline bool __must_check +dma_resv_trylock_ctx(struct dma_resv *obj, struct ww_acquire_ctx *ctx) +{ + return ww_mutex_trylock(&obj->lock, ctx); +} + /** * dma_resv_trylock - trylock the reservation object * @obj: the reservation object @@ -421,7 +442,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj, */ static inline bool __must_check dma_resv_trylock(struct dma_resv *obj) { - return ww_mutex_trylock(&obj->lock, NULL); + return dma_resv_trylock_ctx(obj, NULL); } /** From patchwork Tue May 21 07:16:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668956 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 424D3C25B75 for ; Tue, 21 May 2024 07:18:10 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8673D10E3C6; Tue, 21 May 2024 07:18:06 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="bcEiVf/N"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8102210E3C9; Tue, 21 May 2024 07:17:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275845; x=1747811845; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lp89lmnbxGsUdT5iRPNiUBq+/aaH1eNQO5ZGbaO+Vbc=; b=bcEiVf/NMFw0wKrYNDZqou7psNvu2SBHYhqVFmfsLXCmSvq1ZvdVL9M8 eE0r4y+eN7wwUisz1rQDw5DHZ7Vr+rRze7UwtAcYj/8EGMxmBoayTWYXf Isw5URaryIx3sg/spwXQi7S2t65T+7N4nVLsZVJbp3bPr7wqeBAvO4FJl hwPFbBE1PNwrVVIgCLA3Sq3dL+btB3nyHowNda4OYIZeCvZXut0OZi3Fr tVHdFKqi3s4J04CRRd80NqeWL35FePtplQ1stdH0DYz6Jk75YgRrNTC11 LmadXyx2NOI9J7G170mtDl7d+d+YzGskLJMzADJtQpDX6uLH8DRpaId1F Q==; X-CSE-ConnectionGUID: SkI/fNq9T56UNj1G6mqFHA== X-CSE-MsgGUID: +5NePlibSaO0zfHw3jfHaA== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393486" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393486" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:24 -0700 X-CSE-ConnectionGUID: RMOpe9QnRYilzAyW7Wyy8Q== X-CSE-MsgGUID: 8yM23/UASbOE6cigaaharw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336773" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:22 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [RFC PATCH v3 13/21] drm/exec: Rework contended locking Date: Tue, 21 May 2024 09:16:31 +0200 Message-ID: <20240521071639.77614-14-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" If contention and backoff occurs during a drm_exec ww transaction, the contended lock is not locked again until the next orinary attempt to lock a dma_resv lock. However, with the introduction of drm_exec_trylock(), that doesn't work, since the locking of the contended lock needs to be a sleeping lock. Neither can we ignore locking the contended lock during a trylock since that would violate at least the ww_mutex annotations. So resolve this by actually locking the contended lock during drm_exec_retry_on_contention(). However, this introduces a new point of failure since locking the contended lock may return -EINTR. Hence drm_exec_retry_on_contention() must take an error parameter and also return a value indicating success. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 16 ++++----- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 6 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c | 4 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 8 ++--- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 8 ++--- drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c | 4 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c | 8 ++--- drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 2 +- drivers/gpu/drm/drm_exec.c | 35 ++++++++++++++----- drivers/gpu/drm/drm_gpuvm.c | 8 ++--- drivers/gpu/drm/imagination/pvr_job.c | 2 +- drivers/gpu/drm/msm/msm_gem_submit.c | 2 +- drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +- drivers/gpu/drm/tests/drm_exec_test.c | 12 +++---- drivers/gpu/drm/xe/xe_gt_pagefault.c | 4 +-- drivers/gpu/drm/xe/xe_vm.c | 10 +++--- include/drm/drm_exec.h | 23 +++++++++--- 17 files changed, 92 insertions(+), 62 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index e4d4e55c08ad..4a08a692aa1f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -1152,12 +1152,12 @@ static int reserve_bo_and_vm(struct kgd_mem *mem, drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); drm_exec_until_all_locked(&ctx->exec) { ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2); - drm_exec_retry_on_contention(&ctx->exec); + ret = drm_exec_retry_on_contention(&ctx->exec, ret); if (unlikely(ret)) goto error; ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1); - drm_exec_retry_on_contention(&ctx->exec); + ret = drm_exec_retry_on_contention(&ctx->exec, ret); if (unlikely(ret)) goto error; } @@ -1199,14 +1199,14 @@ static int 
reserve_bo_and_cond_vms(struct kgd_mem *mem, ret = amdgpu_vm_lock_pd(entry->bo_va->base.vm, &ctx->exec, 2); - drm_exec_retry_on_contention(&ctx->exec); + ret = drm_exec_retry_on_contention(&ctx->exec, ret); if (unlikely(ret)) goto error; ++ctx->n_vms; } ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1); - drm_exec_retry_on_contention(&ctx->exec); + ret = drm_exec_retry_on_contention(&ctx->exec, ret); if (unlikely(ret)) goto error; } @@ -2619,7 +2619,7 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info) list_for_each_entry(peer_vm, &process_info->vm_list_head, vm_list_node) { ret = amdgpu_vm_lock_pd(peer_vm, &exec, 2); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); if (unlikely(ret)) goto unreserve_out; } @@ -2631,7 +2631,7 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info) gobj = &mem->bo->tbo.base; ret = drm_exec_prepare_obj(&exec, gobj, 1); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); if (unlikely(ret)) goto unreserve_out; } @@ -2875,7 +2875,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence __rcu * list_for_each_entry(peer_vm, &process_info->vm_list_head, vm_list_node) { ret = amdgpu_vm_lock_pd(peer_vm, &exec, 2); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); if (unlikely(ret)) { pr_err("Locking VM PD failed, ret: %d\n", ret); goto ttm_reserve_fail; @@ -2891,7 +2891,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence __rcu * gobj = &mem->bo->tbo.base; ret = drm_exec_prepare_obj(&exec, gobj, 1); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); if (unlikely(ret)) { pr_err("drm_exec_prepare_obj failed, ret: %d\n", ret); goto ttm_reserve_fail; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index ec888fc6ead8..299e46a6d934 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -897,7 +897,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, drm_exec_until_all_locked(&p->exec) { r = amdgpu_vm_lock_pd(&fpriv->vm, &p->exec, 1 + p->gang_size); - drm_exec_retry_on_contention(&p->exec); + r = drm_exec_retry_on_contention(&p->exec, r); if (unlikely(r)) goto out_free_user_pages; @@ -905,7 +905,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, /* One fence for TTM and one for each CS job */ r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base, 1 + p->gang_size); - drm_exec_retry_on_contention(&p->exec); + r = drm_exec_retry_on_contention(&p->exec, r); if (unlikely(r)) goto out_free_user_pages; @@ -915,7 +915,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, if (p->uf_bo) { r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base, 1 + p->gang_size); - drm_exec_retry_on_contention(&p->exec); + r = drm_exec_retry_on_contention(&p->exec, r); if (unlikely(r)) goto out_free_user_pages; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c index cfdf558b48b6..8b2b86c7a6c5 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c @@ -74,7 +74,7 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm, r = amdgpu_vm_lock_pd(vm, &exec, 0); if (likely(!r)) r = drm_exec_lock_obj(&exec, &bo->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if 
(unlikely(r)) { DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r); goto error; @@ -114,7 +114,7 @@ int amdgpu_unmap_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm, r = amdgpu_vm_lock_pd(vm, &exec, 0); if (likely(!r)) r = drm_exec_lock_obj(&exec, &bo->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) { DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r); goto error; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index 67c234bcf89f..17e16c971e21 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -239,12 +239,12 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj, drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0); drm_exec_until_all_locked(&exec) { r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto out_unlock; r = amdgpu_vm_lock_pd(vm, &exec, 0); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto out_unlock; } @@ -776,13 +776,13 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data, drm_exec_until_all_locked(&exec) { if (gobj) { r = drm_exec_lock_obj(&exec, gobj); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error; } r = amdgpu_vm_lock_pd(&fpriv->vm, &exec, 2); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c index 5ca5c47ab54e..1b1a5147606e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c @@ -1221,12 +1221,12 @@ int amdgpu_mes_ctx_map_meta_data(struct amdgpu_device *adev, drm_exec_until_all_locked(&exec) { r = drm_exec_lock_obj(&exec, &ctx_data->meta_data_obj->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error_fini_exec; r = amdgpu_vm_lock_pd(vm, &exec, 0); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error_fini_exec; } @@ -1292,12 +1292,12 @@ int amdgpu_mes_ctx_unmap_meta_data(struct amdgpu_device *adev, drm_exec_until_all_locked(&exec) { r = drm_exec_lock_obj(&exec, &ctx_data->meta_data_obj->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto out_unlock; r = amdgpu_vm_lock_pd(vm, &exec, 0); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto out_unlock; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c index e22cb2b5cd92..72b8213e352c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c @@ -77,7 +77,7 @@ int amdgpu_seq64_map(struct amdgpu_device *adev, struct amdgpu_vm *vm, r = amdgpu_vm_lock_pd(vm, &exec, 0); if (likely(!r)) r = drm_exec_lock_obj(&exec, &bo->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error; } @@ -138,7 +138,7 @@ void amdgpu_seq64_unmap(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv) r = amdgpu_vm_lock_pd(vm, &exec, 0); if (likely(!r)) r = drm_exec_lock_obj(&exec, &bo->tbo.base); - 
drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c index e01c1c8e64c4..63392ce43945 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c @@ -89,12 +89,12 @@ static int map_ring_data(struct amdgpu_device *adev, struct amdgpu_vm *vm, drm_exec_init(&exec, 0, 0); drm_exec_until_all_locked(&exec) { r = drm_exec_lock_obj(&exec, &bo->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error_fini_exec; r = amdgpu_vm_lock_pd(vm, &exec, 0); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto error_fini_exec; } @@ -152,12 +152,12 @@ static int unmap_ring_data(struct amdgpu_device *adev, struct amdgpu_vm *vm, drm_exec_init(&exec, 0, 0); drm_exec_until_all_locked(&exec) { r = drm_exec_lock_obj(&exec, &bo->tbo.base); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto out_unlock; r = amdgpu_vm_lock_pd(vm, &exec, 0); - drm_exec_retry_on_contention(&exec); + r = drm_exec_retry_on_contention(&exec, r); if (unlikely(r)) goto out_unlock; } diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c index 386875e6eb96..a3aa7fd22f6a 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c @@ -1499,7 +1499,7 @@ static int svm_range_reserve_bos(struct svm_validate_context *ctx, bool intr) vm = drm_priv_to_vm(pdd->drm_priv); r = amdgpu_vm_lock_pd(vm, &ctx->exec, 2); - drm_exec_retry_on_contention(&ctx->exec); + r = drm_exec_retry_on_contention(&ctx->exec, r); if (unlikely(r)) { pr_debug("failed %d to reserve bo\n", r); goto unreserve_out; diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c index 2da094bdf8a4..3770a5d30213 100644 --- a/drivers/gpu/drm/drm_exec.c +++ b/drivers/gpu/drm/drm_exec.c @@ -28,12 +28,12 @@ * drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT); * drm_exec_until_all_locked(&exec) { * ret = drm_exec_prepare_obj(&exec, boA, 1); - * drm_exec_retry_on_contention(&exec); + * ret = drm_exec_retry_on_contention(&exec, ret); * if (ret) * goto error; * * ret = drm_exec_prepare_obj(&exec, boB, 1); - * drm_exec_retry_on_contention(&exec); + * ret = drm_exec_retry_on_contention(&exec, ret); * if (ret) * goto error; * } @@ -48,7 +48,8 @@ */ /* Dummy value used to initially enter the retry loop */ -#define DRM_EXEC_DUMMY ((void *)~0) +#define DRM_EXEC_DUMMY ERR_PTR(-ESTALE) +#define DRM_EXEC_CONTENDED ERR_PTR(-EDEADLK) /* Unlock all objects and drop references */ static void drm_exec_unlock_all(struct drm_exec *exec) @@ -131,8 +132,7 @@ bool drm_exec_cleanup(struct drm_exec *exec) return true; } - drm_exec_unlock_all(exec); - exec->num_objects = 0; + exec->contended = NULL; return true; } EXPORT_SYMBOL(drm_exec_cleanup); @@ -194,6 +194,27 @@ static int drm_exec_lock_contended(struct drm_exec *exec) return ret; } +/** + * drm_exec_handle_contended() - Perform cleanup before a ww transaction restart + * @exec: Pointer to the drm_exec object. + * + * Unlocks all held resvs and re-locks the contended object. + * + * Return: 0 on success, negative error code on failure. 
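+ * A negative return here, for example -EINTR while waiting for the
+ * contended lock, is expected to abort the drm_exec_until_all_locked()
+ * loop rather than trigger another retry.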
+ */ +int drm_exec_handle_contended(struct drm_exec *exec) +{ + int ret; + + drm_exec_unlock_all(exec); + exec->num_objects = 0; + ret = drm_exec_lock_contended(exec); + exec->contended = DRM_EXEC_CONTENDED; + + return ret; +} +EXPORT_SYMBOL(drm_exec_handle_contended); + /** * drm_exec_lock_obj - lock a GEM object for use * @exec: the drm_exec object with the state @@ -209,10 +230,6 @@ int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj) { int ret; - ret = drm_exec_lock_contended(exec); - if (unlikely(ret)) - return ret; - if (exec->prelocked == obj) { drm_gem_object_put(exec->prelocked); exec->prelocked = NULL; diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c index f9eb56f24bef..0923d6ae18e2 100644 --- a/drivers/gpu/drm/drm_gpuvm.c +++ b/drivers/gpu/drm/drm_gpuvm.c @@ -1254,18 +1254,18 @@ drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec) drm_exec_until_all_locked(exec) { ret = drm_gpuvm_prepare_vm(gpuvm, exec, num_fences); - drm_exec_retry_on_contention(exec); + ret = drm_exec_retry_on_contention(exec, ret); if (ret) goto err; ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences); - drm_exec_retry_on_contention(exec); + ret = drm_exec_retry_on_contention(exec, ret); if (ret) goto err; if (vm_exec->extra.fn) { ret = vm_exec->extra.fn(vm_exec); - drm_exec_retry_on_contention(exec); + ret = drm_exec_retry_on_contention(exec, ret); if (ret) goto err; } @@ -1346,7 +1346,7 @@ drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec, drm_exec_until_all_locked(exec) { ret = drm_gpuvm_prepare_range(gpuvm, exec, addr, range, vm_exec->num_fences); - drm_exec_retry_on_contention(exec); + ret = drm_exec_retry_on_contention(exec, ret); if (ret) goto err; } diff --git a/drivers/gpu/drm/imagination/pvr_job.c b/drivers/gpu/drm/imagination/pvr_job.c index 78c2f3c6dce0..6e0ce6c4576c 100644 --- a/drivers/gpu/drm/imagination/pvr_job.c +++ b/drivers/gpu/drm/imagination/pvr_job.c @@ -574,7 +574,7 @@ prepare_job_resvs_for_each(struct drm_exec *exec, struct pvr_job_data *job_data, drm_exec_until_all_locked(exec) { int err = jobs_lock_all_objs(exec, job_data, job_count); - drm_exec_retry_on_contention(exec); + err = drm_exec_retry_on_contention(exec, err); if (err) return err; } diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index fba78193127d..01992b43ea4b 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -259,7 +259,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit) for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; ret = drm_exec_prepare_obj(&submit->exec, obj, 1); - drm_exec_retry_on_contention(&submit->exec); + ret = drm_exec_retry_on_contention(&submit->exec, ret); if (ret) goto error; } diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index ee02cd833c5e..0c871634fdfb 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -1350,7 +1350,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job, drm_exec_init(exec, vme->flags, 0); drm_exec_until_all_locked(exec) { ret = bind_lock_validate(job, exec, vme->num_fences); - drm_exec_retry_on_contention(exec); + ret = drm_exec_retry_on_contention(exec, ret); if (ret) { op = list_last_op(&bind_job->ops); goto unwind; diff --git a/drivers/gpu/drm/tests/drm_exec_test.c b/drivers/gpu/drm/tests/drm_exec_test.c index 81f928a429ba..28558fdb08df 100644 --- 
a/drivers/gpu/drm/tests/drm_exec_test.c +++ b/drivers/gpu/drm/tests/drm_exec_test.c @@ -63,7 +63,7 @@ static void test_lock(struct kunit *test) drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); drm_exec_until_all_locked(&exec) { ret = drm_exec_lock_obj(&exec, &gobj); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); KUNIT_EXPECT_EQ(test, ret, 0); if (ret) break; @@ -83,14 +83,14 @@ static void test_lock_unlock(struct kunit *test) drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); drm_exec_until_all_locked(&exec) { ret = drm_exec_lock_obj(&exec, &gobj); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); KUNIT_EXPECT_EQ(test, ret, 0); if (ret) break; drm_exec_unlock_obj(&exec, &gobj); ret = drm_exec_lock_obj(&exec, &gobj); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); KUNIT_EXPECT_EQ(test, ret, 0); if (ret) break; @@ -110,13 +110,13 @@ static void test_duplicates(struct kunit *test) drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0); drm_exec_until_all_locked(&exec) { ret = drm_exec_lock_obj(&exec, &gobj); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); KUNIT_EXPECT_EQ(test, ret, 0); if (ret) break; ret = drm_exec_lock_obj(&exec, &gobj); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); KUNIT_EXPECT_EQ(test, ret, 0); if (ret) break; @@ -137,7 +137,7 @@ static void test_prepare(struct kunit *test) drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); drm_exec_until_all_locked(&exec) { ret = drm_exec_prepare_obj(&exec, &gobj, 1); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); KUNIT_EXPECT_EQ(test, ret, 0); if (ret) break; diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c index 040dd142c49c..20ec1ab1b52d 100644 --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c @@ -200,7 +200,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf) drm_exec_init(&exec, 0, 0); drm_exec_until_all_locked(&exec) { ret = xe_pf_begin(&exec, vma, atomic, tile->id); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); if (ret) goto unlock_dma_resv; @@ -543,7 +543,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc) drm_exec_init(&exec, 0, 0); drm_exec_until_all_locked(&exec) { ret = xe_pf_begin(&exec, vma, true, tile->id); - drm_exec_retry_on_contention(&exec); + ret = drm_exec_retry_on_contention(&exec, ret); if (ret) break; } diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c index e2ec148c9c33..335524e803e7 100644 --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -501,7 +501,7 @@ static void preempt_rebind_work_func(struct work_struct *w) bool done = false; err = xe_preempt_work_begin(&exec, vm, &done); - drm_exec_retry_on_contention(&exec); + err = drm_exec_retry_on_contention(&exec, err); if (err || done) { drm_exec_fini(&exec); if (err && xe_vm_validate_should_retry(&exec, err, &end)) @@ -1052,7 +1052,7 @@ static void xe_vma_destroy_unlocked(struct xe_vma *vma) drm_exec_init(&exec, 0, 0); drm_exec_until_all_locked(&exec) { err = xe_vm_lock_vma(&exec, vma); - drm_exec_retry_on_contention(&exec); + err = drm_exec_retry_on_contention(&exec, err); if (XE_WARN_ON(err)) break; } @@ -2148,11 +2148,11 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op, err = 0; if (!bo->vm) { err 
= drm_exec_lock_obj(&exec, xe_vm_obj(vm)); - drm_exec_retry_on_contention(&exec); + err = drm_exec_retry_on_contention(&exec, err); } if (!err) { err = drm_exec_lock_obj(&exec, &bo->ttm.base); - drm_exec_retry_on_contention(&exec); + err = drm_exec_retry_on_contention(&exec, err); } if (err) { drm_exec_fini(&exec); @@ -2884,7 +2884,7 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm, DRM_EXEC_IGNORE_DUPLICATES, 0); drm_exec_until_all_locked(&exec) { err = vm_bind_ioctl_ops_lock_and_prep(&exec, vm, vops); - drm_exec_retry_on_contention(&exec); + err = drm_exec_retry_on_contention(&exec, err); if (err) goto unlock; diff --git a/include/drm/drm_exec.h b/include/drm/drm_exec.h index aa786b828a0a..fafb40d96e38 100644 --- a/include/drm/drm_exec.h +++ b/include/drm/drm_exec.h @@ -51,6 +51,8 @@ struct drm_exec { struct drm_gem_object *prelocked; }; +int drm_exec_handle_contended(struct drm_exec *exec); + /** * drm_exec_obj() - Return the object for a give drm_exec index * @exec: Pointer to the drm_exec context @@ -113,15 +115,26 @@ __PASTE(__drm_exec_, __LINE__): \ /** * drm_exec_retry_on_contention - restart the loop to grap all locks * @exec: drm_exec object + * @_ret: The current error status * * Control flow helper to continue when a contention was detected and we need to * clean up and re-start the loop to prepare all GEM objects. + * + * Return: If no loop restart occurred: The error status. */ -#define drm_exec_retry_on_contention(exec) \ - do { \ - if (unlikely(drm_exec_is_contended(exec))) \ - goto *__drm_exec_retry_ptr; \ - } while (0) +#define drm_exec_retry_on_contention(exec, _ret) \ + ({ \ + struct drm_exec *__exec = (exec); \ + int __ret = (_ret); \ + \ + if (unlikely(drm_exec_is_contended(__exec))) { \ + WARN_ON(__ret != -EDEADLK); \ + __ret = drm_exec_handle_contended(__exec); \ + if (!__ret) \ + goto *__drm_exec_retry_ptr; \ + } \ + __ret; \ + }) /** * drm_exec_is_contended - check for contention From patchwork Tue May 21 07:16:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668951 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 68FDFC25B78 for ; Tue, 21 May 2024 07:18:00 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id AF31910E40F; Tue, 21 May 2024 07:17:50 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="Wh9K8fg9"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 77C9910E3C9; Tue, 21 May 2024 07:17:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275853; x=1747811853; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=VCo+bXcbAgNy5o6Dz2ru7oCaq24A3+YxX21YepnLrJw=; b=Wh9K8fg9bq8zLwiH3sctTCUThWXbHe0GLuigjBHrNhby5Z3ZG1mNtRTN V5wFT3ru5Snbh9dHKXvUwm6sRnQXPiXwbedYdloYiA+lyVLbvd0siRO8w 9MDO3ysJJXwn2nooum2bOF4pSNgYambebR4JpEq4wKMmKVkkzGGkS3i6S 
AXxO2nWb/zYN5iO0enkU7T9wbg1E95S4rlpbcvfJFOCfvhDKUuwHkxbzN rRByeRdB9IlsuFJPHP+P84Fxgy9eYrQ3Pf/2k2M/AlVOrYT4OFmBEaN7a KaTRqQtaqOVMy6QLVi+xRAPlqL+bPnX3JlrNarx8xEvqFV4+t5FDZjqzk w==; X-CSE-ConnectionGUID: zSs48kshTImcIoaVLWO1aA== X-CSE-MsgGUID: 1sbnqWYRQYCUX+QsDNmqQw== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393492" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393492" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:32 -0700 X-CSE-ConnectionGUID: 67OUm/8OTaq0uDRxy06TbQ== X-CSE-MsgGUID: XRwcA+ynSYaFoZxuHDWNSQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336778" Received: from unknown (HELO fedora..) ([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:24 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [RFC PATCH v3 14/21] drm/exec: Introduce a drm_exec_trylock_obj() function Date: Tue, 21 May 2024 09:16:32 +0200 Message-ID: <20240521071639.77614-15-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" TTM needs to trylock dma_resv objects in a couple of places. However this functionality is missing in drm_exec. Introduce it. Note that in addition to the -EBUSY error returned on failure to take the lock, the operation may return -ENOMEM if there was a failure to allocate memory for the drm_exec held lock array. This failure mode could be avoided if the drm_exec structure instead maintained a linked list of locked objects, similar to i915. 
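For reference, here is a minimal usage sketch of the new interface (illustrative only, not part of the patch; the surrounding drm_exec transaction setup and the skip-on-contention policy are assumptions):

#include <drm/drm_exec.h>

/*
 * Illustrative sketch: opportunistically add an eviction candidate to an
 * already initialized drm_exec transaction. -EBUSY means the resv is held
 * by another context, so the candidate is skipped rather than waited for;
 * -EALREADY means this context already holds the lock.
 */
static bool try_add_candidate(struct drm_exec *exec, struct drm_gem_object *obj)
{
	int ret = drm_exec_trylock_obj(exec, obj);

	if (!ret || ret == -EALREADY)
		return true;	/* locked, or already held by this context */

	/* -EBUSY: contended; -ENOMEM: growing the held-lock array failed */
	return false;
}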
Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/drm_exec.c | 50 +++++++++++++++++++++++++++++++++++--- include/drm/drm_exec.h | 1 + 2 files changed, 47 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c index 3770a5d30213..1383680ffa4a 100644 --- a/drivers/gpu/drm/drm_exec.c +++ b/drivers/gpu/drm/drm_exec.c @@ -139,14 +139,17 @@ EXPORT_SYMBOL(drm_exec_cleanup); /* Track the locked object in the array */ static int drm_exec_obj_locked(struct drm_exec *exec, - struct drm_gem_object *obj) + struct drm_gem_object *obj, + gfp_t gfp) { + might_alloc(gfp); + if (unlikely(exec->num_objects == exec->max_objects)) { size_t size = exec->max_objects * sizeof(void *); void *tmp; tmp = kvrealloc(exec->objects, size, size + PAGE_SIZE, - GFP_KERNEL); + gfp); if (!tmp) return -ENOMEM; @@ -179,7 +182,7 @@ static int drm_exec_lock_contended(struct drm_exec *exec) dma_resv_lock_slow(obj->resv, &exec->ticket); } - ret = drm_exec_obj_locked(exec, obj); + ret = drm_exec_obj_locked(exec, obj, GFP_KERNEL); if (unlikely(ret)) goto error_unlock; @@ -215,6 +218,45 @@ int drm_exec_handle_contended(struct drm_exec *exec) } EXPORT_SYMBOL(drm_exec_handle_contended); +/** + * drm_exec_trylock_obj - trylock a GEM object for use + * @exec: the drm_exec object with the state. + * @obj: the GEM object to lock. + * + * Trylock a GEM object for use and grab a reference to it. + * + * Returns: -EALREADY when the object is already locked by this context (can be + * suppressed by setting the DRM_EXEC_IGNORE_DUPLICATES flag), -ENOMEM when memory + * allocation failed, and zero for success. If the object is locked by + * another context, -EBUSY will be returned. + */ +int drm_exec_trylock_obj(struct drm_exec *exec, struct drm_gem_object *obj) +{ + int ret; + + might_alloc(GFP_ATOMIC); + + if (exec->prelocked == obj) { + drm_gem_object_put(exec->prelocked); + exec->prelocked = NULL; + return 0; + } + + if (!dma_resv_trylock_ctx(obj->resv, &exec->ticket)) { + if (dma_resv_locking_ctx(obj->resv) == &exec->ticket) + return (exec->flags & DRM_EXEC_IGNORE_DUPLICATES) ?
0 : -EALREADY; + else + return -EBUSY; + } + + ret = drm_exec_obj_locked(exec, obj, GFP_ATOMIC | __GFP_NOWARN); + if (ret) + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL(drm_exec_trylock_obj); + /** * drm_exec_lock_obj - lock a GEM object for use * @exec: the drm_exec object with the state @@ -254,7 +296,7 @@ int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj) if (unlikely(ret)) return ret; - ret = drm_exec_obj_locked(exec, obj); + ret = drm_exec_obj_locked(exec, obj, GFP_KERNEL); if (ret) goto error_unlock; diff --git a/include/drm/drm_exec.h b/include/drm/drm_exec.h index fafb40d96e38..ea0f2117ee0c 100644 --- a/include/drm/drm_exec.h +++ b/include/drm/drm_exec.h @@ -152,6 +152,7 @@ void drm_exec_init(struct drm_exec *exec, u32 flags, unsigned nr); void drm_exec_fini(struct drm_exec *exec); bool drm_exec_cleanup(struct drm_exec *exec); int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj); +int drm_exec_trylock_obj(struct drm_exec *exec, struct drm_gem_object *obj); void drm_exec_unlock_obj(struct drm_exec *exec, struct drm_gem_object *obj); int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj, unsigned int num_fences); From patchwork Tue May 21 07:16:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668954 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AFE38C25B78 for ; Tue, 21 May 2024 07:18:06 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 52FDB10E426; Tue, 21 May 2024 07:18:04 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="MAuNj7bM"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id CD4C810E36B; Tue, 21 May 2024 07:17:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275853; x=1747811853; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=yyCTWfTCAT0eL5il2694dnzuOZVup9F3xSYAIXXUQEI=; b=MAuNj7bMEPCh5zSn4ydwep7bElih6xwJuVQkVApS7DOB7avtMXQHFRmP 73m2OCSwAtOvzb8HAVwqgHh/Qn/Ntb35qDdnkO/o8dNDoCGm+ZH9uprlY THIaYvFgbNtF74Db58BlBVjnTXG6sXwMeeLS/tNo/JZ4LgFZmsQcbARcD Blig2sU/GpOWs6w1rhjDaBvUhNaQ/iPKUiL3zolrGB5qhhOsWB4U2S269 eoIwIjRDV0lYAoHMVeFEA+mZy//qHixbSn6DcQlf/ezTTH6/anFo3i9fj 9fb1e6MqiqImo6+uzlpRbmLRVVSD7l5kuxo5bJQSnrdwMfcJkJjSgZImi A==; X-CSE-ConnectionGUID: co0rktWqTbqUrKO/EU2fVw== X-CSE-MsgGUID: +VzAhy16Qu2jwb+9ktGppQ== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393490" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393490" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:32 -0700 X-CSE-ConnectionGUID: FZK3yX4LRkqnQxWI1laz1Q== X-CSE-MsgGUID: /eCxpIRkSS2B0m1cz4vmog== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336787" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:26 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [RFC PATCH v3 15/21] drm/exec: Add a snapshot capability Date: Tue, 21 May 2024 09:16:33 +0200 Message-ID: <20240521071639.77614-16-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" When validating a buffer object for submission, we might need to lock a number of object for eviction to make room for the validation. This makes it pretty likely that validation will eventually succeed, since eventually the validating process will hold most dma_resv locks of the buffer objects residing in the memory type being validated for. However, once validation of a single object has succeeded it might not be beneficial to hold on to those locks anymore, and the validator would want to drop the locks of all objects taken during validation. Introduce a drm_exec snapshot functionality that can be used to record the locks held at a certain time, and a restore functionality that restores the drm_exec state to the snapshot by dropping all locks. Snapshots can be nested if needed. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/drm_exec.c | 55 +++++++++++++++++++++++++++++++++++++- include/drm/drm_exec.h | 23 +++++++++++++++- 2 files changed, 76 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c index 1383680ffa4a..9eea5d0d3a98 100644 --- a/drivers/gpu/drm/drm_exec.c +++ b/drivers/gpu/drm/drm_exec.c @@ -57,6 +57,7 @@ static void drm_exec_unlock_all(struct drm_exec *exec) struct drm_gem_object *obj; unsigned long index; + WARN_ON(exec->snap); drm_exec_for_each_locked_object_reverse(exec, index, obj) { dma_resv_unlock(obj->resv); drm_gem_object_put(obj); @@ -90,6 +91,7 @@ void drm_exec_init(struct drm_exec *exec, u32 flags, unsigned nr) exec->num_objects = 0; exec->contended = DRM_EXEC_DUMMY; exec->prelocked = NULL; + exec->snap = NULL; } EXPORT_SYMBOL(drm_exec_init); @@ -301,7 +303,6 @@ int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj) goto error_unlock; return 0; - error_unlock: dma_resv_unlock(obj->resv); return ret; @@ -395,5 +396,57 @@ int drm_exec_prepare_array(struct drm_exec *exec, } EXPORT_SYMBOL(drm_exec_prepare_array); +/** + * drm_exec_restore() - Restore the drm_exec state to the point of a snapshot. + * @exec: The drm_exec object with the state. + * @snap: The snapshot state. + * + * Restores the drm_exec object by means of unlocking and dropping references + * to objects locked after the snapshot. 
+ */ +void drm_exec_restore(struct drm_exec *exec, struct drm_exec_snapshot *snap) +{ + struct drm_gem_object *obj; + unsigned int index; + + exec->snap = snap->saved_snap; + + drm_exec_for_each_locked_object_reverse(exec, index, obj) { + if (index + 1 == snap->num_locked) + break; + + dma_resv_unlock(obj->resv); + drm_gem_object_put(obj); + exec->objects[index] = NULL; + } + + exec->num_objects = snap->num_locked; + + if (!exec->prelocked) + exec->prelocked = snap->prelocked; + else + drm_gem_object_put(snap->prelocked); +} +EXPORT_SYMBOL(drm_exec_restore); + +/** + * drm_exec_snapshot() - Take a snapshot of the drm_exec state + * @exec: The drm_exec object with the state. + * @snap: The snapshot state. + * + * Records the @exec state in @snap. The @snap object is typically allocated + * in the stack of the caller. + */ +void drm_exec_snapshot(struct drm_exec *exec, struct drm_exec_snapshot *snap) +{ + snap->num_locked = exec->num_objects; + snap->prelocked = exec->prelocked; + if (snap->prelocked) + drm_gem_object_get(snap->prelocked); + snap->saved_snap = exec->snap; + exec->snap = snap; +} +EXPORT_SYMBOL(drm_exec_snapshot); + MODULE_DESCRIPTION("DRM execution context"); MODULE_LICENSE("Dual MIT/GPL"); diff --git a/include/drm/drm_exec.h b/include/drm/drm_exec.h index ea0f2117ee0c..0ce4d749511b 100644 --- a/include/drm/drm_exec.h +++ b/include/drm/drm_exec.h @@ -19,7 +19,6 @@ struct drm_exec { * @flags: Flags to control locking behavior */ u32 flags; - /** * @ticket: WW ticket used for acquiring locks */ @@ -49,6 +48,25 @@ struct drm_exec { * @prelocked: already locked GEM object due to contention */ struct drm_gem_object *prelocked; + + /** + * @snap: Pointer to the last snapshot taken or NULL if none. + */ + struct drm_exec_snapshot *snap; +}; + +/** + * struct drm_exec_snapshot - drm_exec snapshot information + */ +struct drm_exec_snapshot { + /** @saved_snap: Pointer to the previous snapshot or NULL. */ + struct drm_exec_snapshot *saved_snap; + + /** @prelocked: Refcounted pointer to the prelocked object at snapshot time. */ + struct drm_gem_object *prelocked; + + /** @num_locked: Number of locked objects at snapshot time. 
*/ + unsigned long num_locked; }; int drm_exec_handle_contended(struct drm_exec *exec); @@ -160,5 +178,8 @@ int drm_exec_prepare_array(struct drm_exec *exec, struct drm_gem_object **objects, unsigned int num_objects, unsigned int num_fences); +void drm_exec_snapshot(struct drm_exec *exec, struct drm_exec_snapshot *snap); +void drm_exec_restore(struct drm_exec *exec, struct drm_exec_snapshot *snap); + #endif From patchwork Tue May 21 07:16:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668949 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8243CC25B7D for ; Tue, 21 May 2024 07:17:50 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 123DE10E42B; Tue, 21 May 2024 07:17:49 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="RGkAWLLh"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 01C2810E36B; Tue, 21 May 2024 07:17:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275854; x=1747811854; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CAlFHQG/ojcxP6rO5lLkzNxc8pffryoGxkDQaxaOrCE=; b=RGkAWLLhH/fLYYlffXRQOpvEdYbQbLv7G48twU6tV90LWRcUD2L2RHVk FOeOB9GALzoZcyL7y8kzSB7d5r6ACNfY50Yz0PzBBLgJCgbhqp9Utpuoi c7WYCS9dNSvy4zT5O19OTKdb+ROCwxNseHlQirTMGK0+AujuhguR/+W1J DHg4gYwFygJjCRvjIIjBUb1rGyuQcyELUTHQTyTi1ynpNnIuDZfcjzTBT iUvgjjH/jOXFU/OrjqFOIj4sKuBAk0Jdj90ZFSxX2wniUYOiKcUn7ywCZ YjlEmDlDTBmp66LTdhhvRxIuWvufqXVeh4HyDlkYSkTNTc1CNUYWQoAtS A==; X-CSE-ConnectionGUID: iqXrmbecQA6nT5t4dFc2gQ== X-CSE-MsgGUID: lFhYJUMvTMS+WFoqqfTQPw== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393494" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393494" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:32 -0700 X-CSE-ConnectionGUID: ds2JA6p1Rr+YnQFys+gRJA== X-CSE-MsgGUID: DIOcA92ZTOiAE1sy7DhxQQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336794" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:28 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [RFC PATCH v3 16/21] drm/exec: Introduce an evict mode Date: Tue, 21 May 2024 09:16:34 +0200 Message-ID: <20240521071639.77614-17-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel"
Locking for eviction differs from locking for submission in a few ways:

1) We can't lock objects that are already locked for submission, hence DRM_EXEC_IGNORE_DUPLICATES must be unset.

2) We must be able to re-lock objects locked for eviction, either for submission or for yet another eviction; in particular, objects sharing a single resv must be considered.

3) There is no point in keeping a contended object after the transaction restart, since we don't know whether we actually want to use it again.

So introduce a drm_exec evict mode. For now, instead of setting it explicitly with a function call or implementing separate locking functions for evict mode, assume evict mode whenever a snapshot is registered. This can easily be changed later.

To keep track of resvs locked for eviction, use a pointer set implemented by an xarray. This is probably not the most efficient data structure, but it is an easy-to-implement first approach. If the set is empty (evict mode never used), the performance and memory-usage impact will be very small.

TODO: Probably want to implement the set using an open-addressing hash table.
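As an illustration of the intended calling pattern (not part of the patch; evict_one() is a placeholder for driver-specific eviction work, and skipping -EBUSY/-EALREADY victims is an assumption), locks taken for eviction under a snapshot are dropped again by drm_exec_restore():

#include <drm/drm_exec.h>

int evict_one(struct drm_gem_object *obj);	/* placeholder, driver-specific */

/*
 * Illustrative sketch: taking a snapshot puts the context into evict mode,
 * eviction victims are trylocked, and drm_exec_restore() drops all locks
 * taken after the snapshot once room has been made.
 */
static int make_room(struct drm_exec *exec, struct drm_gem_object **victims,
		     unsigned int count)
{
	struct drm_exec_snapshot snap;
	unsigned int i;
	int ret = 0;

	drm_exec_snapshot(exec, &snap);

	for (i = 0; i < count; i++) {
		ret = drm_exec_trylock_obj(exec, victims[i]);
		if (ret == -EBUSY || ret == -EALREADY) {
			ret = 0;	/* skip contended or submission-locked victims */
			continue;
		}
		if (ret)
			break;

		ret = evict_one(victims[i]);	/* placeholder for driver-specific eviction */
		if (ret)
			break;
	}

	drm_exec_restore(exec, &snap);
	return ret;
}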
Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/drm_exec.c | 77 ++++++++++++++++++++++++++++++++++---- include/drm/drm_exec.h | 15 ++++++++ 2 files changed, 85 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c index 9eea5d0d3a98..ea79d96f5439 100644 --- a/drivers/gpu/drm/drm_exec.c +++ b/drivers/gpu/drm/drm_exec.c @@ -65,6 +65,10 @@ static void drm_exec_unlock_all(struct drm_exec *exec) drm_gem_object_put(exec->prelocked); exec->prelocked = NULL; + + /* garbage collect */ + xa_destroy(&exec->resv_set); + xa_init(&exec->resv_set); } /** @@ -92,6 +96,8 @@ void drm_exec_init(struct drm_exec *exec, u32 flags, unsigned nr) exec->contended = DRM_EXEC_DUMMY; exec->prelocked = NULL; exec->snap = NULL; + exec->drop_contended = false; + xa_init(&exec->resv_set); } EXPORT_SYMBOL(drm_exec_init); @@ -110,6 +116,7 @@ void drm_exec_fini(struct drm_exec *exec) drm_gem_object_put(exec->contended); ww_acquire_fini(&exec->ticket); } + xa_destroy(&exec->resv_set); } EXPORT_SYMBOL(drm_exec_fini); @@ -139,6 +146,30 @@ bool drm_exec_cleanup(struct drm_exec *exec) } EXPORT_SYMBOL(drm_exec_cleanup); +static unsigned long drm_exec_resv_to_key(const struct dma_resv *resv) +{ + return (unsigned long)resv / __alignof__(typeof(*resv)); +} + +static void +drm_exec_resv_set_erase(struct drm_exec *exec, unsigned long key) +{ + if (xa_load(&exec->resv_set, key)) + xa_erase(&exec->resv_set, key); +} + +static bool drm_exec_in_evict_mode(struct drm_exec *exec) +{ + return !!exec->snap; +} + +static void drm_exec_set_evict_mode(struct drm_exec *exec, + struct drm_exec_snapshot *snap) +{ + exec->snap = snap; + exec->flags &= ~DRM_EXEC_IGNORE_DUPLICATES; +} + /* Track the locked object in the array */ static int drm_exec_obj_locked(struct drm_exec *exec, struct drm_gem_object *obj, @@ -161,6 +192,14 @@ static int drm_exec_obj_locked(struct drm_exec *exec, drm_gem_object_get(obj); exec->objects[exec->num_objects++] = obj; + /* + * Errors here are not fatal, It means the object we locked + * for eviction can't be locked again. If that is problematic + * we may need to reconsider this. + */ + if (drm_exec_in_evict_mode(exec)) + (void)xa_store(&exec->resv_set, drm_exec_resv_to_key(obj->resv), + obj->resv, gfp | __GFP_NOWARN); return 0; } @@ -184,6 +223,9 @@ static int drm_exec_lock_contended(struct drm_exec *exec) dma_resv_lock_slow(obj->resv, &exec->ticket); } + if (exec->drop_contended) + goto error_unlock; + ret = drm_exec_obj_locked(exec, obj, GFP_KERNEL); if (unlikely(ret)) goto error_unlock; @@ -245,10 +287,19 @@ int drm_exec_trylock_obj(struct drm_exec *exec, struct drm_gem_object *obj) } if (!dma_resv_trylock_ctx(obj->resv, &exec->ticket)) { - if (dma_resv_locking_ctx(obj->resv) == &exec->ticket) - return (exec->flags & DRM_EXEC_IGNORE_DUPLICATES) ? 
0 : -EALREADY; - else + if (dma_resv_locking_ctx(obj->resv) == &exec->ticket) { + unsigned long key = drm_exec_resv_to_key(obj->resv); + + if (exec->flags & DRM_EXEC_IGNORE_DUPLICATES || + xa_load(&exec->resv_set, key)) { + if (!drm_exec_in_evict_mode(exec)) + drm_exec_resv_set_erase(exec, key); + return 0; + } + return -EALREADY; + } else { return -EBUSY; + } } ret = drm_exec_obj_locked(exec, obj, GFP_ATOMIC | __GFP_NOWARN); @@ -288,12 +339,20 @@ int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj) if (unlikely(ret == -EDEADLK)) { drm_gem_object_get(obj); exec->contended = obj; + exec->drop_contended = drm_exec_in_evict_mode(exec); return -EDEADLK; } - if (unlikely(ret == -EALREADY) && - exec->flags & DRM_EXEC_IGNORE_DUPLICATES) - return 0; + if (unlikely(ret == -EALREADY)) { + unsigned long key = drm_exec_resv_to_key(obj->resv); + + if (exec->flags & DRM_EXEC_IGNORE_DUPLICATES || + xa_load(&exec->resv_set, key)) { + if (!drm_exec_in_evict_mode(exec)) + drm_exec_resv_set_erase(exec, key); + return 0; + } + } if (unlikely(ret)) return ret; @@ -324,6 +383,7 @@ void drm_exec_unlock_obj(struct drm_exec *exec, struct drm_gem_object *obj) for (i = exec->num_objects; i--;) { if (exec->objects[i] == obj) { + drm_exec_resv_set_erase(exec, drm_exec_resv_to_key(obj->resv)); dma_resv_unlock(obj->resv); for (++i; i < exec->num_objects; ++i) exec->objects[i - 1] = exec->objects[i]; @@ -415,12 +475,14 @@ void drm_exec_restore(struct drm_exec *exec, struct drm_exec_snapshot *snap) if (index + 1 == snap->num_locked) break; + xa_erase(&exec->resv_set, drm_exec_resv_to_key(obj->resv)); dma_resv_unlock(obj->resv); drm_gem_object_put(obj); exec->objects[index] = NULL; } exec->num_objects = snap->num_locked; + exec->flags = snap->flags; if (!exec->prelocked) exec->prelocked = snap->prelocked; @@ -443,8 +505,9 @@ void drm_exec_snapshot(struct drm_exec *exec, struct drm_exec_snapshot *snap) snap->prelocked = exec->prelocked; if (snap->prelocked) drm_gem_object_get(snap->prelocked); + snap->flags = exec->flags; snap->saved_snap = exec->snap; - exec->snap = snap; + drm_exec_set_evict_mode(exec, snap); } EXPORT_SYMBOL(drm_exec_snapshot); diff --git a/include/drm/drm_exec.h b/include/drm/drm_exec.h index 0ce4d749511b..0b6d5ac0c092 100644 --- a/include/drm/drm_exec.h +++ b/include/drm/drm_exec.h @@ -5,6 +5,7 @@ #include #include +#include #define DRM_EXEC_INTERRUPTIBLE_WAIT BIT(0) #define DRM_EXEC_IGNORE_DUPLICATES BIT(1) @@ -53,6 +54,17 @@ struct drm_exec { * @snap: Pointer to the last snapshot taken or NULL if none. */ struct drm_exec_snapshot *snap; + + /** + * @resv_set: Set of pointers to locked objects in evict mode. + */ + struct xarray resv_set; + + /** + * @drop_contended: Drop the contended object after WW transaction + * relaxation. + */ + bool drop_contended; }; /** @@ -67,6 +79,9 @@ struct drm_exec_snapshot { /** @num_locked: Number of locked objects at snapshot time. */ unsigned long num_locked; + + /** @flags: The drm_exec flags at snapshot time. 
*/ + u32 flags; }; int drm_exec_handle_contended(struct drm_exec *exec); From patchwork Tue May 21 07:16:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668953 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1F900C25B7D for ; Tue, 21 May 2024 07:18:05 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C502210E36B; Tue, 21 May 2024 07:18:03 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="OEofoNed"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 5D93A10E40E; Tue, 21 May 2024 07:17:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275854; x=1747811854; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Yih0PdYe5evtTObGVw5HYYMNH39V4bdftEQIWVIAfR0=; b=OEofoNedkJ6/7ACD9H7cOobewa7I6I/8dOvB3QZ572klb/bccqdmYgdG IweE4bsQ/J6FD10BJJk0GOv+SzZRsQi5ytcdWxThHIhQ4s7WbdWABApQJ hNs/mKwaXeIE+sdrTzUS90Lzwyvp9X/RnmPt7PXTXdIX38umhjZjOFJgS z98nDABVnYATVYjNjHQORZvfJCDXZIfLjlsGSfFzXtlF38eyeg4akN+df fdxZzCYNahEclaK/l6jVmHFSewSobEyxRSgzZS9O49E/UGjAwu+88WLNE M3sgM7aaHHmPq1ILXB1YsdmPL50TlZmzDGsI6w/rmu/VC0ZN4+jrBP1wB A==; X-CSE-ConnectionGUID: BT36mD4aSb2K6d4BVkUlOg== X-CSE-MsgGUID: xx/AMk2KTRmpbVLSWahvNw== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393496" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393496" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:32 -0700 X-CSE-ConnectionGUID: fRayHqjpSlWDuy757CV3aQ== X-CSE-MsgGUID: 4FH8CzdiSA2sh7x416hn9A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336802" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:30 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [RFC PATCH v3 17/21] drm/ttm: Support drm_exec locking for eviction and swapping Date: Tue, 21 May 2024 09:16:35 +0200 Message-ID: <20240521071639.77614-18-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Snapshot the drm_exec state before validation, and perform locking for eviction and swapping using the passed in drm_exec pointer if any. Otherwise fall back to trylock / ticketlock. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_bo.c | 44 ++++++++++++++++++++++------- drivers/gpu/drm/ttm/ttm_bo_util.c | 47 +++++++++++++++++++++++++------ include/drm/ttm/ttm_bo.h | 5 ++++ 3 files changed, 77 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 316afe19a325..8706502edcb1 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -31,6 +31,8 @@ #define pr_fmt(fmt) "[TTM] " fmt +#include + #include #include #include @@ -560,10 +562,16 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev, }; long lret; - evict_walk.walk.trylock_only = true; - lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1); - if (lret || !ticket) - goto out; + /* + * If ww_mutex slowpath debugging, skip the drm_exec trylock step + * to properly exercise the ww transaction backoff from eviction. + */ + if (!ctx->exec || !IS_ENABLED(CONFIG_DEBUG_WW_MUTEX_SLOWPATH)) { + evict_walk.walk.trylock_only = true; + lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1); + if (lret || !(ticket || ctx->exec)) + goto out; + } /* If ticket-locking, repeat while making progress. */ evict_walk.walk.trylock_only = false; @@ -776,6 +784,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, struct ttm_placement *placement, struct ttm_operation_ctx *ctx) { + struct drm_exec_snapshot snap; struct ttm_resource *res; struct ttm_place hop; bool force_space; @@ -789,17 +798,24 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, if (!placement->num_placement) return ttm_bo_pipeline_gutting(bo); + if (ctx->exec) + drm_exec_snapshot(ctx->exec, &snap); + force_space = false; do { /* Check whether we need to move buffer. */ if (bo->resource && ttm_resource_compatible(bo->resource, placement, - force_space)) - return 0; + force_space)) { + ret = 0; + goto out; + } /* Moving of pinned BOs is forbidden */ - if (bo->pin_count) - return -EINVAL; + if (bo->pin_count) { + ret = -EINVAL; + goto out; + } /* * Determine where to move the buffer. 
@@ -816,7 +832,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, if (ret == -ENOSPC) continue; if (ret) - return ret; + goto out; bounce: ret = ttm_bo_handle_move_mem(bo, res, false, ctx, &hop); @@ -828,11 +844,14 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, } if (ret) { ttm_resource_free(bo, &res); - return ret; + goto out; } } while (ret && force_space); + if (ctx->exec) + drm_exec_restore(ctx->exec, &snap); + /* For backward compatibility with userspace */ if (ret == -ENOSPC) return -ENOMEM; @@ -846,6 +865,11 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, return ret; } return 0; +out: + if (ctx->exec) + drm_exec_restore(ctx->exec, &snap); + + return ret; } EXPORT_SYMBOL(ttm_bo_validate); diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index f6460024077d..0849a1472e3d 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -31,6 +31,8 @@ #include +#include + #include #include #include @@ -814,6 +816,25 @@ static int ttm_lru_walk_ticketlock(struct ttm_lru_walk *walk, return ret; } +static int ttm_lru_walk_execlock(struct ttm_lru_walk *walk, + struct ttm_buffer_object *bo) +{ + struct ttm_operation_ctx *ctx = walk->ctx; + struct drm_gem_object *obj = &bo->base; + struct drm_exec *exec = ctx->exec; + int ret; + + if (walk->trylock_only) + ret = drm_exec_trylock_obj(exec, obj); + else + ret = drm_exec_lock_obj(exec, obj); + + if (ret == -EALREADY && bo->base.resv == ctx->resv && ctx->allow_res_evict) + return 0; + + return ret; +} + static void ttm_lru_walk_unlock(struct ttm_buffer_object *bo, bool locked) { if (locked) @@ -854,6 +875,7 @@ static void ttm_lru_walk_unlock(struct ttm_buffer_object *bo, bool locked) long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, struct ttm_resource_manager *man, long target) { + struct drm_exec *exec = walk->ctx->exec; struct ttm_resource_cursor cursor; struct ttm_resource *res; long sofar = 0; @@ -869,11 +891,14 @@ long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, if (!bo || bo->resource != res) continue; - if (ttm_lru_walk_trylock(walk, bo, &bo_needs_unlock)) - bo_locked = true; - else if ((!walk->ticket) || walk->ctx->no_wait_gpu || - walk->trylock_only) - continue; + if (!exec) { + if (ttm_lru_walk_trylock(walk, bo, &bo_needs_unlock)) + bo_locked = true; + + else if (!walk->ticket || walk->ctx->no_wait_gpu || + walk->trylock_only) + continue; + } if (!ttm_bo_get_unless_zero(bo)) { ttm_lru_walk_unlock(bo, bo_needs_unlock); @@ -884,12 +909,16 @@ long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, spin_unlock(&bdev->lru_lock); lret = 0; - if (!bo_locked && walk->ticket) - lret = ttm_lru_walk_ticketlock(walk, bo, &bo_needs_unlock); + if (!bo_locked) { + if (exec) + lret = ttm_lru_walk_execlock(walk, bo); + else + lret = ttm_lru_walk_ticketlock(walk, bo, &bo_needs_unlock); + } /* * Note that in between the release of the lru lock and the - * ticketlock, the bo may have switched resource, + * drm_exec_lock_obj / ticketlock, the bo may have switched resource, * and also memory type, since the resource may have been * freed and allocated again with a different memory type. * In that case, just skip it. @@ -899,7 +928,7 @@ long ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, ttm_lru_walk_unlock(bo, bo_needs_unlock); ttm_bo_put(bo); - if (lret == -EBUSY) + if (lret == -EBUSY || lret == -EALREADY) lret = 0; sofar = (lret < 0) ? 
lret : sofar + lret; if (sofar < 0 || sofar >= target) diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index deaedfb060ed..1c9f4880abb9 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -164,6 +164,8 @@ struct ttm_bo_kmap_obj { struct ttm_buffer_object *bo; }; +struct drm_exec; + /** * struct ttm_operation_ctx * @@ -175,6 +177,8 @@ struct ttm_bo_kmap_obj { * @force_alloc: Don't check the memory account during suspend or CPU page * faults. Should only be used by TTM internally. * @resv: Reservation object to allow reserved evictions with. + * @exec: If part of a drm_exec transaction, pointer to the struct drm_exec. + * Null otherwise. * @bytes_moved: Statistics on how many bytes have been moved. * * Context for TTM operations like changing buffer placement or general memory @@ -187,6 +191,7 @@ struct ttm_operation_ctx { bool allow_res_evict; bool force_alloc; struct dma_resv *resv; + struct drm_exec *exec; uint64_t bytes_moved; }; From patchwork Tue May 21 07:16:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668957 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D937DC25B74 for ; Tue, 21 May 2024 07:18:13 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 5490910E2C6; Tue, 21 May 2024 07:18:12 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="lmVkKaLN"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 9D96D10E3C9; Tue, 21 May 2024 07:17:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275855; x=1747811855; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7k+OlmKwABQOj2GmwYOay5XvfiV0pMhykml+cUIx9t8=; b=lmVkKaLN821AbOyiu8BivP457ipot5Xq6V5d1YcUNk/2KWCY4wJX/aOf Bqist9xdVzlXPW0Tv/kB0DTLLxMxApmMy0iiRd9XI7/Jrqd4SJFOv69J4 P1cdI7Us5oWWVDzHnwAPglpqazdKe7+IuQ12TmBQUCBFoGWfIKyeSR3MA lDiasYPqAjIiSEf82RtEaUsCITFRyafM/+/ITFMQxJZF2pLBtmW8oI63E K1+EObsUqlSz9GK80q2/+xGkOdJ7+bLEog82KVWZfphh2m0bHwBZfVHbI 8YUqWxncg+GNL0/4XPIdVirE35z388CXlHrfV83nkIbd1RYjyhZZjelzC g==; X-CSE-ConnectionGUID: fYcQ0NOQTQm+98RGbViM5g== X-CSE-MsgGUID: syXfq726Tji+YkBEPnoihw== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393503" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393503" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:33 -0700 X-CSE-ConnectionGUID: s76fSgpCRqSr9FmW5P1j+A== X-CSE-MsgGUID: ybCafxXlT5SQ4HKqrFHXHQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336812" Received: from unknown (HELO fedora..) 
([10.245.246.159]) by fmviesa003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:32 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-xe@lists.freedesktop.org Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , =?utf-8?q?Christian_K=C3=B6nig?= , Somalapuram Amaranath , Matthew Brost , dri-devel@lists.freedesktop.org Subject: [RFC PATCH v3 18/21] drm/ttm: Convert ttm vm to using drm_exec Date: Tue, 21 May 2024 09:16:36 +0200 Message-ID: <20240521071639.77614-19-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" TTM faulting may include migration and swapping. Convert helpers to support drm_exec locking and enable it by converting the ttm_bo_vm_fault() function to include a drm_exec loop. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 4 +- drivers/gpu/drm/nouveau/nouveau_gem.c | 4 +- drivers/gpu/drm/radeon/radeon_gem.c | 4 +- drivers/gpu/drm/ttm/ttm_bo_vm.c | 101 ++++++++++++++------- drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c | 6 +- drivers/gpu/drm/xe/xe_bo.c | 5 +- include/drm/ttm/ttm_bo.h | 6 +- 8 files changed, 85 insertions(+), 49 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index 17e16c971e21..22d61cdb0d88 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -52,7 +52,7 @@ static vm_fault_t amdgpu_gem_fault(struct vm_fault *vmf) vm_fault_t ret; int idx; - ret = ttm_bo_vm_reserve(bo, vmf); + ret = ttm_bo_vm_reserve(bo, vmf, NULL); if (ret) return ret; @@ -64,7 +64,7 @@ static vm_fault_t amdgpu_gem_fault(struct vm_fault *vmf) } ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, - TTM_BO_VM_NUM_PREFAULT); + TTM_BO_VM_NUM_PREFAULT, NULL); drm_dev_exit(idx); } else { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index e6f177183c0f..c66e2b54c9a2 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -1046,7 +1046,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf) area->vm_flags & VM_WRITE)) return VM_FAULT_SIGBUS; - ret = ttm_bo_vm_reserve(bo, vmf); + ret = ttm_bo_vm_reserve(bo, vmf, NULL); if (ret) return ret; @@ -1108,7 +1108,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf) if (drm_dev_enter(dev, &idx)) { ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, - TTM_BO_VM_NUM_PREFAULT); + TTM_BO_VM_NUM_PREFAULT, NULL); drm_dev_exit(idx); } else { ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 5a887d67dc0e..bc6901955508 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -46,7 +46,7 @@ static vm_fault_t nouveau_ttm_fault(struct vm_fault *vmf) pgprot_t prot; vm_fault_t ret; - ret = ttm_bo_vm_reserve(bo, vmf); + ret = ttm_bo_vm_reserve(bo, vmf, NULL); if (ret) return ret; @@ -56,7 +56,7 
@@ static vm_fault_t nouveau_ttm_fault(struct vm_fault *vmf) nouveau_bo_del_io_reserve_lru(bo); prot = vm_get_page_prot(vma->vm_flags); - ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, NULL); nouveau_bo_add_io_reserve_lru(bo); if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) return ret; diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index 2ef201a072f1..f29761b7ca97 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -54,7 +54,7 @@ static vm_fault_t radeon_gem_fault(struct vm_fault *vmf) down_read(&rdev->pm.mclk_lock); - ret = ttm_bo_vm_reserve(bo, vmf); + ret = ttm_bo_vm_reserve(bo, vmf, NULL); if (ret) goto unlock_mclk; @@ -63,7 +63,7 @@ static vm_fault_t radeon_gem_fault(struct vm_fault *vmf) goto unlock_resv; ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, - TTM_BO_VM_NUM_PREFAULT); + TTM_BO_VM_NUM_PREFAULT, NULL); if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) goto unlock_mclk; diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 4212b8c91dd4..74daa910d0b7 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -31,6 +31,8 @@ #define pr_fmt(fmt) "[TTM] " fmt +#include + #include #include #include @@ -39,7 +41,8 @@ #include static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, - struct vm_fault *vmf) + struct vm_fault *vmf, + struct drm_exec *exec) { long err = 0; @@ -63,7 +66,10 @@ static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, (void)dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_KERNEL, true, MAX_SCHEDULE_TIMEOUT); - dma_resv_unlock(bo->base.resv); + if (exec) + drm_exec_unlock_obj(exec, &bo->base); + else + dma_resv_unlock(bo->base.resv); ttm_bo_put(bo); return VM_FAULT_RETRY; } @@ -96,6 +102,7 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, * ttm_bo_vm_reserve - Reserve a buffer object in a retryable vm callback * @bo: The buffer object * @vmf: The fault structure handed to the callback + * @exec: The drm_exec locking transaction context. May be NULL. * * vm callbacks like fault() and *_mkwrite() allow for the mmap_lock to be dropped * during long waits, and after the wait the callback will be restarted. This @@ -114,15 +121,16 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, * VM_FAULT_NOPAGE if blocking wait and retrying was not allowed. */ vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, - struct vm_fault *vmf) + struct vm_fault *vmf, struct drm_exec *exec) { - /* - * Work around locking order reversal in fault / nopfn - * between mmap_lock and bo_reserve: Perform a trylock operation - * for reserve, and if it fails, retry the fault after waiting - * for the buffer to become unreserved. - */ - if (unlikely(!dma_resv_trylock(bo->base.resv))) { + int ret; + + if (exec) + ret = drm_exec_trylock_obj(exec, &bo->base); + else + ret = dma_resv_trylock(bo->base.resv) ? 
0 : -EBUSY; + + if (unlikely(ret == -EBUSY)) { /* * If the fault allows retry and this is the first * fault attempt, we try to release the mmap_lock @@ -132,16 +140,26 @@ vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { ttm_bo_get(bo); mmap_read_unlock(vmf->vma->vm_mm); - if (!dma_resv_lock_interruptible(bo->base.resv, - NULL)) - dma_resv_unlock(bo->base.resv); + if (exec) { + ret = drm_exec_lock_obj(exec, &bo->base); + if (!ret) + drm_exec_unlock_obj(exec, &bo->base); + } else { + if (!dma_resv_lock_interruptible(bo->base.resv, + NULL)) + dma_resv_unlock(bo->base.resv); + } ttm_bo_put(bo); } return VM_FAULT_RETRY; } - if (dma_resv_lock_interruptible(bo->base.resv, NULL)) + if (exec) + ret = drm_exec_lock_obj(exec, &bo->base); + else + ret = dma_resv_lock_interruptible(bo->base.resv, NULL); + if (ret) return VM_FAULT_NOPAGE; } @@ -151,7 +169,10 @@ vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, */ if (bo->ttm && (bo->ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) { if (!(bo->ttm->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE)) { - dma_resv_unlock(bo->base.resv); + if (exec) + drm_exec_unlock_obj(exec, &bo->base); + else + dma_resv_unlock(bo->base.resv); return VM_FAULT_SIGBUS; } } @@ -167,6 +188,7 @@ EXPORT_SYMBOL(ttm_bo_vm_reserve); * @num_prefault: Maximum number of prefault pages. The caller may want to * specify this based on madvice settings and the size of the GPU object * backed by the memory. + * @exec: The struct drm_exec locking transaction context. May be NULL. * * This function inserts one or more page table entries pointing to the * memory backing the buffer object, and then returns a return code @@ -180,7 +202,8 @@ EXPORT_SYMBOL(ttm_bo_vm_reserve); */ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, pgprot_t prot, - pgoff_t num_prefault) + pgoff_t num_prefault, + struct drm_exec *exec) { struct vm_area_struct *vma = vmf->vma; struct ttm_buffer_object *bo = vma->vm_private_data; @@ -199,7 +222,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, * Wait for buffer data in transit, due to a pipelined * move. 
*/ - ret = ttm_bo_vm_fault_idle(bo, vmf); + ret = ttm_bo_vm_fault_idle(bo, vmf, exec); if (unlikely(ret != 0)) return ret; @@ -220,7 +243,8 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, struct ttm_operation_ctx ctx = { .interruptible = true, .no_wait_gpu = false, - .force_alloc = true + .force_alloc = true, + .exec = exec, }; ttm = bo->ttm; @@ -324,25 +348,34 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf) pgprot_t prot; struct ttm_buffer_object *bo = vma->vm_private_data; struct drm_device *ddev = bo->base.dev; + struct drm_exec exec; vm_fault_t ret; - int idx; - - ret = ttm_bo_vm_reserve(bo, vmf); - if (ret) - return ret; - - prot = vma->vm_page_prot; - if (drm_dev_enter(ddev, &idx)) { - ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); - drm_dev_exit(idx); - } else { - ret = ttm_bo_vm_dummy_page(vmf, prot); + int idx, err; + + drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 16); + drm_exec_until_all_locked(&exec) { + ret = ttm_bo_vm_reserve(bo, vmf, &exec); + err = drm_exec_retry_on_contention(&exec, 0); + if (err) + ret = VM_FAULT_NOPAGE; + if (ret) + goto out; + + prot = vma->vm_page_prot; + if (drm_dev_enter(ddev, &idx)) { + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, + &exec); + drm_dev_exit(idx); + err = drm_exec_retry_on_contention(&exec, 0); + if (err) + ret = VM_FAULT_NOPAGE; + } else { + ret = ttm_bo_vm_dummy_page(vmf, prot); + } } - if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) - return ret; - - dma_resv_unlock(bo->base.resv); +out: + drm_exec_fini(&exec); return ret; } EXPORT_SYMBOL(ttm_bo_vm_fault); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c index 74ff2812d66a..fc275afd000c 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c @@ -388,7 +388,7 @@ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf) */ save_flags = vmf->flags; vmf->flags &= ~FAULT_FLAG_ALLOW_RETRY; - ret = ttm_bo_vm_reserve(bo, vmf); + ret = ttm_bo_vm_reserve(bo, vmf, NULL); vmf->flags = save_flags; if (ret) return ret; @@ -423,7 +423,7 @@ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf) pgprot_t prot; vm_fault_t ret; - ret = ttm_bo_vm_reserve(bo, vmf); + ret = ttm_bo_vm_reserve(bo, vmf, NULL); if (ret) return ret; @@ -457,7 +457,7 @@ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf) else prot = vm_get_page_prot(vma->vm_flags); - ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault); + ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault, NULL); if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) return ret; diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c index 9a0ca2cab7b6..3c56858e0751 100644 --- a/drivers/gpu/drm/xe/xe_bo.c +++ b/drivers/gpu/drm/xe/xe_bo.c @@ -1223,7 +1223,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf) if (needs_rpm) xe_pm_runtime_get(xe); - ret = ttm_bo_vm_reserve(tbo, vmf); + ret = ttm_bo_vm_reserve(tbo, vmf, NULL); if (ret) goto out; @@ -1231,7 +1231,8 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf) trace_xe_bo_cpu_fault(bo); ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, - TTM_BO_VM_NUM_PREFAULT); + TTM_BO_VM_NUM_PREFAULT, + NULL); drm_dev_exit(idx); } else { ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index 1c9f4880abb9..d749de3aa936 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -427,10 +427,12 @@ int 
ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man, struct ttm_operation_ctx *ctx); vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, - struct vm_fault *vmf); + struct vm_fault *vmf, + struct drm_exec *exec); vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, pgprot_t prot, - pgoff_t num_prefault); + pgoff_t num_prefault, + struct drm_exec *exec); vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf); void ttm_bo_vm_open(struct vm_area_struct *vma); void ttm_bo_vm_close(struct vm_area_struct *vma); From patchwork Tue May 21 07:16:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 13668959 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 73D0FC25B74 for ; Tue, 21 May 2024 07:18:18 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7BAB010E4A6; Tue, 21 May 2024 07:18:16 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="l++GiKo5"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7C98710E426; Tue, 21 May 2024 07:17:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1716275855; x=1747811855; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zcDtuFKrnmw6i3xTfahiKuOa4+9MWMmnLp6lmULrif4=; b=l++GiKo5Ff79fNF8rXJcHlYF5Bx8z7l6Bnd6F7tF/jUFPKwGOGFfhc3S iViWtUjiMC5IlKDU05bHV3imOy2Rb4RgSlj3/p3TEuZKQ7Og+hlwytTIF ipzkoMk+Ji/m6tb2asQ++7UvDaToiGV8yu0BymWEapKhaqgvV6FtiLWVB ntadtaU8C/z0fnYepFD8oswFAu4DN3h+kc5g0jkhGjQqFjYSqS7v0/7cj LMd8JyN6ndoSQFE7eY1d3Q7/QolupVPoyQB80u16SdBslwmVmwuqlRMoR 8WOrqHI712QwcOQcfTqUChKkIZURkw5RY6yHKiDwrO3e1pgDGQR5Ymm2Z w==; X-CSE-ConnectionGUID: pKiLYOISQ+mxoa1zgTGBqw== X-CSE-MsgGUID: 3lGEFJOaRbKWros7wB+gZg== X-IronPort-AV: E=McAfee;i="6600,9927,11078"; a="15393506" X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="15393506" Received: from fmviesa003.fm.intel.com ([10.60.135.143]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 May 2024 00:17:35 -0700 X-CSE-ConnectionGUID: 3DwZe2yXSwSd8qdw5Sh4Qg== X-CSE-MsgGUID: g0JQ8xf0TZKEOWy5NgaPWQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,177,1712646000"; d="scan'208";a="37336822" Received: from unknown (HELO fedora..) 
From patchwork Tue May 21 07:16:37 2024
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 13668959
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v3 19/21] drm/xe: Use drm_exec for fault locking
Date: Tue, 21 May 2024 09:16:37 +0200
Message-ID: <20240521071639.77614-20-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>
References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>

Similarly to how the TTM vm code does this, convert the drm/xe fault handler to use drm_exec locking.

Cc: Christian König
Cc: Somalapuram Amaranath
Cc: Matthew Brost
Cc:
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/xe/xe_bo.c | 38 +++++++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c index 3c56858e0751..27d7d36401b5 100644 --- a/drivers/gpu/drm/xe/xe_bo.c +++ b/drivers/gpu/drm/xe/xe_bo.c @@ -1217,29 +1217,37 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf) struct xe_device *xe = to_xe_device(ddev); struct xe_bo *bo = ttm_to_xe_bo(tbo); bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK; + struct drm_exec exec; vm_fault_t ret; - int idx; + int idx, err; if (needs_rpm) xe_pm_runtime_get(xe); - ret = ttm_bo_vm_reserve(tbo, vmf, NULL); - if (ret) - goto out; + drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 16); + drm_exec_until_all_locked(&exec) { + ret = ttm_bo_vm_reserve(tbo, vmf, &exec); + err = drm_exec_retry_on_contention(&exec, 0); + if (err) + ret = VM_FAULT_NOPAGE; + if (ret) + goto out; - if (drm_dev_enter(ddev, &idx)) { - trace_xe_bo_cpu_fault(bo); + if (drm_dev_enter(ddev, &idx)) { + trace_xe_bo_cpu_fault(bo); - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, - TTM_BO_VM_NUM_PREFAULT, - NULL); - drm_dev_exit(idx); - } else { - ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); + ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, + TTM_BO_VM_NUM_PREFAULT, + &exec); + drm_dev_exit(idx); + err = drm_exec_retry_on_contention(&exec, 0); + if (err) + ret = VM_FAULT_NOPAGE; + } else { + ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); + } } - if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) - goto out; /* * ttm_bo_vm_reserve() already has dma_resv_lock.
*/ @@ -1250,8 +1258,8 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf) mutex_unlock(&xe->mem_access.vram_userfault.lock); } - dma_resv_unlock(tbo->base.resv); out: + drm_exec_fini(&exec); if (needs_rpm) xe_pm_runtime_put(xe);
From patchwork Tue May 21 07:16:38 2024
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 13668950
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v3 20/21] drm/ttm: Use drm_exec_trylock for bo initialization
Date: Tue, 21 May 2024 09:16:38 +0200
Message-ID: <20240521071639.77614-21-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>
References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>

Buffer object initialization may be part of a drm_exec transaction. Rather than using dma_resv_trylock(), use drm_exec_trylock_obj().

RFC: This patch indicates to me that we should avoid the -ENOMEM failure for drm_exec_trylock; we could probably use a sleeping lock here without problems.

Cc: Christian König
Cc: Somalapuram Amaranath
Cc: Matthew Brost
Cc:
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/ttm/ttm_bo.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 8706502edcb1..70af66b5b86e 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -942,10 +942,17 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, struct ttm_buffer_object *bo, /* passed reservation objects should already be locked, * since otherwise lockdep will be angered in radeon.
*/ - if (!resv) - WARN_ON(!dma_resv_trylock(bo->base.resv)); - else + if (!resv) { + if (ctx->exec) { + ret = drm_exec_trylock_obj(ctx->exec, &bo->base); + if (ret) + goto err_put; + } else { + WARN_ON(!dma_resv_trylock(bo->base.resv)); + } + } else { dma_resv_assert_held(resv); + } ret = ttm_bo_validate(bo, placement, ctx); if (unlikely(ret)) @@ -954,8 +961,12 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, struct ttm_buffer_object *bo, return 0; err_unlock: - if (!resv) - dma_resv_unlock(bo->base.resv); + if (!resv) { + if (ctx->exec) + drm_exec_unlock_obj(ctx->exec, &bo->base); + else + dma_resv_unlock(bo->base.resv); + } err_put: ttm_bo_put(bo);
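The hunk above only changes how ttm_bo_init_reserved() takes the initial reservation; the caller-visible piece is the ttm_operation_ctx::exec member added earlier in this series. A driver already inside a drm_exec transaction can set it so that a freshly initialized bo is locked through that transaction. A rough caller-side sketch with hypothetical names (the destroy callback, bo type and alignment are placeholders):

/* Sketch: initialize a bo as part of an ongoing drm_exec transaction. */
static void example_bo_destroy(struct ttm_buffer_object *bo)
{
	kfree(bo);
}

static int example_bo_init_in_transaction(struct ttm_device *bdev,
					  struct ttm_buffer_object *bo,
					  struct ttm_placement *placement,
					  struct drm_exec *exec)
{
	struct ttm_operation_ctx ctx = {
		.interruptible = true,
		.no_wait_gpu = false,
		/* Let TTM take the initial lock through the transaction. */
		.exec = exec,
	};

	/* bo->base is assumed already set up via drm_gem_object_init(). */
	return ttm_bo_init_reserved(bdev, bo, ttm_bo_type_device, placement,
				    0, &ctx, NULL, NULL, example_bo_destroy);
}

With ctx.exec set and no external reservation object passed in, the hunk above makes ttm_bo_init_reserved() take the new bo's lock with drm_exec_trylock_obj() instead of dma_resv_trylock(), and the error path unlocks it again with drm_exec_unlock_obj(), so on success the reservation is held as part of the caller's transaction.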
From patchwork Tue May 21 07:16:39 2024
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 13668960
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v3 21/21] drm/xe: Initial support for drm exec locking during validate
Date: Tue, 21 May 2024 09:16:39 +0200
Message-ID: <20240521071639.77614-22-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>
References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>

Initial stab at converting xe_bo validation to drm_exec locking where it matters most (low-hanging fruit). For a couple of call sites, as well as for bo allocation, passing the drm_exec object down the call chain may turn out to be a bit tricky.

Cc: Christian König
Cc: Somalapuram Amaranath
Cc: Matthew Brost
Cc:
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/xe/display/xe_fb_pin.c | 2 +-
 drivers/gpu/drm/xe/tests/xe_bo.c | 6 +++---
 drivers/gpu/drm/xe/tests/xe_dma_buf.c | 4 ++--
 drivers/gpu/drm/xe/tests/xe_migrate.c | 2 +-
 drivers/gpu/drm/xe/xe_bo.c | 8 +++++---
 drivers/gpu/drm/xe/xe_bo.h | 4 +++-
 drivers/gpu/drm/xe/xe_dma_buf.c | 2 +-
 drivers/gpu/drm/xe/xe_ggtt.c | 2 +-
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 2 +-
 drivers/gpu/drm/xe/xe_vm.c | 4 ++--
 10 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c index 36e15c4961c1..85f37dd7ecb1 100644 --- a/drivers/gpu/drm/xe/display/xe_fb_pin.c +++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c @@ -289,7 +289,7 @@ static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb, if (IS_DGFX(xe)) ret = xe_bo_migrate(bo, XE_PL_VRAM0); else - ret = xe_bo_validate(bo, NULL, true); + ret = xe_bo_validate(bo, NULL, true, NULL); if (!ret) ttm_bo_pin(&bo->ttm); ttm_bo_unreserve(&bo->ttm); diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c index 7576d362020f..410579f75a39 100644 --- a/drivers/gpu/drm/xe/tests/xe_bo.c +++ b/drivers/gpu/drm/xe/tests/xe_bo.c @@ -30,7 +30,7 @@ static int ccs_test_migrate(struct xe_tile *tile, struct xe_bo *bo, u32 offset; /* Move bo to VRAM if not already there.
*/ - ret = xe_bo_validate(bo, NULL, false); + ret = xe_bo_validate(bo, NULL, false, NULL); if (ret) { KUNIT_FAIL(test, "Failed to validate bo.\n"); return ret; @@ -276,7 +276,7 @@ static int evict_test_run_tile(struct xe_device *xe, struct xe_tile *tile, struc if (i) { down_read(&vm->lock); xe_vm_lock(vm, false); - err = xe_bo_validate(bo, bo->vm, false); + err = xe_bo_validate(bo, bo->vm, false, NULL); xe_vm_unlock(vm); up_read(&vm->lock); if (err) { @@ -285,7 +285,7 @@ static int evict_test_run_tile(struct xe_device *xe, struct xe_tile *tile, struc goto cleanup_all; } xe_bo_lock(external, false); - err = xe_bo_validate(external, NULL, false); + err = xe_bo_validate(external, NULL, false, NULL); xe_bo_unlock(external); if (err) { KUNIT_FAIL(test, "external bo valid err=%pe\n", diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c index e7f9b531c465..ef88b4dd184c 100644 --- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c @@ -81,7 +81,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported, } /* Re-validate the importer. This should move also exporter in. */ - ret = xe_bo_validate(imported, NULL, false); + ret = xe_bo_validate(imported, NULL, false, NULL); if (ret) { if (ret != -EINTR && ret != -ERESTARTSYS) KUNIT_FAIL(test, "Validating importer failed with err=%d.\n", @@ -157,7 +157,7 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe) /* Is everything where we expect it to be? */ xe_bo_lock(import_bo, false); - err = xe_bo_validate(import_bo, NULL, false); + err = xe_bo_validate(import_bo, NULL, false, NULL); /* Pinning in VRAM is not allowed. */ if (!is_dynamic(params) && diff --git a/drivers/gpu/drm/xe/tests/xe_migrate.c b/drivers/gpu/drm/xe/tests/xe_migrate.c index b6e7f80c3774..0feb99d3ef7d 100644 --- a/drivers/gpu/drm/xe/tests/xe_migrate.c +++ b/drivers/gpu/drm/xe/tests/xe_migrate.c @@ -90,7 +90,7 @@ static void test_copy(struct xe_migrate *m, struct xe_bo *bo, return; } - err = xe_bo_validate(remote, NULL, false); + err = xe_bo_validate(remote, NULL, false, NULL); if (err) { KUNIT_FAIL(test, "Failed to validate system bo for %s: %i\n", str, err); diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c index 27d7d36401b5..f33120f3a829 100644 --- a/drivers/gpu/drm/xe/xe_bo.c +++ b/drivers/gpu/drm/xe/xe_bo.c @@ -1755,7 +1755,7 @@ int xe_bo_pin_external(struct xe_bo *bo) xe_assert(xe, xe_bo_is_user(bo)); if (!xe_bo_is_pinned(bo)) { - err = xe_bo_validate(bo, NULL, false); + err = xe_bo_validate(bo, NULL, false, NULL); if (err) return err; @@ -1801,7 +1801,7 @@ int xe_bo_pin(struct xe_bo *bo) /* We only expect at most 1 pin */ xe_assert(xe, !xe_bo_is_pinned(bo)); - err = xe_bo_validate(bo, NULL, false); + err = xe_bo_validate(bo, NULL, false, NULL); if (err) return err; @@ -1917,11 +1917,13 @@ void xe_bo_unpin(struct xe_bo *bo) * Return: 0 on success, negative error code on failure. May return * -EINTR or -ERESTARTSYS if internal waits are interrupted by a signal. 
*/ -int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict) +int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict, + struct drm_exec *exec) { struct ttm_operation_ctx ctx = { .interruptible = true, .no_wait_gpu = false, + .exec = exec, }; if (vm) { diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h index 220e71086e65..a6ddaff8ce74 100644 --- a/drivers/gpu/drm/xe/xe_bo.h +++ b/drivers/gpu/drm/xe/xe_bo.h @@ -62,6 +62,7 @@ #define XE_BO_PROPS_INVALID (-1) +struct drm_exec; struct sg_table; struct xe_ttm_lru_walk; @@ -164,7 +165,8 @@ int xe_bo_pin_external(struct xe_bo *bo); int xe_bo_pin(struct xe_bo *bo); void xe_bo_unpin_external(struct xe_bo *bo); void xe_bo_unpin(struct xe_bo *bo); -int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict); +int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict, + struct drm_exec *exec); static inline bool xe_bo_is_pinned(struct xe_bo *bo) { diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c index 68f309f5e981..ce84aa70cca8 100644 --- a/drivers/gpu/drm/xe/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/xe_dma_buf.c @@ -102,7 +102,7 @@ static struct sg_table *xe_dma_buf_map(struct dma_buf_attachment *attach, if (!attach->peer2peer) r = xe_bo_migrate(bo, XE_PL_TT); else - r = xe_bo_validate(bo, NULL, false); + r = xe_bo_validate(bo, NULL, false, NULL); if (r) return ERR_PTR(r); } diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c index 0d541f55b4fc..ba9b2c3236ab 100644 --- a/drivers/gpu/drm/xe/xe_ggtt.c +++ b/drivers/gpu/drm/xe/xe_ggtt.c @@ -400,7 +400,7 @@ static int __xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo, return 0; } - err = xe_bo_validate(bo, NULL, false); + err = xe_bo_validate(bo, NULL, false, NULL); if (err) return err; diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c index 20ec1ab1b52d..3971bf567f78 100644 --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c @@ -117,7 +117,7 @@ static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma, return err; } else if (bo) { /* Create backing store if needed */ - err = xe_bo_validate(bo, vm, true); + err = xe_bo_validate(bo, vm, true, NULL); if (err) return err; } diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c index 335524e803e7..ea2308ff76c4 100644 --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -380,7 +380,7 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec) list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind, &vm->rebind_list); - ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false); + ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec); if (ret) return ret; @@ -2698,7 +2698,7 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma, if (!bo->vm) err = drm_exec_lock_obj(exec, &bo->ttm.base); if (!err && validate) - err = xe_bo_validate(bo, xe_vma_vm(vma), true); + err = xe_bo_validate(bo, xe_vma_vm(vma), true, exec); } return err;
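To make the new xe_bo_validate() parameter concrete: in the call sites converted above (xe_gpuvm_validate() and vma_lock_and_validate()), the drm_exec context that took the bo's reservation is simply passed on, presumably so that locking done deeper in the call chain, for example for evictions, can become part of the same transaction. A rough sketch of such a caller, using only the drm_exec calls already visible in this patch; the function name is made up:

/* Sketch: lock an external bo and validate it under a drm_exec transaction. */
static int example_lock_and_validate_bo(struct drm_exec *exec, struct xe_bo *bo,
					struct xe_vm *vm)
{
	int err;

	/* Take bo->ttm.base.resv as part of the transaction. */
	err = drm_exec_lock_obj(exec, &bo->ttm.base);
	if (err)
		return err;

	/* Further locking triggered by validation can use the same exec. */
	return xe_bo_validate(bo, vm, true, exec);
}

A contended lock surfaces as an error from such a helper, which a caller sitting inside drm_exec_until_all_locked() would feed to drm_exec_retry_on_contention() in the same way the fault-handler patches earlier in the series do.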