From patchwork Mon Sep 6 16:55:09 2021
From: Thomas Hellström
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, Thomas Hellström
Date: Mon, 6 Sep 2021 18:55:09 +0200
Message-Id: <20210906165515.450541-2-thomas.hellstrom@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 1/6] drm/i915/ttm: Implement a function to copy the contents of two TTM-based objects

When backing up or restoring the contents of pinned objects at suspend / resume time we need to allocate a new object as the backup. Add a function to facilitate copies between the two. Some data needs to be copied before the migration context is ready for operation, so make sure we can disable accelerated copies.
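For illustration, a minimal sketch (not part of the patch) of the suspend-time pattern this enables: allocate a system-memory twin of a pinned LMEM object and copy into it. The helper name is hypothetical and object locking / population are elided; the real caller arrives later in this series.

static int example_make_backup(struct drm_i915_private *i915,
			       struct drm_i915_gem_object *obj,
			       bool allow_accel)
{
	struct drm_i915_gem_object *backup;

	/* Locking and population of both objects elided for brevity. */
	backup = i915_gem_object_create_region(i915->mm.regions[INTEL_REGION_SMEM],
					       obj->base.size, 0, 0);
	if (IS_ERR(backup))
		return PTR_ERR(backup);

	/* allow_accel == false forces the TTM memcpy path. */
	return i915_gem_obj_copy_ttm(backup, obj, allow_accel, false);
}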
Signed-off-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 69 +++++++++++++++++++++---- drivers/gpu/drm/i915/gem/i915_gem_ttm.h | 4 ++ 2 files changed, 64 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 59ca53a3ef6a..df2dcbad1eb9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -432,6 +432,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj, static int i915_ttm_accel_move(struct ttm_buffer_object *bo, bool clear, struct ttm_resource *dst_mem, + struct ttm_tt *dst_ttm, struct sg_table *dst_st) { struct drm_i915_private *i915 = container_of(bo->bdev, typeof(*i915), @@ -441,14 +442,14 @@ static int i915_ttm_accel_move(struct ttm_buffer_object *bo, struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); struct sg_table *src_st; struct i915_request *rq; - struct ttm_tt *ttm = bo->ttm; + struct ttm_tt *src_ttm = bo->ttm; enum i915_cache_level src_level, dst_level; int ret; if (!i915->gt.migrate.context) return -EINVAL; - dst_level = i915_ttm_cache_level(i915, dst_mem, ttm); + dst_level = i915_ttm_cache_level(i915, dst_mem, dst_ttm); if (clear) { if (bo->type == ttm_bo_type_kernel) return -EINVAL; @@ -465,10 +466,10 @@ static int i915_ttm_accel_move(struct ttm_buffer_object *bo, } intel_engine_pm_put(i915->gt.migrate.context->engine); } else { - src_st = src_man->use_tt ? i915_ttm_tt_get_st(ttm) : + src_st = src_man->use_tt ? i915_ttm_tt_get_st(src_ttm) : obj->ttm.cached_io_st; - src_level = i915_ttm_cache_level(i915, bo->resource, ttm); + src_level = i915_ttm_cache_level(i915, bo->resource, src_ttm); intel_engine_pm_get(i915->gt.migrate.context->engine); ret = intel_context_migrate_copy(i915->gt.migrate.context, NULL, src_st->sgl, src_level, @@ -488,11 +489,14 @@ static int i915_ttm_accel_move(struct ttm_buffer_object *bo, static void __i915_ttm_move(struct ttm_buffer_object *bo, bool clear, struct ttm_resource *dst_mem, - struct sg_table *dst_st) + struct ttm_tt *dst_ttm, + struct sg_table *dst_st, + bool allow_accel) { - int ret; + int ret = -EINVAL; - ret = i915_ttm_accel_move(bo, clear, dst_mem, dst_st); + if (allow_accel) + ret = i915_ttm_accel_move(bo, clear, dst_mem, dst_ttm, dst_st); if (ret) { struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); struct intel_memory_region *dst_reg, *src_reg; @@ -507,7 +511,7 @@ static void __i915_ttm_move(struct ttm_buffer_object *bo, bool clear, GEM_BUG_ON(!dst_reg || !src_reg); dst_iter = !cpu_maps_iomem(dst_mem) ? - ttm_kmap_iter_tt_init(&_dst_iter.tt, bo->ttm) : + ttm_kmap_iter_tt_init(&_dst_iter.tt, dst_ttm) : ttm_kmap_iter_iomap_init(&_dst_iter.io, &dst_reg->iomap, dst_st, dst_reg->region.start); @@ -562,7 +566,7 @@ static int i915_ttm_move(struct ttm_buffer_object *bo, bool evict, clear = !cpu_maps_iomem(bo->resource) && (!ttm || !ttm_tt_is_populated(ttm)); if (!(clear && ttm && !(ttm->page_flags & TTM_PAGE_FLAG_ZERO_ALLOC))) - __i915_ttm_move(bo, clear, dst_mem, dst_st); + __i915_ttm_move(bo, clear, dst_mem, bo->ttm, dst_st, true); ttm_bo_move_sync_cleanup(bo, dst_mem); i915_ttm_adjust_domains_after_move(obj); @@ -973,3 +977,50 @@ i915_gem_ttm_system_setup(struct drm_i915_private *i915, intel_memory_region_set_name(mr, "system-ttm"); return mr; } + +/** + * i915_gem_obj_copy_ttm - Copy the contents of one ttm-based gem object to + * another + * @dst: The destination object + * @src: The source object + * @allow_accel: Allow using the blitter. Otherwise TTM memcpy is used. 
+ * @intr: Whether to perform waits interruptibly.
+ *
+ * Note: The caller is responsible for ensuring that the underlying
+ * TTM objects are populated if needed and locked.
+ *
+ * Return: Zero on success. Negative error code on error. If @intr == true,
+ * then it may return -ERESTARTSYS or -EINTR.
+ */
+int i915_gem_obj_copy_ttm(struct drm_i915_gem_object *dst,
+			  struct drm_i915_gem_object *src,
+			  bool allow_accel, bool intr)
+{
+	struct ttm_buffer_object *dst_bo = i915_gem_to_ttm(dst);
+	struct ttm_buffer_object *src_bo = i915_gem_to_ttm(src);
+	struct ttm_operation_ctx ctx = {
+		.interruptible = intr,
+	};
+	struct sg_table *dst_st;
+	int ret;
+
+	assert_object_held(dst);
+	assert_object_held(src);
+
+	/*
+	 * Sync for now. This will change with async moves.
+	 */
+	ret = ttm_bo_wait_ctx(dst_bo, &ctx);
+	if (!ret)
+		ret = ttm_bo_wait_ctx(src_bo, &ctx);
+	if (ret)
+		return ret;
+
+	dst_st = gpu_binds_iomem(dst_bo->resource) ?
+		dst->ttm.cached_io_st : i915_ttm_tt_get_st(dst_bo->ttm);
+
+	__i915_ttm_move(src_bo, false, dst_bo->resource, dst_bo->ttm,
+			dst_st, allow_accel);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
index 40927f67b6d9..34ac78d47b0d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
@@ -46,4 +46,8 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 			       resource_size_t size,
 			       resource_size_t page_size,
 			       unsigned int flags);
+
+int i915_gem_obj_copy_ttm(struct drm_i915_gem_object *dst,
+			  struct drm_i915_gem_object *src,
+			  bool allow_accel, bool intr);
 #endif
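As a complementary sketch (again hypothetical, with locking and population elided), an early-resume restore would copy in the opposite direction with acceleration disabled, since the migration context is not yet operational at that point:

static int example_early_restore(struct drm_i915_gem_object *obj,
				 struct drm_i915_gem_object *backup)
{
	/* Both objects must already be locked and populated. */
	return i915_gem_obj_copy_ttm(obj, backup,
				     false /* allow_accel */,
				     false /* intr */);
}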
From patchwork Mon Sep 6 16:55:10 2021
From: Thomas Hellström
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, Thomas Hellström
Date: Mon, 6 Sep 2021 18:55:10 +0200
Message-Id: <20210906165515.450541-3-thomas.hellstrom@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 2/6] drm/i915/gem: Implement a function to process all gem objects of a region

An upcoming common pattern is to traverse the region object list and perform certain actions on all objects in a region. It's a little tricky to get the list locking right, in particular since a gem object may change region unless it's pinned or the object lock is held. Define a function that does this for us, taking an argument that defines the action to be performed on each object.

Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_region.c | 70 ++++++++++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_region.h | 33 ++++++++++
 2 files changed, 103 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index 1f557b2178ed..a016ccec36f3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -80,3 +80,73 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
 	i915_gem_object_free(obj);
 	return ERR_PTR(err);
 }
+
+/**
+ * i915_gem_process_region - Iterate over all objects of a region using ops
+ * to process and optionally skip objects
+ * @mr: The memory region
+ * @apply: ops and private data
+ *
+ * This function can be used to iterate over the region's object list,
+ * checking whether to skip objects, and, if not, lock the objects and
+ * process them using the supplied ops. Note that this function temporarily
+ * removes objects from the region list while iterating, so that if run
+ * concurrently with itself, it may not iterate over all objects.
+ *
+ * Return: 0 if successful, negative error code on failure.
+ */
+int i915_gem_process_region(struct intel_memory_region *mr,
+			    struct i915_gem_apply_to_region *apply)
+{
+	const struct i915_gem_apply_to_region_ops *ops = apply->ops;
+	struct drm_i915_gem_object *obj;
+	struct list_head still_in_list;
+	int ret = 0;
+
+	/*
+	 * In the future, a non-NULL apply->ww could mean the caller is
+	 * already in a locking transaction and provides its own context.
+	 */
+	GEM_WARN_ON(apply->ww);
+
+	INIT_LIST_HEAD(&still_in_list);
+	mutex_lock(&mr->objects.lock);
+	for (;;) {
+		struct i915_gem_ww_ctx ww;
+
+		obj = list_first_entry_or_null(&mr->objects.list, typeof(*obj),
+					       mm.region_link);
+		if (!obj)
+			break;
+
+		list_move_tail(&obj->mm.region_link, &still_in_list);
+		if (!kref_get_unless_zero(&obj->base.refcount))
+			continue;
+
+		/*
+		 * Note: Someone else might be migrating the object at this
+		 * point. The object's region is not stable until we lock
+		 * the object.
+		 */
+		mutex_unlock(&mr->objects.lock);
+		apply->ww = &ww;
+		for_i915_gem_ww(&ww, ret, apply->interruptible) {
+			ret = i915_gem_object_lock(obj, apply->ww);
+			if (ret)
+				continue;
+
+			if (obj->mm.region == mr)
+				ret = ops->process_obj(apply, obj);
+			/* Implicit object unlock */
+		}
+
+		i915_gem_object_put(obj);
+		mutex_lock(&mr->objects.lock);
+		if (ret)
+			break;
+	}
+	list_splice_tail(&still_in_list, &mr->objects.list);
+	mutex_unlock(&mr->objects.lock);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
index 1008e580a89a..f62195847056 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -12,6 +12,37 @@ struct intel_memory_region;
 struct drm_i915_gem_object;
 struct sg_table;
 
+struct i915_gem_apply_to_region;
+
+/**
+ * struct i915_gem_apply_to_region_ops - ops to use when iterating over all
+ * region objects.
+ */
+struct i915_gem_apply_to_region_ops {
+	/**
+	 * process_obj - Process the current object
+	 * @apply: Embed this for private data.
+	 * @obj: The current object.
+	 */
+	int (*process_obj)(struct i915_gem_apply_to_region *apply,
+			   struct drm_i915_gem_object *obj);
+};
+
+/**
+ * struct i915_gem_apply_to_region - Argument to the struct
+ * i915_gem_apply_to_region_ops functions.
+ * @ops: The ops for the operation.
+ * @ww: Locking context used for the transaction.
+ * @interruptible: Whether to perform object locking interruptibly.
+ *
+ * This structure is intended to be embedded in a private struct if needed.
+ */
+struct i915_gem_apply_to_region {
+	const struct i915_gem_apply_to_region_ops *ops;
+	struct i915_gem_ww_ctx *ww;
+	u32 interruptible:1;
+};
+
 void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
 					struct intel_memory_region *mem);
 void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj);
@@ -22,4 +53,6 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
 			      resource_size_t page_size,
 			      unsigned int flags);
 
+int i915_gem_process_region(struct intel_memory_region *mr,
+			    struct i915_gem_apply_to_region *apply);
 #endif
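To show how the iterator is meant to be consumed, here is a minimal sketch (illustrative only, not from the patch) of a trivial ops implementation that merely counts a region's objects. The pm code in the next patch subclasses struct i915_gem_apply_to_region in exactly this way, just with real work in process_obj():

struct example_count_apply {
	struct i915_gem_apply_to_region base;
	unsigned long count;
};

static int example_count_obj(struct i915_gem_apply_to_region *apply,
			     struct drm_i915_gem_object *obj)
{
	struct example_count_apply *ca =
		container_of(apply, typeof(*ca), base);

	/* Called with @obj locked, so its region is stable here. */
	ca->count++;
	return 0;
}

static const struct i915_gem_apply_to_region_ops example_count_ops = {
	.process_obj = example_count_obj,
};

static long example_count_region(struct intel_memory_region *mr)
{
	struct example_count_apply ca = {
		.base = { .ops = &example_count_ops, .interruptible = 1 },
	};
	int ret;

	ret = i915_gem_process_region(mr, &ca.base);
	return ret ? ret : ca.count;
}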
From patchwork Mon Sep 6 16:55:11 2021
From: Thomas Hellström
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, Thomas Hellström
Date: Mon, 6 Sep 2021 18:55:11 +0200
Message-Id: <20210906165515.450541-4-thomas.hellstrom@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 3/6] drm/i915: Implement LMEM backup and restore for suspend / resume
List-Id: Intel
graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Just evict unpinned objects to system. For pinned LMEM objects, make a backup system object and blit the contents to that. Backup is performed in three steps, 1: Opportunistically evict evictable objects using the gpu blitter. 2: After gt idle, evict evictable objects using the gpu blitter. This will be modified in an upcoming patch to backup pinned objects that are not used by the blitter itself. 3: Backup remaining pinned objects using memcpy. Also move uC suspend to after 2) to make sure we have a functional GuC during 2) if using GuC submission. v2: - Major refactor to make sure gem_exec_suspend@hang-SX subtests work, and suspend / resume works with a slightly modified GuC submission enabling patch series. Signed-off-by: Thomas Hellström --- drivers/gpu/drm/i915/Makefile | 1 + .../gpu/drm/i915/gem/i915_gem_object_types.h | 1 + drivers/gpu/drm/i915/gem/i915_gem_pm.c | 92 +++++++- drivers/gpu/drm/i915/gem/i915_gem_pm.h | 3 +- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 29 ++- drivers/gpu/drm/i915/gem/i915_gem_ttm.h | 10 + drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c | 205 ++++++++++++++++++ drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.h | 24 ++ drivers/gpu/drm/i915/gt/intel_gt_pm.c | 4 +- drivers/gpu/drm/i915/i915_drv.c | 10 +- drivers/gpu/drm/i915/i915_drv.h | 2 +- 11 files changed, 364 insertions(+), 17 deletions(-) create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.h diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index c36c8a4f0716..3379a0a6c91e 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -155,6 +155,7 @@ gem-y += \ gem/i915_gem_throttle.o \ gem/i915_gem_tiling.o \ gem/i915_gem_ttm.o \ + gem/i915_gem_ttm_pm.o \ gem/i915_gem_userptr.o \ gem/i915_gem_wait.o \ gem/i915_gemfs.o diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 2471f36aaff3..734cc8e16481 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -534,6 +534,7 @@ struct drm_i915_gem_object { struct { struct sg_table *cached_io_st; struct i915_gem_object_page_iter get_io_page; + struct drm_i915_gem_object *backup; bool created:1; } ttm; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c index 8b9d7d14c4bd..9746c255ddcc 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c @@ -5,6 +5,7 @@ */ #include "gem/i915_gem_pm.h" +#include "gem/i915_gem_ttm_pm.h" #include "gt/intel_gt.h" #include "gt/intel_gt_pm.h" #include "gt/intel_gt_requests.h" @@ -39,7 +40,79 @@ void i915_gem_suspend(struct drm_i915_private *i915) i915_gem_drain_freed_objects(i915); } -void i915_gem_suspend_late(struct drm_i915_private *i915) +static int lmem_restore(struct drm_i915_private *i915, bool allow_gpu) +{ + struct intel_memory_region *mr; + int ret = 0, id; + + for_each_memory_region(mr, i915, id) { + if (mr->type == INTEL_MEMORY_LOCAL) { + ret = i915_ttm_restore_region(mr, allow_gpu); + if (ret) + break; + } + } + + return ret; +} + +static int lmem_suspend(struct drm_i915_private *i915, bool allow_gpu, + bool backup_pinned) +{ + struct intel_memory_region *mr; + int ret = 0, id; + + for_each_memory_region(mr, i915, id) { + 
if (mr->type == INTEL_MEMORY_LOCAL) { + ret = i915_ttm_backup_region(mr, allow_gpu, backup_pinned); + if (ret) + break; + } + } + + return ret; +} + +static void lmem_recover(struct drm_i915_private *i915) +{ + struct intel_memory_region *mr; + int id; + + for_each_memory_region(mr, i915, id) + if (mr->type == INTEL_MEMORY_LOCAL) + i915_ttm_recover_region(mr); +} + +int i915_gem_backup_suspend(struct drm_i915_private *i915) +{ + int ret; + + /* Opportunistically try to evict unpinned objects */ + ret = lmem_suspend(i915, true, false); + if (ret) + goto out_recover; + + i915_gem_suspend(i915); + + /* + * More objects may have become unpinned as requests were + * retired. Now try to evict again. The gt may be wedged here + * in which case we automatically fall back to memcpy. + */ + + ret = lmem_suspend(i915, true, false); + if (ret) + goto out_recover; + + return 0; + +out_recover: + lmem_recover(i915); + + return ret; +} + +int i915_gem_suspend_late(struct drm_i915_private *i915) { struct drm_i915_gem_object *obj; struct list_head *phases[] = { @@ -49,6 +122,13 @@ void i915_gem_suspend_late(struct drm_i915_private *i915) }, **phase; unsigned long flags; bool flush = false; + int ret; + + ret = lmem_suspend(i915, false, true); + if (ret) { + lmem_recover(i915); + return ret; + } /* * Neither the BIOS, ourselves or any other kernel @@ -83,6 +163,8 @@ void i915_gem_suspend_late(struct drm_i915_private *i915) spin_unlock_irqrestore(&i915->mm.obj_lock, flags); if (flush) wbinvd_on_all_cpus(); + + return 0; } int i915_gem_freeze(struct drm_i915_private *i915) @@ -128,12 +210,20 @@ int i915_gem_freeze_late(struct drm_i915_private *i915) void i915_gem_resume(struct drm_i915_private *i915) { + int ret; + GEM_TRACE("%s\n", dev_name(i915->drm.dev)); + ret = lmem_restore(i915, false); + GEM_WARN_ON(ret); + /* * As we didn't flush the kernel context before suspend, we cannot * guarantee that the context image is complete. So let's just reset * it and start again. 
*/ intel_gt_resume(&i915->gt); + + ret = lmem_restore(i915, true); + GEM_WARN_ON(ret); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.h b/drivers/gpu/drm/i915/gem/i915_gem_pm.h index c9a66630e92e..1b37d2bb6de8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pm.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.h @@ -17,7 +17,8 @@ void i915_gem_resume(struct drm_i915_private *i915); void i915_gem_idle_work_handler(struct work_struct *work); void i915_gem_suspend(struct drm_i915_private *i915); -void i915_gem_suspend_late(struct drm_i915_private *i915); +int i915_gem_suspend_late(struct drm_i915_private *i915); +int i915_gem_backup_suspend(struct drm_i915_private *i915); int i915_gem_freeze(struct drm_i915_private *i915); int i915_gem_freeze_late(struct drm_i915_private *i915); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index df2dcbad1eb9..eac52732c0ac 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -10,18 +10,16 @@ #include "intel_memory_region.h" #include "intel_region_ttm.h" +#include "gem/i915_gem_mman.h" #include "gem/i915_gem_object.h" #include "gem/i915_gem_region.h" #include "gem/i915_gem_ttm.h" -#include "gem/i915_gem_mman.h" +#include "gem/i915_gem_ttm_pm.h" -#include "gt/intel_migrate.h" -#include "gt/intel_engine_pm.h" -#define I915_PL_LMEM0 TTM_PL_PRIV -#define I915_PL_SYSTEM TTM_PL_SYSTEM -#define I915_PL_STOLEN TTM_PL_VRAM -#define I915_PL_GGTT TTM_PL_TT +#include "gt/intel_engine_pm.h" +#include "gt/intel_gt.h" +#include "gt/intel_migrate.h" #define I915_TTM_PRIO_PURGE 0 #define I915_TTM_PRIO_NO_PAGES 1 @@ -64,6 +62,20 @@ static struct ttm_placement i915_sys_placement = { .busy_placement = &sys_placement_flags, }; +/** + * i915_ttm_sys_placement - Return the struct ttm_placement to be + * used for an object in system memory. + * + * Rather than making the struct extern, use this + * function. + * + * Return: A pointer to a static variable for sys placement. + */ +struct ttm_placement *i915_ttm_sys_placement(void) +{ + return &i915_sys_placement; +} + static int i915_ttm_err_to_gem(int err) { /* Fastpath */ @@ -446,7 +458,7 @@ static int i915_ttm_accel_move(struct ttm_buffer_object *bo, enum i915_cache_level src_level, dst_level; int ret; - if (!i915->gt.migrate.context) + if (!i915->gt.migrate.context || intel_gt_is_wedged(&i915->gt)) return -EINVAL; dst_level = i915_ttm_cache_level(i915, dst_mem, dst_ttm); @@ -890,6 +902,7 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object *bo) { struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); + i915_ttm_backup_free(obj); i915_gem_object_release_memory_region(obj); mutex_destroy(&obj->ttm.get_io_page.lock); if (obj->ttm.created) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h index 34ac78d47b0d..0b7291dd897c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h @@ -50,4 +50,14 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, int i915_gem_obj_copy_ttm(struct drm_i915_gem_object *dst, struct drm_i915_gem_object *src, bool allow_accel, bool intr); + +/* Internal I915 TTM declarations and definitions below. 
*/ + +#define I915_PL_LMEM0 TTM_PL_PRIV +#define I915_PL_SYSTEM TTM_PL_SYSTEM +#define I915_PL_STOLEN TTM_PL_VRAM +#define I915_PL_GGTT TTM_PL_TT + +struct ttm_placement *i915_ttm_sys_placement(void); + #endif diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c new file mode 100644 index 000000000000..3884bf45dab8 --- /dev/null +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c @@ -0,0 +1,205 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2021 Intel Corporation + */ + +#include +#include + +#include "i915_drv.h" +#include "intel_memory_region.h" +#include "intel_region_ttm.h" + +#include "gem/i915_gem_region.h" +#include "gem/i915_gem_ttm.h" +#include "gem/i915_gem_ttm_pm.h" + +/** + * i915_ttm_backup_free - Free any backup attached to this object + * @obj: The object whose backup is to be freed. + */ +void i915_ttm_backup_free(struct drm_i915_gem_object *obj) +{ + if (obj->ttm.backup) { + i915_gem_object_put(obj->ttm.backup); + obj->ttm.backup = NULL; + } +} + +/** + * struct i915_gem_ttm_pm_apply - Apply-to-region subclass for restore + * @base: The i915_gem_apply_to_region we derive from. + * @allow_gpu: Whether using the gpu blitter is allowed. + * @backup_pinned: On backup, backup also pinned objects. + */ +struct i915_gem_ttm_pm_apply { + struct i915_gem_apply_to_region base; + bool allow_gpu : 1; + bool backup_pinned : 1; +}; + +static int i915_ttm_backup(struct i915_gem_apply_to_region *apply, + struct drm_i915_gem_object *obj) +{ + struct i915_gem_ttm_pm_apply *pm_apply = + container_of(apply, typeof(*pm_apply), base); + struct ttm_buffer_object *bo = i915_gem_to_ttm(obj); + struct ttm_buffer_object *backup_bo; + struct drm_i915_private *i915 = + container_of(bo->bdev, typeof(*i915), bdev); + struct intel_memory_region *sys_region; + struct drm_i915_gem_object *backup; + struct ttm_operation_ctx ctx = {}; + int err = 0; + + if (bo->resource->mem_type == I915_PL_SYSTEM || obj->ttm.backup) + return 0; + + if (pm_apply->allow_gpu && i915_gem_object_evictable(obj)) + return ttm_bo_validate(bo, i915_ttm_sys_placement(), &ctx); + + if (!pm_apply->backup_pinned) + return 0; + + sys_region = i915->mm.regions[INTEL_REGION_SMEM]; + backup = i915_gem_object_create_region(sys_region, + obj->base.size, + 0, 0); + if (IS_ERR(backup)) + return PTR_ERR(backup); + + err = i915_gem_object_lock(backup, apply->ww); + if (err) + goto out_no_lock; + + backup_bo = i915_gem_to_ttm(backup); + err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx); + if (err) + goto out_no_populate; + + err = i915_gem_obj_copy_ttm(backup, obj, pm_apply->allow_gpu, false); + GEM_WARN_ON(err); + + obj->ttm.backup = backup; + return 0; + +out_no_populate: + i915_gem_ww_unlock_single(backup); +out_no_lock: + i915_gem_object_put(backup); + + return err; +} + +static int i915_ttm_recover(struct i915_gem_apply_to_region *apply, + struct drm_i915_gem_object *obj) +{ + i915_ttm_backup_free(obj); + return 0; +} + +/** + * i915_ttm_recover_region - Free the backup of all objects of a region + * @mr: The memory region + * + * Checks all objects of a region if there is backup attached and if so + * frees that backup. Typically this is called to recover after a partially + * performed backup. 
+ */
+void i915_ttm_recover_region(struct intel_memory_region *mr)
+{
+	static const struct i915_gem_apply_to_region_ops recover_ops = {
+		.process_obj = i915_ttm_recover,
+	};
+	struct i915_gem_apply_to_region apply = {.ops = &recover_ops};
+	int ret;
+
+	ret = i915_gem_process_region(mr, &apply);
+	GEM_WARN_ON(ret);
+}
+
+/**
+ * i915_ttm_backup_region - Back up all objects of a region to smem.
+ * @mr: The memory region
+ * @allow_gpu: Whether to allow the gpu blitter for this backup.
+ * @backup_pinned: Backup also pinned objects.
+ *
+ * Loops over all objects of a region and either evicts them if they are
+ * evictable or backs them up using a backup object if they are pinned.
+ *
+ * Return: Zero on success. Negative error code on error.
+ */
+int i915_ttm_backup_region(struct intel_memory_region *mr, bool allow_gpu,
+			   bool backup_pinned)
+{
+	static const struct i915_gem_apply_to_region_ops backup_ops = {
+		.process_obj = i915_ttm_backup,
+	};
+	struct i915_gem_ttm_pm_apply pm_apply = {
+		.base = {.ops = &backup_ops},
+		.allow_gpu = allow_gpu,
+		.backup_pinned = backup_pinned,
+	};
+
+	return i915_gem_process_region(mr, &pm_apply.base);
+}
+
+static int i915_ttm_restore(struct i915_gem_apply_to_region *apply,
+			    struct drm_i915_gem_object *obj)
+{
+	struct i915_gem_ttm_pm_apply *pm_apply =
+		container_of(apply, typeof(*pm_apply), base);
+	struct drm_i915_gem_object *backup = obj->ttm.backup;
+	struct ttm_buffer_object *backup_bo = i915_gem_to_ttm(backup);
+	struct ttm_operation_ctx ctx = {};
+	int err;
+
+	if (!obj->ttm.backup)
+		return 0;
+
+	if (!pm_apply->allow_gpu && (obj->flags & I915_BO_ALLOC_USER))
+		return 0;
+
+	err = i915_gem_object_lock(backup, apply->ww);
+	if (err)
+		return err;
+
+	/* Content may have been swapped. */
+	err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx);
+	if (!err) {
+		err = i915_gem_obj_copy_ttm(obj, backup, pm_apply->allow_gpu,
+					    false);
+		GEM_WARN_ON(err);
+
+		obj->ttm.backup = NULL;
+		err = 0;
+	}
+
+	i915_gem_ww_unlock_single(backup);
+	i915_gem_object_put(backup);
+
+	return err;
+}
+
+/**
+ * i915_ttm_restore_region - Restore all objects of a region from smem.
+ * @mr: The memory region
+ * @allow_gpu: Whether to allow the gpu blitter to recover.
+ *
+ * Loops over all objects of a region and, if they have a backup attached,
+ * restores the object contents from the backup and frees the backup.
+ *
+ * Return: Zero on success. Negative error code on error.
+ */
+int i915_ttm_restore_region(struct intel_memory_region *mr, bool allow_gpu)
+{
+	static const struct i915_gem_apply_to_region_ops restore_ops = {
+		.process_obj = i915_ttm_restore,
+	};
+	struct i915_gem_ttm_pm_apply pm_apply = {
+		.base = {.ops = &restore_ops},
+		.allow_gpu = allow_gpu,
+	};
+
+	return i915_gem_process_region(mr, &pm_apply.base);
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.h
new file mode 100644
index 000000000000..4da27c21f3f4
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef _I915_GEM_TTM_PM_H_
+#define _I915_GEM_TTM_PM_H_
+
+#include
+
+struct intel_memory_region;
+struct drm_i915_gem_object;
+
+int i915_ttm_backup_region(struct intel_memory_region *mr, bool allow_gpu,
+			   bool backup_pinned);
+
+void i915_ttm_recover_region(struct intel_memory_region *mr);
+
+int i915_ttm_restore_region(struct intel_memory_region *mr, bool allow_gpu);
+
+/* Internal I915 TTM functions below. */
+void i915_ttm_backup_free(struct drm_i915_gem_object *obj);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index dea8e2479897..c8e439d3b0e3 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -295,8 +295,6 @@ void intel_gt_suspend_prepare(struct intel_gt *gt)
 {
 	user_forcewake(gt, true);
 	wait_for_suspend(gt);
-
-	intel_uc_suspend(&gt->uc);
 }
 
 static suspend_state_t pm_suspend_target(void)
@@ -320,6 +318,8 @@ void intel_gt_suspend_late(struct intel_gt *gt)
 
 	GEM_BUG_ON(gt->awake);
 
+	intel_uc_suspend(&gt->uc);
+
 	/*
 	 * On disabling the device, we want to turn off HW access to memory
 	 * that we no longer own.
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 59fb4c710c8c..ffb54ea26dcc 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1096,9 +1096,7 @@ static int i915_drm_prepare(struct drm_device *dev)
 	 * split out that work and pull it forward so that after point,
 	 * the GPU is not woken again.
 	 */
-	i915_gem_suspend(i915);
-
-	return 0;
+	return i915_gem_backup_suspend(i915);
 }
 
 static int i915_drm_suspend(struct drm_device *dev)
@@ -1167,7 +1165,11 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
 
 	disable_rpm_wakeref_asserts(rpm);
 
-	i915_gem_suspend_late(dev_priv);
+	ret = i915_gem_suspend_late(dev_priv);
+	if (ret) {
+		enable_rpm_wakeref_asserts(rpm);
+		return ret;
+	}
 
 	intel_uncore_suspend(&dev_priv->uncore);
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index be2392bbcecc..7d448591a445 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1856,7 +1856,7 @@ void i915_gem_driver_unregister(struct drm_i915_private *i915);
 void i915_gem_driver_remove(struct drm_i915_private *dev_priv);
 void i915_gem_driver_release(struct drm_i915_private *dev_priv);
 void i915_gem_suspend(struct drm_i915_private *dev_priv);
-void i915_gem_suspend_late(struct drm_i915_private *dev_priv);
+int i915_gem_suspend_late(struct drm_i915_private *dev_priv);
 void i915_gem_resume(struct drm_i915_private *dev_priv);
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file);
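To summarize the ordering this patch establishes, here is a condensed sketch using the lmem_* helpers as defined above; error and recovery paths are omitted, and the function wrapper is illustrative only:

static void example_suspend_order(struct drm_i915_private *i915)
{
	/* Step 1: opportunistic blitter eviction of unpinned objects. */
	lmem_suspend(i915, true /* allow_gpu */, false /* backup_pinned */);

	/* Retire requests; more objects may become unpinned. */
	i915_gem_suspend(i915);

	/* Step 2: evict again; falls back to memcpy if the gt is wedged. */
	lmem_suspend(i915, true, false);

	/*
	 * Step 3, from i915_gem_suspend_late(): memcpy backup of the
	 * remaining pinned objects. uC suspend has moved to
	 * intel_gt_suspend_late() so a GuC is still functional during
	 * step 2.
	 */
	lmem_suspend(i915, false, true);
}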
From patchwork Mon Sep 6 16:55:12 2021
From: Thomas Hellström
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, Thomas Hellström, Tvrtko Ursulin, Matthew Brost, Chris Wilson
Date: Mon, 6 Sep 2021 18:55:12 +0200
Message-Id: <20210906165515.450541-5-thomas.hellstrom@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 4/6] drm/i915/gt: Register the migrate contexts with their engines

Pinned contexts, like the migrate contexts, need reset after resume since their context image may have been lost. Also the GuC needs to register pinned contexts. Add a list to struct intel_engine_cs where we add all pinned contexts on creation, and traverse that list at resume time to reset the pinned contexts. This fixes the kms_pipe_crc_basic@suspend-read-crc-pipe-a selftest for now, but proper LMEM backup / restore is needed for full suspend functionality. However, note that even with full LMEM backup / restore it may be desirable to keep the reset, since backing up the migrate context images must happen using memcpy() after the migrate context has become inactive, and for performance and other reasons we want to avoid memcpy() from LMEM. Also traverse the list at guc_init_lrc_mapping(), calling guc_kernel_context_pin() for the pinned contexts, as is already done for the kernel context. v2: - Don't reset the contexts on each __engine_unpark() but rather at resume time (Chris Wilson). v3: - Reset contexts in the engine sanitize callback.
(Chris Wilson) Cc: Tvrtko Ursulin Cc: Matthew Auld Cc: Maarten Lankhorst Cc: Brost Matthew Cc: Chris Wilson Signed-off-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_context_types.h | 8 +++++++ drivers/gpu/drm/i915/gt/intel_engine_cs.c | 4 ++++ drivers/gpu/drm/i915/gt/intel_engine_pm.c | 23 +++++++++++++++++++ drivers/gpu/drm/i915/gt/intel_engine_pm.h | 2 ++ drivers/gpu/drm/i915/gt/intel_engine_types.h | 7 ++++++ .../drm/i915/gt/intel_execlists_submission.c | 2 ++ .../gpu/drm/i915/gt/intel_ring_submission.c | 3 +++ drivers/gpu/drm/i915/gt/mock_engine.c | 2 ++ .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++--- 9 files changed, 60 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index e54351a170e2..a63631ea0ec4 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -152,6 +152,14 @@ struct intel_context { /** sseu: Control eu/slice partitioning */ struct intel_sseu sseu; + /** + * pinned_contexts_link: List link for the engine's pinned contexts. + * This is only used if this is a perma-pinned kernel context and + * the list is assumed to only be manipulated during driver load + * or unload time so no mutex protection currently. + */ + struct list_head pinned_contexts_link; + u8 wa_bb_page; /* if set, page num reserved for context workarounds */ struct { diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index 332efea696a5..c606a4714904 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -320,6 +320,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id) BUILD_BUG_ON(BITS_PER_TYPE(engine->mask) < I915_NUM_ENGINES); + INIT_LIST_HEAD(&engine->pinned_contexts_list); engine->id = id; engine->legacy_idx = INVALID_ENGINE; engine->mask = BIT(id); @@ -875,6 +876,8 @@ intel_engine_create_pinned_context(struct intel_engine_cs *engine, return ERR_PTR(err); } + list_add_tail(&ce->pinned_contexts_link, &engine->pinned_contexts_list); + /* * Give our perma-pinned kernel timelines a separate lockdep class, * so that we can use them from within the normal user timelines @@ -897,6 +900,7 @@ void intel_engine_destroy_pinned_context(struct intel_context *ce) list_del(&ce->timeline->engine_link); mutex_unlock(&hwsp->vm->mutex); + list_del(&ce->pinned_contexts_link); intel_context_unpin(ce); intel_context_put(ce); } diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index 1f07ac4e0672..dacd62773735 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -298,6 +298,29 @@ void intel_engine_init__pm(struct intel_engine_cs *engine) intel_engine_init_heartbeat(engine); } +/** + * intel_engine_reset_pinned_contexts - Reset the pinned contexts of + * an engine. + * @engine: The engine whose pinned contexts we want to reset. + * + * Typically the pinned context LMEM images lose or get their content + * corrupted on suspend. This function resets their images. 
+ */ +void intel_engine_reset_pinned_contexts(struct intel_engine_cs *engine) +{ + struct intel_context *ce; + + list_for_each_entry(ce, &engine->pinned_contexts_list, + pinned_contexts_link) { + /* kernel context gets reset at __engine_unpark() */ + if (ce == engine->kernel_context) + continue; + + dbg_poison_ce(ce); + ce->ops->reset(ce); + } +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_engine_pm.c" #endif diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h b/drivers/gpu/drm/i915/gt/intel_engine_pm.h index 70ea46d6cfb0..8520c595f5e1 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h @@ -69,4 +69,6 @@ intel_engine_create_kernel_request(struct intel_engine_cs *engine) void intel_engine_init__pm(struct intel_engine_cs *engine); +void intel_engine_reset_pinned_contexts(struct intel_engine_cs *engine); + #endif /* INTEL_ENGINE_PM_H */ diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h index bfbfe53c23dd..5ae1207c363b 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h @@ -307,6 +307,13 @@ struct intel_engine_cs { struct intel_context *kernel_context; /* pinned */ + /** + * pinned_contexts_list: List of pinned contexts. This list is only + * assumed to be manipulated during driver load- or unload time and + * does therefore not have any additional protection. + */ + struct list_head pinned_contexts_list; + intel_engine_mask_t saturated; /* submitting semaphores too late? */ struct { diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index de5f9c86b9a4..e80d6a1c7cde 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -2791,6 +2791,8 @@ static void execlists_sanitize(struct intel_engine_cs *engine) /* And scrub the dirty cachelines for the HWSP */ clflush_cache_range(engine->status_page.addr, PAGE_SIZE); + + intel_engine_reset_pinned_contexts(engine); } static void enable_error_interrupt(struct intel_engine_cs *engine) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index 3c65efcb7bed..593524195707 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -17,6 +17,7 @@ #include "intel_ring.h" #include "shmem_utils.h" #include "intel_engine_heartbeat.h" +#include "intel_engine_pm.h" /* Rough estimate of the typical request size, performing a flush, * set-context and then emitting the batch. 
@@ -292,6 +293,8 @@ static void xcs_sanitize(struct intel_engine_cs *engine) /* And scrub the dirty cachelines for the HWSP */ clflush_cache_range(engine->status_page.addr, PAGE_SIZE); + + intel_engine_reset_pinned_contexts(engine); } static void reset_prepare(struct intel_engine_cs *engine) diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c index 2c1af030310c..8b89215afe46 100644 --- a/drivers/gpu/drm/i915/gt/mock_engine.c +++ b/drivers/gpu/drm/i915/gt/mock_engine.c @@ -376,6 +376,8 @@ int mock_engine_init(struct intel_engine_cs *engine) { struct intel_context *ce; + INIT_LIST_HEAD(&engine->pinned_contexts_list); + engine->sched_engine = i915_sched_engine_create(ENGINE_MOCK); if (!engine->sched_engine) return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 87d8dc8f51b9..58bc50d3abe5 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -2310,6 +2310,8 @@ static void guc_sanitize(struct intel_engine_cs *engine) /* And scrub the dirty cachelines for the HWSP */ clflush_cache_range(engine->status_page.addr, PAGE_SIZE); + + intel_engine_reset_pinned_contexts(engine); } static void setup_hwsp(struct intel_engine_cs *engine) @@ -2385,9 +2387,13 @@ static inline void guc_init_lrc_mapping(struct intel_guc *guc) * and even it did this code would be run again. */ - for_each_engine(engine, gt, id) - if (engine->kernel_context) - guc_kernel_context_pin(guc, engine->kernel_context); + for_each_engine(engine, gt, id) { + struct intel_context *ce; + + list_for_each_entry(ce, &engine->pinned_contexts_list, + pinned_contexts_link) + guc_kernel_context_pin(guc, ce); + } } static void guc_release(struct intel_engine_cs *engine) From patchwork Mon Sep 6 16:55:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 12477151 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6171C433F5 for ; Mon, 6 Sep 2021 16:55:53 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 909ED60FBF for ; Mon, 6 Sep 2021 16:55:53 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 909ED60FBF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 19A0D89BBE; Mon, 6 Sep 2021 16:55:45 +0000 (UTC) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4447789BAB; Mon, 6 Sep 2021 16:55:43 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10099"; a="219697383" X-IronPort-AV: E=Sophos;i="5.85,272,1624345200"; d="scan'208";a="219697383" 
From: Thomas Hellström
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, Thomas Hellström
Date: Mon, 6 Sep 2021 18:55:13 +0200
Message-Id: <20210906165515.450541-6-thomas.hellstrom@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 5/6] drm/i915: Don't back up pinned LMEM context images and rings during suspend

Pinned context images are now reset during resume. Don't back them up, and since rings can be assumed empty at suspend, don't back them up either. Introduce a new object flag, I915_BO_ALLOC_PM_VOLATILE, meaning that an object is allowed to lose its content on suspend.

Signed-off-by: Thomas Hellström
---
 .../gpu/drm/i915/gem/i915_gem_object_types.h | 17 ++++++++++-------
 drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c   |  3 +++
 drivers/gpu/drm/i915/gt/intel_lrc.c          |  3 ++-
 drivers/gpu/drm/i915/gt/intel_ring.c         |  3 ++-
 4 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 734cc8e16481..66123ba46247 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -288,16 +288,19 @@ struct drm_i915_gem_object {
 	I915_SELFTEST_DECLARE(struct list_head st_link);
 
 	unsigned long flags;
-#define I915_BO_ALLOC_CONTIGUOUS BIT(0)
-#define I915_BO_ALLOC_VOLATILE   BIT(1)
-#define I915_BO_ALLOC_CPU_CLEAR  BIT(2)
-#define I915_BO_ALLOC_USER       BIT(3)
+#define I915_BO_ALLOC_CONTIGUOUS  BIT(0)
+#define I915_BO_ALLOC_VOLATILE    BIT(1)
+#define I915_BO_ALLOC_CPU_CLEAR   BIT(2)
+#define I915_BO_ALLOC_USER        BIT(3)
+/* Object may lose its contents on suspend / resume */
+#define I915_BO_ALLOC_PM_VOLATILE BIT(4)
 #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
 			     I915_BO_ALLOC_VOLATILE | \
 			     I915_BO_ALLOC_CPU_CLEAR | \
-			     I915_BO_ALLOC_USER)
-#define I915_BO_READONLY         BIT(4)
-#define I915_TILING_QUIRK_BIT    5 /* unknown swizzling; do not release! */
+			     I915_BO_ALLOC_USER | \
+			     I915_BO_ALLOC_PM_VOLATILE)
+#define I915_BO_READONLY          BIT(5)
+#define I915_TILING_QUIRK_BIT     6 /* unknown swizzling; do not release!
*/ /** * @mem_flags - Mutable placement-related flags diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c index 3884bf45dab8..eaceecfc3f19 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c @@ -61,6 +61,9 @@ static int i915_ttm_backup(struct i915_gem_apply_to_region *apply, if (!pm_apply->backup_pinned) return 0; + if (obj->flags & I915_BO_ALLOC_PM_VOLATILE) + return 0; + sys_region = i915->mm.regions[INTEL_REGION_SMEM]; backup = i915_gem_object_create_region(sys_region, obj->base.size, diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index 6ba8daea2f56..3ef9eaf8c50e 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -942,7 +942,8 @@ __lrc_alloc_state(struct intel_context *ce, struct intel_engine_cs *engine) context_size += PAGE_SIZE; } - obj = i915_gem_object_create_lmem(engine->i915, context_size, 0); + obj = i915_gem_object_create_lmem(engine->i915, context_size, + I915_BO_ALLOC_PM_VOLATILE); if (IS_ERR(obj)) obj = i915_gem_object_create_shmem(engine->i915, context_size); if (IS_ERR(obj)) diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c index 7c4d5158e03b..2fdd52b62092 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring.c +++ b/drivers/gpu/drm/i915/gt/intel_ring.c @@ -112,7 +112,8 @@ static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size) struct drm_i915_gem_object *obj; struct i915_vma *vma; - obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_VOLATILE); + obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_VOLATILE | + I915_BO_ALLOC_PM_VOLATILE); if (IS_ERR(obj) && i915_ggtt_has_aperture(ggtt)) obj = i915_gem_object_create_stolen(i915, size); if (IS_ERR(obj)) From patchwork Mon Sep 6 16:55:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Thomas_Hellstr=C3=B6m?= X-Patchwork-Id: 12477153 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE73AC433F5 for ; Mon, 6 Sep 2021 16:55:59 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5A4B2606A5 for ; Mon, 6 Sep 2021 16:55:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 5A4B2606A5 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CF80889BF1; Mon, 6 Sep 2021 16:55:46 +0000 (UTC) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by gabe.freedesktop.org (Postfix) with ESMTPS id 19FDA89BD2; Mon, 6 Sep 2021 16:55:45 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10099"; a="219697389" X-IronPort-AV: E=Sophos;i="5.85,272,1624345200"; 
d="scan'208";a="219697389" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Sep 2021 09:55:44 -0700 X-IronPort-AV: E=Sophos;i="5.85,272,1624345200"; d="scan'208";a="579695104" Received: from mkrygin-mobl1.ccr.corp.intel.com (HELO thellstr-mobl1.intel.com) ([10.249.254.45]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Sep 2021 09:55:43 -0700 From: =?utf-8?q?Thomas_Hellstr=C3=B6m?= To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, =?utf-8?q?Tho?= =?utf-8?q?mas_Hellstr=C3=B6m?= Date: Mon, 6 Sep 2021 18:55:14 +0200 Message-Id: <20210906165515.450541-7-thomas.hellstrom@linux.intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210906165515.450541-1-thomas.hellstrom@linux.intel.com> References: <20210906165515.450541-1-thomas.hellstrom@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 6/6] drm/i915: Reduce the number of objects subject to memcpy recover X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We really only need memcpy restore for objects that affect the operability of the migrate context. That is, primarily the page-table objects of the migrate VM. Add an object flag, I915_BO_ALLOC_PM_EARLY for objects that need early restores using memcpy and a way to assign LMEM page-table object flags to be used by the vms. Restore objects without this flag with the gpu blitter and only objects carrying the flag using TTM memcpy. Initially mark the migrate, gt, gtt and vgpu vms to use this flag, and defer for a later audit which vms actually need it. Most importantly, user- allocated vms with pinned page-table objects can be restored using the blitter. Performance-wise memcpy restore is probably as fast as gpu restore if not faster, but using gpu restore will help tackling future restrictions in mappable LMEM size. 
Signed-off-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_context.c | 4 ++-- drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 9 ++++++--- drivers/gpu/drm/i915/gem/i915_gem_pm.c | 6 ++++-- drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c | 6 ++++-- drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 2 +- drivers/gpu/drm/i915/gt/gen6_ppgtt.c | 2 +- drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 5 +++-- drivers/gpu/drm/i915/gt/gen8_ppgtt.h | 4 +++- drivers/gpu/drm/i915/gt/intel_ggtt.c | 2 +- drivers/gpu/drm/i915/gt/intel_gt.c | 2 +- drivers/gpu/drm/i915/gt/intel_gtt.c | 3 ++- drivers/gpu/drm/i915/gt/intel_gtt.h | 9 +++++++-- drivers/gpu/drm/i915/gt/intel_migrate.c | 2 +- drivers/gpu/drm/i915/gt/intel_ppgtt.c | 13 ++++++++----- drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 2 +- drivers/gpu/drm/i915/gvt/scheduler.c | 2 +- drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 4 ++-- 17 files changed, 48 insertions(+), 29 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index fd169cf2f75a..3dbebced0950 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -1312,7 +1312,7 @@ i915_gem_create_context(struct drm_i915_private *i915, } else if (HAS_FULL_PPGTT(i915)) { struct i915_ppgtt *ppgtt; - ppgtt = i915_ppgtt_create(&i915->gt); + ppgtt = i915_ppgtt_create(&i915->gt, 0); if (IS_ERR(ppgtt)) { drm_dbg(&i915->drm, "PPGTT setup failed (%ld)\n", PTR_ERR(ppgtt)); @@ -1490,7 +1490,7 @@ int i915_gem_vm_create_ioctl(struct drm_device *dev, void *data, if (args->flags) return -EINVAL; - ppgtt = i915_ppgtt_create(&i915->gt); + ppgtt = i915_ppgtt_create(&i915->gt, 0); if (IS_ERR(ppgtt)) return PTR_ERR(ppgtt); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 66123ba46247..477b98b656b4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -294,13 +294,16 @@ struct drm_i915_gem_object { #define I915_BO_ALLOC_USER BIT(3) /* Object may lose its contents on suspend / resume */ #define I915_BO_ALLOC_PM_VOLATILE BIT(4) +/* Object needs to be restored early using memcpy during resume */ +#define I915_BO_ALLOC_PM_EARLY BIT(5) #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \ I915_BO_ALLOC_VOLATILE | \ I915_BO_ALLOC_CPU_CLEAR | \ I915_BO_ALLOC_USER | \ - I915_BO_ALLOC_PM_VOLATILE) -#define I915_BO_READONLY BIT(5) -#define I915_TILING_QUIRK_BIT 6 /* unknown swizzling; do not release! */ + I915_BO_ALLOC_PM_VOLATILE | \ + I915_BO_ALLOC_PM_EARLY) +#define I915_BO_READONLY BIT(6) +#define I915_TILING_QUIRK_BIT 7 /* unknown swizzling; do not release! */ /** * @mem_flags - Mutable placement-related flags diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c index 9746c255ddcc..cdd344f64404 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c @@ -98,9 +98,11 @@ int i915_gem_backup_suspend(struct drm_i915_private *i915) * More objects may have become unpinned as requests were * retired. Now try to evict again. The gt may be wedged here * in which case we automatically fall back to memcpy. + * We also allow backing up pinned objects that have not been + * marked for early recovery, and that may contain, for example, + * page-tables for the migrate context.
*/ - - ret = lmem_suspend(i915, true, false); + ret = lmem_suspend(i915, true, true); if (ret) goto out_recover; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c index eaceecfc3f19..bedd0bdd8302 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c @@ -58,7 +58,8 @@ static int i915_ttm_backup(struct i915_gem_apply_to_region *apply, if (pm_apply->allow_gpu && i915_gem_object_evictable(obj)) return ttm_bo_validate(bo, i915_ttm_sys_placement(), &ctx); - if (!pm_apply->backup_pinned) + if (!pm_apply->backup_pinned || + (pm_apply->allow_gpu && (obj->flags & I915_BO_ALLOC_PM_EARLY))) return 0; if (obj->flags & I915_BO_ALLOC_PM_VOLATILE) @@ -160,7 +161,8 @@ static int i915_ttm_restore(struct i915_gem_apply_to_region *apply, if (!obj->ttm.backup) return 0; - if (!pm_apply->allow_gpu && (obj->flags & I915_BO_ALLOC_USER)) + if (!pm_apply->allow_gpu && ((obj->flags & I915_BO_ALLOC_USER) || + !(obj->flags & I915_BO_ALLOC_PM_EARLY))) return 0; err = i915_gem_object_lock(backup, apply->ww); diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index a094f3ce1a90..149f62221a83 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -1629,7 +1629,7 @@ int i915_gem_huge_page_mock_selftests(void) mkwrite_device_info(dev_priv)->ppgtt_type = INTEL_PPGTT_FULL; mkwrite_device_info(dev_priv)->ppgtt_size = 48; - ppgtt = i915_ppgtt_create(&dev_priv->gt); + ppgtt = i915_ppgtt_create(&dev_priv->gt, 0); if (IS_ERR(ppgtt)) { err = PTR_ERR(ppgtt); goto out_unlock; diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c index 1aee5e6b1b23..890191f286e3 100644 --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c @@ -429,7 +429,7 @@ struct i915_ppgtt *gen6_ppgtt_create(struct intel_gt *gt) mutex_init(&ppgtt->flush); mutex_init(&ppgtt->pin_mutex); - ppgtt_init(&ppgtt->base, gt); + ppgtt_init(&ppgtt->base, gt, 0); ppgtt->base.vm.pd_shift = ilog2(SZ_4K * SZ_4K / sizeof(gen6_pte_t)); ppgtt->base.vm.top = 1; diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index 6e0e52eeb87a..da4aa73f46a9 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -751,7 +751,8 @@ gen8_alloc_top_pd(struct i915_address_space *vm) * space. * */ -struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt) +struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt, + unsigned long lmem_pt_obj_flags) { struct i915_ppgtt *ppgtt; int err; @@ -760,7 +761,7 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt) if (!ppgtt) return ERR_PTR(-ENOMEM); - ppgtt_init(ppgtt, gt); + ppgtt_init(ppgtt, gt, lmem_pt_obj_flags); ppgtt->vm.top = i915_vm_is_4lvl(&ppgtt->vm) ? 
3 : 2; ppgtt->vm.pd_shift = ilog2(SZ_4K * SZ_4K / sizeof(gen8_pte_t)); diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.h b/drivers/gpu/drm/i915/gt/gen8_ppgtt.h index b9028c2ad3c7..f541d19264b4 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.h +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.h @@ -12,7 +12,9 @@ struct i915_address_space; struct intel_gt; enum i915_cache_level; -struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt); +struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt, + unsigned long lmem_pt_obj_flags); + u64 gen8_ggtt_pte_encode(dma_addr_t addr, enum i915_cache_level level, u32 flags); diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index de3ac58fceec..6e5d2f82b1f2 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -644,7 +644,7 @@ static int init_aliasing_ppgtt(struct i915_ggtt *ggtt) struct i915_ppgtt *ppgtt; int err; - ppgtt = i915_ppgtt_create(ggtt->vm.gt); + ppgtt = i915_ppgtt_create(ggtt->vm.gt, I915_BO_ALLOC_PM_EARLY); if (IS_ERR(ppgtt)) return PTR_ERR(ppgtt); diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c index 62d40c986642..70c06d6eeec8 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.c +++ b/drivers/gpu/drm/i915/gt/intel_gt.c @@ -481,7 +481,7 @@ static void intel_gt_fini_scratch(struct intel_gt *gt) static struct i915_address_space *kernel_vm(struct intel_gt *gt) { if (INTEL_PPGTT(gt->i915) > INTEL_PPGTT_ALIASING) - return &i915_ppgtt_create(gt)->vm; + return &i915_ppgtt_create(gt, I915_BO_ALLOC_PM_EARLY)->vm; else return i915_vm_get(&gt->ggtt->vm); } diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c index e137dd32b5b8..ea6ea97a7a62 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.c +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c @@ -28,7 +28,8 @@ struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz) * used the passed in size for the page size, which should ensure it * also has the same alignment. */ - obj = __i915_gem_object_create_lmem_with_ps(vm->i915, sz, sz, 0); + obj = __i915_gem_object_create_lmem_with_ps(vm->i915, sz, sz, + vm->lmem_pt_obj_flags); /* * Ensure all paging structures for this vm share the same dma-resv * object underneath, with the idea that one object_lock() will lock diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index bc7153018ebd..8361a1e38c98 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -260,6 +260,9 @@ struct i915_address_space { u8 pd_shift; u8 scratch_order; + /* Flags used when creating page-table objects for this vm */ + unsigned long lmem_pt_obj_flags; + struct drm_i915_gem_object * (*alloc_pt_dma)(struct i915_address_space *vm, int sz); @@ -519,7 +522,8 @@ i915_page_dir_dma_addr(const struct i915_ppgtt *ppgtt, const unsigned int n) return __px_dma(pt ?
px_base(pt) : ppgtt->vm.scratch[ppgtt->vm.top]); } -void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt); +void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt, + unsigned long lmem_pt_obj_flags); int i915_ggtt_probe_hw(struct drm_i915_private *i915); int i915_ggtt_init_hw(struct drm_i915_private *i915); @@ -537,7 +541,8 @@ static inline bool i915_ggtt_has_aperture(const struct i915_ggtt *ggtt) int i915_ppgtt_init_hw(struct intel_gt *gt); -struct i915_ppgtt *i915_ppgtt_create(struct intel_gt *gt); +struct i915_ppgtt *i915_ppgtt_create(struct intel_gt *gt, + unsigned long lmem_pt_obj_flags); void i915_ggtt_suspend(struct i915_ggtt *gtt); void i915_ggtt_resume(struct i915_ggtt *ggtt); diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c index 1dac21aa7e5c..afb1cce9a352 100644 --- a/drivers/gpu/drm/i915/gt/intel_migrate.c +++ b/drivers/gpu/drm/i915/gt/intel_migrate.c @@ -78,7 +78,7 @@ static struct i915_address_space *migrate_vm(struct intel_gt *gt) * TODO: Add support for huge LMEM PTEs */ - vm = i915_ppgtt_create(gt); + vm = i915_ppgtt_create(gt, I915_BO_ALLOC_PM_EARLY); if (IS_ERR(vm)) return ERR_CAST(vm); diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index 886060f7e6fc..4396bfd630d8 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -155,19 +155,20 @@ int i915_ppgtt_init_hw(struct intel_gt *gt) } static struct i915_ppgtt * -__ppgtt_create(struct intel_gt *gt) +__ppgtt_create(struct intel_gt *gt, unsigned long lmem_pt_obj_flags) { if (GRAPHICS_VER(gt->i915) < 8) return gen6_ppgtt_create(gt); else - return gen8_ppgtt_create(gt); + return gen8_ppgtt_create(gt, lmem_pt_obj_flags); } -struct i915_ppgtt *i915_ppgtt_create(struct intel_gt *gt) +struct i915_ppgtt *i915_ppgtt_create(struct intel_gt *gt, + unsigned long lmem_pt_obj_flags) { struct i915_ppgtt *ppgtt; - ppgtt = __ppgtt_create(gt); + ppgtt = __ppgtt_create(gt, lmem_pt_obj_flags); if (IS_ERR(ppgtt)) return ppgtt; @@ -298,7 +299,8 @@ int ppgtt_set_pages(struct i915_vma *vma) return 0; } -void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt) +void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt, + unsigned long lmem_pt_obj_flags) { struct drm_i915_private *i915 = gt->i915; @@ -306,6 +308,7 @@ void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt) ppgtt->vm.i915 = i915; ppgtt->vm.dma = i915->drm.dev; ppgtt->vm.total = BIT_ULL(INTEL_INFO(i915)->ppgtt_size); + ppgtt->vm.lmem_pt_obj_flags = lmem_pt_obj_flags; dma_resv_init(&ppgtt->vm._resv); i915_address_space_init(&ppgtt->vm, VM_CLASS_PPGTT); diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c index 2c1ed32ca5ac..b9e1e86f0e21 100644 --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c @@ -1596,7 +1596,7 @@ static int igt_reset_evict_ppgtt(void *arg) if (INTEL_PPGTT(gt->i915) < INTEL_PPGTT_FULL) return 0; - ppgtt = i915_ppgtt_create(gt); + ppgtt = i915_ppgtt_create(gt, 0); if (IS_ERR(ppgtt)) return PTR_ERR(ppgtt); diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c index b56a8e37a3cd..0d18e13e3468 100644 --- a/drivers/gpu/drm/i915/gvt/scheduler.c +++ b/drivers/gpu/drm/i915/gvt/scheduler.c @@ -1386,7 +1386,7 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu) enum intel_engine_id i; int ret; - ppgtt = i915_ppgtt_create(&i915->gt); + ppgtt = i915_ppgtt_create(&i915->gt, 
I915_BO_ALLOC_PM_EARLY); if (IS_ERR(ppgtt)) return PTR_ERR(ppgtt); diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index f843a5040706..f0645807518e 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -155,7 +155,7 @@ static int igt_ppgtt_alloc(void *arg) if (!HAS_PPGTT(dev_priv)) return 0; - ppgtt = i915_ppgtt_create(&dev_priv->gt); + ppgtt = i915_ppgtt_create(&dev_priv->gt, 0); if (IS_ERR(ppgtt)) return PTR_ERR(ppgtt); @@ -1053,7 +1053,7 @@ static int exercise_ppgtt(struct drm_i915_private *dev_priv, if (IS_ERR(file)) return PTR_ERR(file); - ppgtt = i915_ppgtt_create(&dev_priv->gt); + ppgtt = i915_ppgtt_create(&dev_priv->gt, 0); if (IS_ERR(ppgtt)) { err = PTR_ERR(ppgtt); goto out_free;