From patchwork Tue Jun 21 20:00:49 2022
From: Robert Beckett <bob.beckett@collabora.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
    David Airlie, Daniel Vetter
Cc: Thomas Hellström, kernel@collabora.com, Matthew Auld,
    linux-kernel@vger.kernel.org
Date: Tue, 21 Jun 2022 20:00:49 +0000
Message-Id: <20220621200058.3536182-2-bob.beckett@collabora.com>
In-Reply-To: <20220621200058.3536182-1-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 01/10] drm/i915/ttm: don't trample cache_level overrides during ttm move

Various places within the driver override the default chosen cache_level.
Before ttm, these overrides were permanent: once set, they remained until
explicitly changed again, i.e. for the lifetime of the buffer. The ttm
move code, however, re-makes that decision on every move, which is usually
well after object creation, so it would trample the cache_level override
and revert it back to the default.

Add logic to indicate whether the caching mode has been set by anything
other than the move logic. If so, assume that the code that overrode the
default knows best and keep it.
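
To illustrate the intended semantics, here is a minimal standalone C model
of the flag (the obj struct, enum and helper names below are invented for
this sketch and are not the driver's types):

#include <assert.h>
#include <stdbool.h>

enum cache_level { CACHE_NONE, CACHE_LLC };

struct obj {
        enum cache_level cache_level;
        bool cache_level_override;      /* set by any explicit override */
};

/* explicit override, as i915_gem_object_set_cache_coherency() now records */
static void set_cache_level(struct obj *o, enum cache_level lvl)
{
        o->cache_level = lvl;
        o->cache_level_override = true;
}

/* what the move path should do: apply its default only when no explicit
 * override has been recorded */
static void adjust_after_move(struct obj *o, enum cache_level deflt)
{
        if (!o->cache_level_override)
                o->cache_level = deflt;
}

int main(void)
{
        struct obj o = { CACHE_NONE, false };

        adjust_after_move(&o, CACHE_LLC);       /* no override yet: default wins */
        assert(o.cache_level == CACHE_LLC);

        set_cache_level(&o, CACHE_NONE);        /* explicit override */
        adjust_after_move(&o, CACHE_LLC);       /* move must not trample it */
        assert(o.cache_level == CACHE_NONE);
        return 0;
}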
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c       | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c          | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c     | 9 ++++++---
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 06b1b188ce5a..519887769c08 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -125,6 +125,7 @@ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj,
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
 	obj->cache_level = cache_level;
+	obj->ttm.cache_level_override = true;
 
 	if (cache_level != I915_CACHE_NONE)
 		obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ |
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 2c88bdb8ff7c..6632ed52e919 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -605,6 +605,7 @@ struct drm_i915_gem_object {
 		struct i915_gem_object_page_iter get_io_page;
 		struct drm_i915_gem_object *backup;
 		bool created:1;
+		bool cache_level_override:1;
 	} ttm;
 
 	/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 4c25d9b2f138..27d59639177f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1241,6 +1241,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	i915_gem_object_init_memory_region(obj, mem);
 	i915_ttm_adjust_domains_after_move(obj);
 	i915_ttm_adjust_gem_after_move(obj);
+	obj->ttm.cache_level_override = false;
 	i915_gem_object_unlock(obj);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index a10716f4e717..4c1de0b4a10f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -123,9 +123,12 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
 	obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ? I915_BO_FLAG_IOMEM :
 		I915_BO_FLAG_STRUCT_PAGE;
 
-	cache_level = i915_ttm_cache_level(to_i915(bo->base.dev), bo->resource,
-					   bo->ttm);
-	i915_gem_object_set_cache_coherency(obj, cache_level);
+	if (!obj->ttm.cache_level_override) {
+		cache_level = i915_ttm_cache_level(to_i915(bo->base.dev),
+						   bo->resource, bo->ttm);
+		i915_gem_object_set_cache_coherency(obj, cache_level);
+		obj->ttm.cache_level_override = false;
+	}
 }
 
 /**

From patchwork Tue Jun 21 20:00:50 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:50 +0000
Message-Id: <20220621200058.3536182-3-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 02/10] drm/i915: limit ttm to dma32 for i965G[M]

i965G[M] cannot relocate objects above 4GiB. Ensure ttm uses dma32 on
these systems.
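
As background, the final argument of ttm_device_init() selects DMA32 page
allocation for TTM's page pool, which keeps pool pages below 4GiB. A toy
standalone model of the address constraint being enforced (addr_ok_for_i965()
is invented for this sketch):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* every byte of a relocation target must live below 4GiB on i965g/gm,
 * which is what DMA32-backed allocations guarantee */
static bool addr_ok_for_i965(uint64_t phys, uint64_t size)
{
        return phys + size >= phys && phys + size <= (1ull << 32);
}

int main(void)
{
        assert(addr_ok_for_i965(0xfffff000ull, 0x1000));   /* last page below 4GiB */
        assert(!addr_ok_for_i965(0x100000000ull, 0x1000)); /* above 4GiB: rejected */
        return 0;
}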
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_region_ttm.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index 62ff77445b01..fd2ecfdd8fa1 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -32,10 +32,15 @@ int intel_region_ttm_device_init(struct drm_i915_private *dev_priv)
 {
 	struct drm_device *drm = &dev_priv->drm;
+	bool use_dma32 = false;
+
+	/* i965g[m] cannot relocate objects above 4GiB. */
+	if (IS_I965GM(dev_priv) || IS_I965G(dev_priv))
+		use_dma32 = true;
 
 	return ttm_device_init(&dev_priv->bdev, i915_ttm_driver(),
 			       drm->dev, drm->anon_inode->i_mapping,
-			       drm->vma_offset_manager, false, false);
+			       drm->vma_offset_manager, false, use_dma32);
 }
 
 /**

From patchwork Tue Jun 21 20:00:51 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:51 +0000
Message-Id: <20220621200058.3536182-4-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 03/10] drm/i915/ttm: only trust snooping for dgfx when deciding default cache_level

By default i915_ttm_cache_level() decides I915_CACHE_LLC if HAS_SNOOP.
This diverges from the existing backend code, which only considers
HAS_LLC. Testing shows that trusting snooping is unreliable on gen5 and
earlier, and on bsw via ggtt mappings, so limit the snooping trust to
DGFX for now to maintain the previous behaviour.
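
The new decision can be modelled as a small pure function. A standalone
sketch (pick_cache_level() and its boolean parameters are stand-ins for the
HAS_LLC/HAS_SNOOP/IS_DGFX checks):

#include <assert.h>
#include <stdbool.h>

enum cache_level { CACHE_NONE, CACHE_LLC };

/* model of the new decision: snooping is only trusted on discrete parts */
static enum cache_level pick_cache_level(bool has_llc, bool has_snoop,
                                         bool is_dgfx, bool binds_lmem,
                                         bool ttm_cached)
{
        bool can_snoop = has_snoop && is_dgfx;

        return ((has_llc || can_snoop) && !binds_lmem && ttm_cached) ?
                CACHE_LLC : CACHE_NONE;
}

int main(void)
{
        /* integrated part with snooping: no longer trusted */
        assert(pick_cache_level(false, true, false, false, true) == CACHE_NONE);
        /* discrete part with snooping keeps LLC caching for cached pages */
        assert(pick_cache_level(false, true, true, false, true) == CACHE_LLC);
        /* LLC platforms are unaffected */
        assert(pick_cache_level(true, false, false, false, true) == CACHE_LLC);
        return 0;
}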
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 4c1de0b4a10f..40249fa28a7a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -46,7 +46,9 @@ static enum i915_cache_level
 i915_ttm_cache_level(struct drm_i915_private *i915, struct ttm_resource *res,
 		     struct ttm_tt *ttm)
 {
-	return ((HAS_LLC(i915) || HAS_SNOOP(i915)) &&
+	bool can_snoop = HAS_SNOOP(i915) && IS_DGFX(i915);
+
+	return ((HAS_LLC(i915) || can_snoop) &&
 		!i915_ttm_gtt_binds_lmem(res) &&
 		ttm->caching == ttm_cached) ? I915_CACHE_LLC :
 		I915_CACHE_NONE;

From patchwork Tue Jun 21 20:00:52 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:52 +0000
Message-Id: <20220621200058.3536182-5-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 04/10] drm/i915/gem: selftest should not attempt mmap of private regions

During testing, make can_mmap() consider whether the object's region is
private, so that selftests do not attempt to mmap objects placed in
private regions.
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 5bc93a1ce3e3..76181e28c75e 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -869,6 +869,9 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	bool no_map;
 
+	if (obj->mm.region && obj->mm.region->private)
+		return false;
+
 	if (obj->ops->mmap_offset)
 		return type == I915_MMAP_TYPE_FIXED;
 	else if (type == I915_MMAP_TYPE_FIXED)

From patchwork Tue Jun 21 20:00:53 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:53 +0000
Message-Id: <20220621200058.3536182-6-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 05/10] drm/i915: instantiate ttm range manager for stolen memory

Prepare for a ttm-based stolen region by using the ttm range manager as
the resource manager for the stolen region.
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c |  6 ++--
 drivers/gpu/drm/i915/intel_region_ttm.c     | 31 +++++++++++++++-----
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 40249fa28a7a..675e9ab30396 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -60,11 +60,13 @@ i915_ttm_region(struct ttm_device *bdev, int ttm_mem_type)
 	struct drm_i915_private *i915 = container_of(bdev, typeof(*i915), bdev);
 
 	/* There's some room for optimization here... */
-	GEM_BUG_ON(ttm_mem_type != I915_PL_SYSTEM &&
-		   ttm_mem_type < I915_PL_LMEM0);
+	GEM_BUG_ON(ttm_mem_type == I915_PL_GGTT);
+
 	if (ttm_mem_type == I915_PL_SYSTEM)
 		return intel_memory_region_lookup(i915, INTEL_MEMORY_SYSTEM,
 						  0);
+	if (ttm_mem_type == I915_PL_STOLEN)
+		return i915->mm.stolen_region;
 
 	return intel_memory_region_lookup(i915, INTEL_MEMORY_LOCAL,
 					  ttm_mem_type - I915_PL_LMEM0);
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index fd2ecfdd8fa1..694e9acb69e2 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -54,7 +54,7 @@ void intel_region_ttm_device_fini(struct drm_i915_private *dev_priv)
 
 /*
  * Map the i915 memory regions to TTM memory types. We use the
- * driver-private types for now, reserving TTM_PL_VRAM for stolen
+ * driver-private types for now, reserving I915_PL_STOLEN for stolen
  * memory and TTM_PL_TT for GGTT use if decided to implement this.
  */
 int intel_region_to_ttm_type(const struct intel_memory_region *mem)
@@ -63,11 +63,17 @@ int intel_region_to_ttm_type(const struct intel_memory_region *mem)
 
 	GEM_BUG_ON(mem->type != INTEL_MEMORY_LOCAL &&
 		   mem->type != INTEL_MEMORY_MOCK &&
-		   mem->type != INTEL_MEMORY_SYSTEM);
+		   mem->type != INTEL_MEMORY_SYSTEM &&
+		   mem->type != INTEL_MEMORY_STOLEN_SYSTEM &&
+		   mem->type != INTEL_MEMORY_STOLEN_LOCAL);
 
 	if (mem->type == INTEL_MEMORY_SYSTEM)
 		return TTM_PL_SYSTEM;
 
+	if (mem->type == INTEL_MEMORY_STOLEN_SYSTEM ||
+	    mem->type == INTEL_MEMORY_STOLEN_LOCAL)
+		return I915_PL_STOLEN;
+
 	type = mem->instance + TTM_PL_PRIV;
 	GEM_BUG_ON(type >= TTM_NUM_MEM_TYPES);
 
@@ -91,10 +97,16 @@ int intel_region_ttm_init(struct intel_memory_region *mem)
 	int mem_type = intel_region_to_ttm_type(mem);
 	int ret;
 
-	ret = i915_ttm_buddy_man_init(bdev, mem_type, false,
-				      resource_size(&mem->region),
-				      mem->io_size,
-				      mem->min_page_size, PAGE_SIZE);
+	if (mem_type == I915_PL_STOLEN) {
+		ret = ttm_range_man_init(bdev, mem_type, false,
+					 resource_size(&mem->region) >> PAGE_SHIFT);
+		mem->is_range_manager = true;
+	} else {
+		ret = i915_ttm_buddy_man_init(bdev, mem_type, false,
+					      resource_size(&mem->region),
+					      mem->io_size,
+					      mem->min_page_size, PAGE_SIZE);
+	}
 	if (ret)
 		return ret;
 
@@ -114,6 +126,7 @@ int intel_region_ttm_fini(struct intel_memory_region *mem)
 int intel_region_ttm_fini(struct intel_memory_region *mem)
 {
 	struct ttm_resource_manager *man = mem->region_private;
+	int mem_type = intel_region_to_ttm_type(mem);
 	int ret = -EBUSY;
 	int count;
 
@@ -144,8 +157,10 @@ int intel_region_ttm_fini(struct intel_memory_region *mem)
 	if (ret || !man)
 		return ret;
 
-	ret = i915_ttm_buddy_man_fini(&mem->i915->bdev,
-				      intel_region_to_ttm_type(mem));
+	if (mem_type == I915_PL_STOLEN)
+		ret = ttm_range_man_fini(&mem->i915->bdev, mem_type);
+	else
+		ret = i915_ttm_buddy_man_fini(&mem->i915->bdev, mem_type);
 	GEM_WARN_ON(ret);
 	mem->region_private = NULL;

From patchwork Tue Jun 21 20:00:54 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:54 +0000
Message-Id: <20220621200058.3536182-7-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 06/10] drm/i915: sanitize mem_flags for stolen buffers

Stolen regions are not page backed, nor are they considered iomem.
Prevent the flags from indicating otherwise. This correctly stops stolen
buffers from being mapped directly. See the i915_gem_object_has_struct_page()
and i915_gem_object_has_iomem() call sites for where it would break
otherwise.
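
A standalone model of the resulting invariant (the FLAG_* values and the
helper below are invented for this sketch):

#include <assert.h>
#include <stdbool.h>

#define FLAG_STRUCT_PAGE (1u << 0)
#define FLAG_IOMEM       (1u << 1)

/* model: stolen objects end up with neither mapping flag set */
static unsigned int mem_flags_after_move(bool is_stolen, bool maps_iomem)
{
        unsigned int flags = 0;

        if (!is_stolen)
                flags |= maps_iomem ? FLAG_IOMEM : FLAG_STRUCT_PAGE;
        return flags;
}

int main(void)
{
        /* stolen: no struct pages, and not CPU-mappable iomem either */
        assert(mem_flags_after_move(true, true) == 0);
        /* ordinary lmem buffer is iomem, smem buffer is page backed */
        assert(mem_flags_after_move(false, true) == FLAG_IOMEM);
        assert(mem_flags_after_move(false, false) == FLAG_STRUCT_PAGE);
        return 0;
}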
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 675e9ab30396..81c67ca9edda 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -14,6 +14,7 @@
 #include "gem/i915_gem_region.h"
 #include "gem/i915_gem_ttm.h"
 #include "gem/i915_gem_ttm_move.h"
+#include "gem/i915_gem_stolen.h"
 
 #include "gt/intel_engine_pm.h"
 #include "gt/intel_gt.h"
@@ -124,8 +125,9 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
 
 	obj->mem_flags &= ~(I915_BO_FLAG_STRUCT_PAGE | I915_BO_FLAG_IOMEM);
 
-	obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ? I915_BO_FLAG_IOMEM :
-		I915_BO_FLAG_STRUCT_PAGE;
+	if (!i915_gem_object_is_stolen(obj))
+		obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ? I915_BO_FLAG_IOMEM :
+			I915_BO_FLAG_STRUCT_PAGE;
 
 	if (!obj->ttm.cache_level_override) {
 		cache_level = i915_ttm_cache_level(to_i915(bo->base.dev),

From patchwork Tue Jun 21 20:00:55 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:55 +0000
Message-Id: <20220621200058.3536182-8-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 07/10] drm/i915: ttm move/clear logic fix

ttm managed buffers start off with system resource definitions and ttm_tt
tracking structures allocated (though unpopulated). Currently this
prevents buffers from being cleared on their first move to the desired
placement. The desired behaviour is to clear user allocated buffers, and
only those kernel buffers that specifically request it. Make the logic
match the desired behaviour.
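
The desired predicate can be modelled in isolation. A standalone sketch
(will_clear() and its boolean parameters are stand-ins for the driver's
checks):

#include <assert.h>
#include <stdbool.h>

/* model: a clearing blit is issued only for moves of unpopulated buffers
 * that are user allocations or explicitly request a zeroed allocation,
 * and never into stolen (whose contents must be preserved) */
static bool will_clear(bool dst_is_stolen, bool alloc_user, bool zero_alloc,
                       bool src_unpopulated)
{
        if (!src_unpopulated)
                return false;   /* populated source: copy, don't clear */
        if (dst_is_stolen)
                return false;
        return alloc_user || zero_alloc;
}

int main(void)
{
        assert(will_clear(false, true, false, true));   /* fresh user buffer */
        assert(will_clear(false, false, true, true));   /* kernel, asked for it */
        assert(!will_clear(false, false, false, true)); /* kernel, didn't ask */
        assert(!will_clear(true, true, false, true));   /* stolen: never */
        return 0;
}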
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 22 +++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 81c67ca9edda..a3f8fc056dbc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -3,6 +3,7 @@
  * Copyright © 2021 Intel Corporation
  */
 
+#include "drm/ttm/ttm_tt.h"
 #include <drm/ttm/ttm_bo_driver.h>
 
 #include "i915_deps.h"
@@ -476,6 +477,25 @@ __i915_ttm_move(struct ttm_buffer_object *bo,
 	return fence;
 }
 
+static bool
+allow_clear(struct drm_i915_gem_object *obj, struct ttm_tt *ttm, struct ttm_resource *dst_mem)
+{
+	/* never clear stolen */
+	if (dst_mem->mem_type == I915_PL_STOLEN)
+		return false;
+
+	/*
+	 * we want to clear user buffers and any kernel buffers
+	 * that specifically request clearing.
+	 */
+	if (obj->flags & I915_BO_ALLOC_USER)
+		return true;
+
+	if (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC)
+		return true;
+
+	return false;
+}
+
 /**
  * i915_ttm_move - The TTM move callback used by i915.
  * @bo: The buffer object.
@@ -526,7 +546,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
 		return PTR_ERR(dst_rsgt);
 
 	clear = !i915_ttm_cpu_maps_iomem(bo->resource) && (!ttm || !ttm_tt_is_populated(ttm));
-	if (!(clear && ttm && !(ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC))) {
+	if (!clear || allow_clear(obj, ttm, dst_mem)) {
 		struct i915_deps deps;
 
 		i915_deps_init(&deps, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);

From patchwork Tue Jun 21 20:00:56 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:56 +0000
Message-Id: <20220621200058.3536182-9-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 08/10] drm/i915: allow memory region creators to alloc and free the region

Add callbacks for alloc and free. This allows region creators to allocate
any extra storage they may require.
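
A standalone sketch of the pattern these callbacks enable, where a creator
embeds the generic region in a larger private struct and recovers it with
container_of() (the struct names below are invented for the sketch; region
stands in for intel_memory_region):

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct region {                 /* stand-in for intel_memory_region */
        int id;
};

struct stolen_region {          /* creator-private wrapper */
        struct region region;
        void *extra_state;
};

/* the alloc hook lets the creator control the containing allocation */
static struct region *stolen_alloc(void)
{
        struct stolen_region *s = calloc(1, sizeof(*s));

        return s ? &s->region : NULL;
}

/* the free hook releases the containing allocation, not just the region */
static void stolen_free(struct region *r)
{
        free(container_of(r, struct stolen_region, region));
}

int main(void)
{
        struct region *r = stolen_alloc();

        assert(r);
        container_of(r, struct stolen_region, region)->extra_state = NULL;
        stolen_free(r);
        return 0;
}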
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/intel_memory_region.c | 16 +++++++++++++---
 drivers/gpu/drm/i915/intel_memory_region.h |  2 ++
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index e38d2db1c3e3..3da07a712f90 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -231,7 +231,10 @@ intel_memory_region_create(struct drm_i915_private *i915,
 	struct intel_memory_region *mem;
 	int err;
 
-	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+	if (ops->alloc)
+		mem = ops->alloc();
+	else
+		mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 	if (!mem)
 		return ERR_PTR(-ENOMEM);
 
@@ -265,7 +268,10 @@ intel_memory_region_create(struct drm_i915_private *i915,
 	if (mem->ops->release)
 		mem->ops->release(mem);
 err_free:
-	kfree(mem);
+	if (mem->ops->free)
+		mem->ops->free(mem);
+	else
+		kfree(mem);
 	return ERR_PTR(err);
 }
 
@@ -288,7 +294,11 @@ void intel_memory_region_destroy(struct intel_memory_region *mem)
 	GEM_WARN_ON(!list_empty_careful(&mem->objects.list));
 	mutex_destroy(&mem->objects.lock);
 
-	if (!ret)
+	if (ret)
+		return;
+	if (mem->ops->free)
+		mem->ops->free(mem);
+	else
 		kfree(mem);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 3d8378c1b447..048955b5429f 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -61,6 +61,8 @@ struct intel_memory_region_ops {
 			   resource_size_t size,
 			   resource_size_t page_size,
 			   unsigned int flags);
+	struct intel_memory_region *(*alloc)(void);
+	void (*free)(struct intel_memory_region *mem);
 };
 
 struct intel_memory_region {

From patchwork Tue Jun 21 20:00:57 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:57 +0000
Message-Id: <20220621200058.3536182-10-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 09/10] drm/i915/ttm: add buffer pin on alloc flag

For situations where allocations need to fail on alloc instead of at
delayed get_pages time, add a new alloc flag that pins the ttm bo. This
ensures that the resource is allocated during buffer creation, allowing
creation to fail with an error if the placement is exhausted. This lets
existing fallback options for stolen backend allocation, like
create_ring_vma, work as expected.
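
A standalone model of the changed failure timing (create_object() and its
bookkeeping are invented for this sketch):

#include <assert.h>
#include <stdbool.h>

/* model: with the pin flag the region is claimed at creation, so
 * exhaustion is reported immediately instead of at first use */
static int create_object(bool pinned, long *region_free, long size,
                         bool *backed)
{
        *backed = false;
        if (!pinned)
                return 0;               /* deferred: may still fail later */
        if (*region_free < size)
                return -1;              /* fail now, caller can fall back */
        *region_free -= size;
        *backed = true;
        return 0;
}

int main(void)
{
        long free_space = 4096;
        bool backed;

        assert(create_object(true, &free_space, 4096, &backed) == 0 && backed);
        assert(create_object(true, &free_space, 4096, &backed) == -1);
        assert(create_object(false, &free_space, 4096, &backed) == 0 && !backed);
        return 0;
}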
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 .../gpu/drm/i915/gem/i915_gem_object_types.h | 13 ++++++----
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      | 25 ++++++++++++++++++-
 2 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 6632ed52e919..07bc11247a3e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -325,17 +325,20 @@ struct drm_i915_gem_object {
  * dealing with userspace objects the CPU fault handler is free to ignore this.
  */
 #define I915_BO_ALLOC_GPU_ONLY	  BIT(6)
+/* object should be pinned in destination region from allocation */
+#define I915_BO_ALLOC_PINNED	  BIT(7)
 #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
 			     I915_BO_ALLOC_VOLATILE | \
 			     I915_BO_ALLOC_CPU_CLEAR | \
 			     I915_BO_ALLOC_USER | \
 			     I915_BO_ALLOC_PM_VOLATILE | \
 			     I915_BO_ALLOC_PM_EARLY | \
-			     I915_BO_ALLOC_GPU_ONLY)
-#define I915_BO_READONLY          BIT(7)
-#define I915_TILING_QUIRK_BIT     8 /* unknown swizzling; do not release! */
-#define I915_BO_PROTECTED         BIT(9)
-#define I915_BO_WAS_BOUND_BIT     10
+			     I915_BO_ALLOC_GPU_ONLY | \
+			     I915_BO_ALLOC_PINNED)
+#define I915_BO_READONLY          BIT(8)
+#define I915_TILING_QUIRK_BIT     9 /* unknown swizzling; do not release! */
+#define I915_BO_PROTECTED         BIT(10)
+#define I915_BO_WAS_BOUND_BIT     11
 
 /**
  * @mem_flags - Mutable placement-related flags
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 27d59639177f..bb988608296d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -998,6 +998,13 @@ static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!obj->ttm.created);
 
+	/* stolen objects are pinned for lifetime. Unpin before putting */
+	if (obj->flags & I915_BO_ALLOC_PINNED) {
+		ttm_bo_reserve(i915_gem_to_ttm(obj), true, false, NULL);
+		ttm_bo_unpin(i915_gem_to_ttm(obj));
+		ttm_bo_unreserve(i915_gem_to_ttm(obj));
+	}
+
 	ttm_bo_put(i915_gem_to_ttm(obj));
 }
 
@@ -1193,6 +1200,9 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 		.no_wait_gpu = false,
 	};
 	enum ttm_bo_type bo_type;
+	struct ttm_place _place;
+	struct ttm_placement _placement;
+	struct ttm_placement *placement;
 	int ret;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
@@ -1222,6 +1232,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	 */
 	i915_gem_object_make_unshrinkable(obj);
 
+	if (obj->flags & I915_BO_ALLOC_PINNED) {
+		i915_ttm_place_from_region(mem, &_place, obj->bo_offset,
+					   obj->base.size, obj->flags);
+		_placement.num_placement = 1;
+		_placement.placement = &_place;
+		_placement.num_busy_placement = 0;
+		_placement.busy_placement = NULL;
+		placement = &_placement;
+	} else {
+		placement = &i915_sys_placement;
+	}
 	/*
 	 * If this function fails, it will call the destructor, but
 	 * our caller still owns the object. So no freeing in the
	 * destructor until obj->ttm.created is true.
	 * Similarly, in delayed_destroy, we can't call ttm_bo_put()
 	 * until successful initialization.
 	 */
 	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
-				   bo_type, &i915_sys_placement,
+				   bo_type, placement,
 				   page_size >> PAGE_SHIFT,
 				   &ctx, NULL, NULL, i915_ttm_bo_destroy);
 	if (ret)
@@ -1242,6 +1263,8 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	i915_ttm_adjust_domains_after_move(obj);
 	i915_ttm_adjust_gem_after_move(obj);
 	obj->ttm.cache_level_override = false;
+	if (obj->flags & I915_BO_ALLOC_PINNED)
+		ttm_bo_pin(i915_gem_to_ttm(obj));
 	i915_gem_object_unlock(obj);
 
 	return 0;

From patchwork Tue Jun 21 20:00:58 2022
From: Robert Beckett <bob.beckett@collabora.com>
Date: Tue, 21 Jun 2022 20:00:58 +0000
Message-Id: <20220621200058.3536182-11-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v8 10/10] drm/i915: stolen memory use ttm backend

Refactor the stolen memory region to use ttm. This necessitates using ttm
resources to track reserved stolen regions instead of drm_mm_nodes.
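
A standalone model of the reservation helpers' unit handling that the FBC
code below relies on (struct resv and the helper names are stand-ins for
ttm_resource and the new i915_gem_stolen_reserve_*() helpers):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define INVALID_OFFSET UINT64_MAX
#define PAGE_SHIFT 12

struct resv {                   /* stand-in for ttm_resource */
        uint64_t start_pfn;
        uint64_t num_pages;
};

/* offsets and sizes come back in bytes, converted from page units */
static uint64_t resv_offset(const struct resv *r)
{
        return r ? r->start_pfn << PAGE_SHIFT : INVALID_OFFSET;
}

static uint64_t resv_size(const struct resv *r)
{
        return r ? r->num_pages << PAGE_SHIFT : INVALID_OFFSET;
}

int main(void)
{
        struct resv r = { .start_pfn = 2, .num_pages = 8 };

        assert(resv_offset(&r) == 8192);
        assert(resv_size(&r) == 32768);
        assert(resv_offset(NULL) == INVALID_OFFSET); /* error sentinel */
        return 0;
}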
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/display/intel_fbc.c      |  78 ++--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   2 -
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c    | 441 +++++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_stolen.h    |  21 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h       |   7 +
 drivers/gpu/drm/i915/gt/intel_rc6.c           |   4 +-
 drivers/gpu/drm/i915/gt/selftest_reset.c      |  16 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |   7 +-
 drivers/gpu/drm/i915/i915_drv.h               |   5 -
 drivers/gpu/drm/i915/intel_region_ttm.c       |  42 +-
 drivers/gpu/drm/i915/intel_region_ttm.h       |   8 +-
 drivers/gpu/drm/i915/selftests/mock_region.c  |  12 +-
 13 files changed, 304 insertions(+), 342 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
index 8b807284cde1..6f3afac5e8c9 100644
--- a/drivers/gpu/drm/i915/display/intel_fbc.c
+++ b/drivers/gpu/drm/i915/display/intel_fbc.c
@@ -38,6 +38,7 @@
  * forcibly disable it to allow proper screen updates.
  */
 
+#include "gem/i915_gem_stolen.h"
 #include <linux/string_helpers.h>
 
 #include <drm/drm_fourcc.h>
@@ -51,6 +52,7 @@
 #include "intel_display_types.h"
 #include "intel_fbc.h"
 #include "intel_frontbuffer.h"
+#include "gem/i915_gem_region.h"
 
 #define for_each_fbc_id(__dev_priv, __fbc_id) \
 	for ((__fbc_id) = INTEL_FBC_A; (__fbc_id) < I915_MAX_FBCS; (__fbc_id)++) \
@@ -92,8 +94,8 @@ struct intel_fbc {
 	struct mutex lock;
 	unsigned int busy_bits;
 
-	struct drm_mm_node compressed_fb;
-	struct drm_mm_node compressed_llb;
+	struct ttm_resource *compressed_fb;
+	struct ttm_resource *compressed_llb;
 
 	enum intel_fbc_id id;
@@ -331,16 +333,20 @@ static void i8xx_fbc_nuke(struct intel_fbc *fbc)
 static void i8xx_fbc_program_cfb(struct intel_fbc *fbc)
 {
 	struct drm_i915_private *i915 = fbc->i915;
+	u64 fb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_fb);
+	u64 llb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_llb);
 
+	GEM_BUG_ON(fb_offset == I915_BO_INVALID_OFFSET);
+	GEM_BUG_ON(llb_offset == I915_BO_INVALID_OFFSET);
 	GEM_BUG_ON(range_overflows_end_t(u64, i915->dsm.start,
-					 fbc->compressed_fb.start, U32_MAX));
+					 fb_offset, U32_MAX));
 	GEM_BUG_ON(range_overflows_end_t(u64, i915->dsm.start,
-					 fbc->compressed_llb.start, U32_MAX));
+					 llb_offset, U32_MAX));
 	intel_de_write(i915, FBC_CFB_BASE,
-		       i915->dsm.start + fbc->compressed_fb.start);
+		       i915->dsm.start + fb_offset);
 	intel_de_write(i915, FBC_LL_BASE,
-		       i915->dsm.start + fbc->compressed_llb.start);
+		       i915->dsm.start + llb_offset);
 }
 
 static const struct intel_fbc_funcs i8xx_fbc_funcs = {
@@ -448,8 +454,10 @@ static bool g4x_fbc_is_compressing(struct intel_fbc *fbc)
 static void g4x_fbc_program_cfb(struct intel_fbc *fbc)
 {
 	struct drm_i915_private *i915 = fbc->i915;
+	u64 fb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_fb);
 
-	intel_de_write(i915, DPFC_CB_BASE, fbc->compressed_fb.start);
+	GEM_BUG_ON(fb_offset == I915_BO_INVALID_OFFSET);
+	intel_de_write(i915, DPFC_CB_BASE, fb_offset);
 }
 
 static const struct intel_fbc_funcs g4x_fbc_funcs = {
@@ -499,8 +507,10 @@ static bool ilk_fbc_is_compressing(struct intel_fbc *fbc)
 static void ilk_fbc_program_cfb(struct intel_fbc *fbc)
 {
 	struct drm_i915_private *i915 = fbc->i915;
+	u64 fb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_fb);
 
-	intel_de_write(i915, ILK_DPFC_CB_BASE(fbc->id), fbc->compressed_fb.start);
+	GEM_BUG_ON(fb_offset == I915_BO_INVALID_OFFSET);
+	intel_de_write(i915, ILK_DPFC_CB_BASE(fbc->id), fb_offset);
 }
 
 static const struct intel_fbc_funcs ilk_fbc_funcs = {
@@ -744,21 +754,24 @@ static int find_compression_limit(struct intel_fbc *fbc,
 {
 	struct drm_i915_private *i915 = fbc->i915;
 	u64 end = intel_fbc_stolen_end(i915);
-	int ret, limit = min_limit;
+	int limit = min_limit;
+	struct ttm_resource *res;
 
 	size /= limit;
 
 	/* Try to over-allocate to reduce reallocations and fragmentation. */
-	ret = i915_gem_stolen_insert_node_in_range(i915, &fbc->compressed_fb,
-						   size <<= 1, 4096, 0, end);
-	if (ret == 0)
+	res = i915_gem_stolen_reserve_range(i915, size <<= 1, 0, end);
+	if (!IS_ERR(res)) {
+		fbc->compressed_fb = res;
 		return limit;
+	}
 
 	for (; limit <= intel_fbc_max_limit(i915); limit <<= 1) {
-		ret = i915_gem_stolen_insert_node_in_range(i915, &fbc->compressed_fb,
-							   size >>= 1, 4096, 0, end);
-		if (ret == 0)
+		res = i915_gem_stolen_reserve_range(i915, size >>= 1, 0, end);
+		if (!IS_ERR(res)) {
+			fbc->compressed_fb = res;
 			return limit;
+		}
 	}
 
 	return 0;
@@ -769,17 +782,18 @@ static int intel_fbc_alloc_cfb(struct intel_fbc *fbc,
 {
 	struct drm_i915_private *i915 = fbc->i915;
 	int ret;
+	struct ttm_resource *res;
 
-	drm_WARN_ON(&i915->drm,
-		    drm_mm_node_allocated(&fbc->compressed_fb));
-	drm_WARN_ON(&i915->drm,
-		    drm_mm_node_allocated(&fbc->compressed_llb));
+	drm_WARN_ON(&i915->drm, fbc->compressed_fb);
+	drm_WARN_ON(&i915->drm, fbc->compressed_llb);
 
 	if (DISPLAY_VER(i915) < 5 && !IS_G4X(i915)) {
-		ret = i915_gem_stolen_insert_node(i915, &fbc->compressed_llb,
-						  4096, 4096);
-		if (ret)
+		res = i915_gem_stolen_reserve_range(i915, 4096, I915_GEM_STOLEN_BIAS, 0);
+		if (IS_ERR(res)) {
+			ret = PTR_ERR(res);
 			goto err;
+		}
+		fbc->compressed_llb = res;
 	}
 
 	ret = find_compression_limit(fbc, size, min_limit);
@@ -793,15 +807,14 @@ static int intel_fbc_alloc_cfb(struct intel_fbc *fbc,
 
 	drm_dbg_kms(&i915->drm,
 		    "reserved %llu bytes of contiguous stolen space for FBC, limit: %d\n",
-		    fbc->compressed_fb.size, fbc->limit);
+		    i915_gem_stolen_reserve_size(fbc->compressed_fb), fbc->limit);
 	return 0;
 
 err_llb:
-	if (drm_mm_node_allocated(&fbc->compressed_llb))
-		i915_gem_stolen_remove_node(i915, &fbc->compressed_llb);
+	i915_gem_stolen_release_range(i915, fetch_and_zero(&fbc->compressed_llb));
 err:
-	if (drm_mm_initialized(&i915->mm.stolen))
+	if (IS_ERR(res) && (PTR_ERR(res) == -ENOMEM || PTR_ERR(res) == -ENXIO))
 		drm_info_once(&i915->drm,
			      "not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n",
			      size);
 	return -ENOSPC;
 }
@@ -826,10 +839,10 @@ static void __intel_fbc_cleanup_cfb(struct intel_fbc *fbc)
 	if (WARN_ON(intel_fbc_hw_is_active(fbc)))
 		return;
 
-	if (drm_mm_node_allocated(&fbc->compressed_llb))
-		i915_gem_stolen_remove_node(i915, &fbc->compressed_llb);
-	if (drm_mm_node_allocated(&fbc->compressed_fb))
-		i915_gem_stolen_remove_node(i915, &fbc->compressed_fb);
+	if (fbc->compressed_llb)
+		i915_gem_stolen_release_range(i915, fetch_and_zero(&fbc->compressed_llb));
+	if (fbc->compressed_fb)
+		i915_gem_stolen_release_range(i915, fetch_and_zero(&fbc->compressed_fb));
 }
 
 void intel_fbc_cleanup(struct drm_i915_private *i915)
@@ -1034,9 +1047,10 @@ static bool intel_fbc_is_cfb_ok(const struct intel_plane_state *plane_state)
 {
 	struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
 	struct intel_fbc *fbc = plane->fbc;
+	u64 fb_size = i915_gem_stolen_reserve_size(fbc->compressed_fb);
 
 	return intel_fbc_min_limit(plane_state) <= fbc->limit &&
-		intel_fbc_cfb_size(plane_state) <= fbc->compressed_fb.size * fbc->limit;
+		intel_fbc_cfb_size(plane_state) <= fb_size * fbc->limit;
 }
 
 static bool intel_fbc_is_ok(const struct intel_plane_state *plane_state)
@@ -1702,7 +1716,7 @@ void intel_fbc_init(struct drm_i915_private *i915)
 {
 	enum intel_fbc_id fbc_id;
 
-	if (!drm_mm_initialized(&i915->mm.stolen))
+	if (!i915->mm.stolen_region)
 		mkwrite_device_info(i915)->display.fbc_mask = 0;
 
 	if (need_fbc_vtd_wa(i915))
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 07bc11247a3e..13466cca7af8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -633,8 +633,6 @@ struct drm_i915_gem_object {
 	} userptr;
 #endif
 
-	struct drm_mm_node *stolen;
-
 	resource_size_t bo_offset;
 
 	unsigned long scratch;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 47b5e0e342ab..7ab0977f97e4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -4,75 +4,111 @@
  * Copyright © 2008-2012 Intel Corporation
  */
 
+#include "drm/ttm/ttm_placement.h"
+#include "gem/i915_gem_object_types.h"
 #include <linux/errno.h>
 #include <linux/mutex.h>
 
-#include <drm/drm_mm.h>
 #include <drm/i915_drm.h>
 
 #include "gem/i915_gem_lmem.h"
 #include "gem/i915_gem_region.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_region_lmem.h"
+#include "gem/i915_gem_ttm.h"
 #include "i915_drv.h"
 #include "i915_gem_stolen.h"
 #include "i915_reg.h"
 #include "i915_utils.h"
 #include "i915_vgpu.h"
 #include "intel_mchbar_regs.h"
+#include "intel_region_ttm.h"
 
-/*
- * The BIOS typically reserves some of the system's memory for the exclusive
- * use of the integrated graphics. This memory is no longer available for
- * use by the OS and so the user finds that his system has less memory
- * available than he put in. We refer to this memory as stolen.
- *
- * The BIOS will allocate its framebuffer from the stolen memory. Our
- * goal is try to reuse that object for our own fbcon which must always
- * be available for panics. Anything else we can reuse the stolen memory
- * for is a boon.
- */
+struct stolen_region {
+	struct intel_memory_region region;
+	struct ttm_resource *wa_resvd;
+};
 
-int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *i915,
-					 struct drm_mm_node *node, u64 size,
-					 unsigned alignment, u64 start, u64 end)
+inline struct stolen_region *to_stolen_region(struct intel_memory_region *mem)
 {
-	int ret;
-
-	if (!drm_mm_initialized(&i915->mm.stolen))
-		return -ENODEV;
+	return container_of(mem, struct stolen_region, region);
+}
 
-	/* WaSkipStolenMemoryFirstPage:bdw+ */
-	if (GRAPHICS_VER(i915) >= 8 && start < 4096)
-		start = 4096;
+/**
+ * i915_gem_stolen_reserve_range - reserve a region of space in a given range
+ * @i915: i915 device instance
+ * @size: size of region to reserve
+ * @start: start of search area
+ * @end: end of search area
+ *
+ * Search for @size amount of free space within the region delimited by
+ * @start and @end. If found, reserve it from future use until later release
+ * with i915_gem_stolen_release_range().
+ *
+ * Return: pointer to resource tracking structure on success, ERR_PTR otherwise
+ */
+struct ttm_resource *
+i915_gem_stolen_reserve_range(struct drm_i915_private *i915,
+			      resource_size_t size,
+			      u64 start, u64 end)
+{
+	struct intel_memory_region *mem = i915->mm.stolen_region;
 
-	mutex_lock(&i915->mm.stolen_lock);
-	ret = drm_mm_insert_node_in_range(&i915->mm.stolen, node,
-					  size, alignment, 0,
-					  start, end, DRM_MM_INSERT_BEST);
-	mutex_unlock(&i915->mm.stolen_lock);
+	if (!mem)
+		return ERR_PTR(-ENODEV);
+	return intel_region_ttm_resource_alloc(mem, size, start, end, I915_BO_ALLOC_CONTIGUOUS);
+}
 
-	return ret;
+/**
+ * i915_gem_stolen_reserve_offset - return the offset of the reserved space
+ * @res: pointer to resource tracking structure to check
+ *
+ * Return: The offset of the reserved resource, or I915_BO_INVALID_OFFSET on error
+ */
+u64 i915_gem_stolen_reserve_offset(struct ttm_resource *res)
+{
+	if (!res)
+		return I915_BO_INVALID_OFFSET;
+	return PFN_PHYS(res->start);
 }
 
-int i915_gem_stolen_insert_node(struct drm_i915_private *i915,
-				struct drm_mm_node *node, u64 size,
-				unsigned alignment)
+/**
+ * i915_gem_stolen_reserve_size - return the size of the reserved space
+ * @res: pointer to resource tracking structure to check
+ *
+ * Return: The size of the reserved resource, or I915_BO_INVALID_OFFSET on error
+ */
+u64 i915_gem_stolen_reserve_size(struct ttm_resource *res)
 {
-	return i915_gem_stolen_insert_node_in_range(i915, node,
-						    size, alignment,
-						    I915_GEM_STOLEN_BIAS,
-						    U64_MAX);
+	if (!res)
+		return I915_BO_INVALID_OFFSET;
+	return PFN_PHYS(res->num_pages);
 }
 
-void i915_gem_stolen_remove_node(struct drm_i915_private *i915,
-				 struct drm_mm_node *node)
+/**
+ * i915_gem_stolen_release_range - release the reserved area to be free for allocation again
+ * @i915: i915 device instance
+ * @res: pointer to resource tracking structure allocated via i915_gem_stolen_reserve_range()
+ */
+void i915_gem_stolen_release_range(struct drm_i915_private *i915,
+				   struct ttm_resource *res)
 {
-	mutex_lock(&i915->mm.stolen_lock);
-	drm_mm_remove_node(node);
-	mutex_unlock(&i915->mm.stolen_lock);
+	struct intel_memory_region *mem = i915->mm.stolen_region;
+
+	intel_region_ttm_resource_free(mem, res);
 }
 
+/*
+ * The BIOS typically reserves some of the system's memory for the exclusive
+ * use of the integrated graphics. This memory is no longer available for
+ * use by the OS and so the user finds that his system has less memory
+ * available than he put in. We refer to this memory as stolen.
+ * + * The BIOS will allocate its framebuffer from the stolen memory. Our + * goal is to try to reuse that object for our own fbcon which must always + * be available for panics. Anything else we can reuse the stolen memory + * for is a boon. + */ + static int i915_adjust_stolen(struct drm_i915_private *i915, struct resource *dsm) { @@ -173,14 +209,6 @@ static int i915_adjust_stolen(struct drm_i915_private *i915, return 0; } -static void i915_gem_cleanup_stolen(struct drm_i915_private *i915) -{ - if (!drm_mm_initialized(&i915->mm.stolen)) - return; - - drm_mm_takedown(&i915->mm.stolen); -} - static void g4x_get_stolen_reserved(struct drm_i915_private *i915, struct intel_uncore *uncore, resource_size_t *base, @@ -394,8 +422,9 @@ static int i915_gem_init_stolen(struct intel_memory_region *mem) struct intel_uncore *uncore = &i915->uncore; resource_size_t reserved_base, stolen_top; resource_size_t reserved_total, reserved_size; - - mutex_init(&i915->mm.stolen_lock); + struct stolen_region *region = to_stolen_region(mem); + struct ttm_resource *resvd; + int ret; if (intel_vgpu_active(i915)) { drm_notice(&i915->drm, @@ -513,258 +542,99 @@ static int i915_gem_init_stolen(struct intel_memory_region *mem) return 0; /* Basic memrange allocator for stolen space. */ - drm_mm_init(&i915->mm.stolen, 0, i915->stolen_usable_size); - - return 0; -} - -static void dbg_poison(struct i915_ggtt *ggtt, - dma_addr_t addr, resource_size_t size, - u8 x) -{ -#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) - if (!drm_mm_node_allocated(&ggtt->error_capture)) - return; - - if (ggtt->vm.bind_async_flags & I915_VMA_GLOBAL_BIND) - return; /* beware stop_machine() inversion */ - - GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE)); - - mutex_lock(&ggtt->error_mutex); - while (size) { - void __iomem *s; - - ggtt->vm.insert_page(&ggtt->vm, addr, - ggtt->error_capture.start, - I915_CACHE_NONE, 0); - mb(); - - s = io_mapping_map_wc(&ggtt->iomap, - ggtt->error_capture.start, - PAGE_SIZE); - memset_io(s, x, PAGE_SIZE); - io_mapping_unmap(s); - - addr += PAGE_SIZE; - size -= PAGE_SIZE; - } - mb(); - ggtt->vm.clear_range(&ggtt->vm, ggtt->error_capture.start, PAGE_SIZE); - mutex_unlock(&ggtt->error_mutex); -#endif -} - -static struct sg_table * -i915_pages_create_for_stolen(struct drm_device *dev, - resource_size_t offset, resource_size_t size) -{ - struct drm_i915_private *i915 = to_i915(dev); - struct sg_table *st; - struct scatterlist *sg; - - GEM_BUG_ON(range_overflows(offset, size, resource_size(&i915->dsm))); + ret = intel_region_ttm_init(mem); + if (ret) + return ret; - /* We hide that we have no struct page backing our stolen object - * by wrapping the contiguous physical allocation with a fake - * dma mapping in a single scatterlist. + /* + * Reserve the bias area. Nothing should be allocating that region.
+ * Also covers WaSkipStolenMemoryFirstPage:bdw+ */ - - st = kmalloc(sizeof(*st), GFP_KERNEL); - if (st == NULL) - return ERR_PTR(-ENOMEM); - - if (sg_alloc_table(st, 1, GFP_KERNEL)) { - kfree(st); - return ERR_PTR(-ENOMEM); + resvd = intel_region_ttm_resource_alloc(mem, I915_GEM_STOLEN_BIAS, + 0, I915_GEM_STOLEN_BIAS, 0); + if (IS_ERR(resvd)) { + ret = PTR_ERR(resvd); + goto err_no_reserve; } - sg = st->sgl; - sg->offset = 0; - sg->length = size; + region->wa_resvd = resvd; - sg_dma_address(sg) = (dma_addr_t)i915->dsm.start + offset; - sg_dma_len(sg) = size; + return ret; + +err_no_reserve: + intel_region_ttm_fini(mem); - return st; + return ret; } -static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj) +struct drm_i915_gem_object * +i915_gem_object_create_stolen(struct drm_i915_private *i915, + resource_size_t size) { - struct drm_i915_private *i915 = to_i915(obj->base.dev); - struct sg_table *pages = - i915_pages_create_for_stolen(obj->base.dev, - obj->stolen->start, - obj->stolen->size); - if (IS_ERR(pages)) - return PTR_ERR(pages); - - dbg_poison(to_gt(i915)->ggtt, - sg_dma_address(pages->sgl), - sg_dma_len(pages->sgl), - POISON_INUSE); - - __i915_gem_object_set_pages(obj, pages, obj->stolen->size); - - return 0; + return i915_gem_object_create_region(i915->mm.stolen_region, size, 0, + I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_PINNED); } -static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj, - struct sg_table *pages) +static struct intel_memory_region *alloc_stolen_region(void) { - struct drm_i915_private *i915 = to_i915(obj->base.dev); - /* Should only be called from i915_gem_object_release_stolen() */ + struct stolen_region *region = kzalloc(sizeof(*region), GFP_KERNEL); - dbg_poison(to_gt(i915)->ggtt, - sg_dma_address(pages->sgl), - sg_dma_len(pages->sgl), - POISON_FREE); + if (!region) + return NULL; - sg_free_table(pages); - kfree(pages); + return &region->region; } -static void -i915_gem_object_release_stolen(struct drm_i915_gem_object *obj) +static void free_stolen_region(struct intel_memory_region *mem) { - struct drm_i915_private *i915 = to_i915(obj->base.dev); - struct drm_mm_node *stolen = fetch_and_zero(&obj->stolen); - - GEM_BUG_ON(!stolen); - i915_gem_stolen_remove_node(i915, stolen); - kfree(stolen); + struct stolen_region *region = to_stolen_region(mem); - i915_gem_object_release_memory_region(obj); + kfree(region); } -static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = { - .name = "i915_gem_object_stolen", - .get_pages = i915_gem_object_get_pages_stolen, - .put_pages = i915_gem_object_put_pages_stolen, - .release = i915_gem_object_release_stolen, -}; - -static int __i915_gem_object_create_stolen(struct intel_memory_region *mem, - struct drm_i915_gem_object *obj, - struct drm_mm_node *stolen) +static int init_stolen_smem(struct intel_memory_region *mem) { - static struct lock_class_key lock_class; - unsigned int cache_level; - unsigned int flags; - int err; - /* - * Stolen objects are always physically contiguous since we just - * allocate one big block underneath using the drm_mm range allocator. + * Initialise stolen early so that we may reserve preallocated + * objects for the BIOS to KMS transition.
*/ - flags = I915_BO_ALLOC_CONTIGUOUS; - - drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size); - i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, flags); - - obj->stolen = stolen; - obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; - cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE; - i915_gem_object_set_cache_coherency(obj, cache_level); - - if (WARN_ON(!i915_gem_object_trylock(obj, NULL))) - return -EBUSY; + return i915_gem_init_stolen(mem); +} - i915_gem_object_init_memory_region(obj, mem); +static int release_stolen(struct intel_memory_region *mem) +{ + struct stolen_region *region = to_stolen_region(mem); - err = i915_gem_object_pin_pages(obj); - if (err) - i915_gem_object_release_memory_region(obj); - i915_gem_object_unlock(obj); + if (region->wa_resvd) + intel_region_ttm_resource_free(mem, region->wa_resvd); - return err; + return intel_region_ttm_fini(mem); } -static int _i915_gem_object_stolen_init(struct intel_memory_region *mem, - struct drm_i915_gem_object *obj, - resource_size_t offset, - resource_size_t size, - resource_size_t page_size, - unsigned int flags) +static int stolen_object_init(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t offset, + resource_size_t size, + resource_size_t page_size, + unsigned int flags) { - struct drm_i915_private *i915 = mem->i915; - struct drm_mm_node *stolen; - int ret; - - if (!drm_mm_initialized(&i915->mm.stolen)) + if (!mem->region_private) return -ENODEV; if (size == 0) return -EINVAL; - /* - * With discrete devices, where we lack a mappable aperture there is no - * possible way to ever access this memory on the CPU side. - */ - if (mem->type == INTEL_MEMORY_STOLEN_LOCAL && !mem->io_size && - !(flags & I915_BO_ALLOC_GPU_ONLY)) - return -ENOSPC; - - stolen = kzalloc(sizeof(*stolen), GFP_KERNEL); - if (!stolen) - return -ENOMEM; - - if (offset != I915_BO_INVALID_OFFSET) { - drm_dbg(&i915->drm, - "creating preallocated stolen object: stolen_offset=%pa, size=%pa\n", - &offset, &size); - - stolen->start = offset; - stolen->size = size; - mutex_lock(&i915->mm.stolen_lock); - ret = drm_mm_reserve_node(&i915->mm.stolen, stolen); - mutex_unlock(&i915->mm.stolen_lock); - } else { - ret = i915_gem_stolen_insert_node(i915, stolen, size, - mem->min_page_size); - } - if (ret) - goto err_free; - - ret = __i915_gem_object_create_stolen(mem, obj, stolen); - if (ret) - goto err_remove; - - return 0; - -err_remove: - i915_gem_stolen_remove_node(i915, stolen); -err_free: - kfree(stolen); - return ret; -} - -struct drm_i915_gem_object * -i915_gem_object_create_stolen(struct drm_i915_private *i915, - resource_size_t size) -{ - return i915_gem_object_create_region(i915->mm.stolen_region, size, 0, 0); -} - -static int init_stolen_smem(struct intel_memory_region *mem) -{ - /* - * Initialise stolen early so that we may reserve preallocated - * objects for the BIOS to KMS transition. 
- */ - return i915_gem_init_stolen(mem); -} - -static int release_stolen_smem(struct intel_memory_region *mem) -{ - i915_gem_cleanup_stolen(mem->i915); - return 0; + /* default range manager relies on page_size for alignment */ + page_size = max(page_size, mem->min_page_size); + return __i915_gem_ttm_object_init(mem, obj, offset, size, page_size, flags); } static const struct intel_memory_region_ops i915_region_stolen_smem_ops = { .init = init_stolen_smem, - .release = release_stolen_smem, - .init_object = _i915_gem_object_stolen_init, + .release = release_stolen, + .init_object = stolen_object_init, + .alloc = alloc_stolen_region, + .free = free_stolen_region, }; static int init_stolen_lmem(struct intel_memory_region *mem) @@ -793,7 +663,7 @@ static int init_stolen_lmem(struct intel_memory_region *mem) return 0; err_cleanup: - i915_gem_cleanup_stolen(mem->i915); + intel_region_ttm_fini(mem); return err; } @@ -801,14 +671,15 @@ static int release_stolen_lmem(struct intel_memory_region *mem) { if (mem->io_size) io_mapping_fini(&mem->iomap); - i915_gem_cleanup_stolen(mem->i915); - return 0; + return release_stolen(mem); } static const struct intel_memory_region_ops i915_region_stolen_lmem_ops = { .init = init_stolen_lmem, .release = release_stolen_lmem, - .init_object = _i915_gem_object_stolen_init, + .init_object = stolen_object_init, + .alloc = alloc_stolen_region, + .free = free_stolen_region, }; struct intel_memory_region * @@ -896,7 +767,37 @@ i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type, return mem; } -bool i915_gem_object_is_stolen(const struct drm_i915_gem_object *obj) +/** + * i915_gem_object_stolen_offset - return offset from start of stolen region + * @obj: the object to return the offset of + * + * Get the offset from the start of the stolen region if this object is currently placed in stolen memory.
+ * + * Return: offset from stolen if successful, I915_BO_INVALID_OFFSET otherwise + */ +u64 i915_gem_object_stolen_offset(struct drm_i915_gem_object *obj) { - return obj->ops == &i915_gem_object_stolen_ops; + struct ttm_buffer_object *ttm_obj; + + if (!obj || !i915_gem_object_is_stolen(obj)) + return I915_BO_INVALID_OFFSET; + + ttm_obj = i915_gem_to_ttm(obj); + if (ttm_obj->resource->mem_type != I915_PL_STOLEN) + return I915_BO_INVALID_OFFSET; + + return PFN_PHYS(ttm_obj->resource->start); +} + +bool i915_gem_object_is_stolen(struct drm_i915_gem_object *obj) +{ + struct intel_memory_region *mr = READ_ONCE(obj->mm.region); + +#ifdef CONFIG_LOCKDEP + if (i915_gem_object_migratable(obj) && + i915_gem_object_evictable(obj)) + assert_object_held(obj); +#endif + return mr && (mr->type == INTEL_MEMORY_STOLEN_SYSTEM || + mr->type == INTEL_MEMORY_STOLEN_LOCAL); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h index d5005a39d130..b39cb6e6c768 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h @@ -10,17 +10,9 @@ struct drm_i915_private; struct drm_mm_node; +struct ttm_resource; struct drm_i915_gem_object; -int i915_gem_stolen_insert_node(struct drm_i915_private *dev_priv, - struct drm_mm_node *node, u64 size, - unsigned alignment); -int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv, - struct drm_mm_node *node, u64 size, - unsigned alignment, u64 start, - u64 end); -void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv, - struct drm_mm_node *node); struct intel_memory_region * i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type, u16 instance); @@ -32,7 +24,16 @@ struct drm_i915_gem_object * i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, resource_size_t size); -bool i915_gem_object_is_stolen(const struct drm_i915_gem_object *obj); +u64 i915_gem_object_stolen_offset(struct drm_i915_gem_object *obj); +bool i915_gem_object_is_stolen(struct drm_i915_gem_object *obj); +struct ttm_resource * +i915_gem_stolen_reserve_range(struct drm_i915_private *i915, + resource_size_t size, + u64 start, u64 end); +u64 i915_gem_stolen_reserve_offset(struct ttm_resource *res); +u64 i915_gem_stolen_reserve_size(struct ttm_resource *res); +void i915_gem_stolen_release_range(struct drm_i915_private *i915, + struct ttm_resource *res); #define I915_GEM_STOLEN_BIAS SZ_128K diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index bb988608296d..c7611efddcf7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -1203,6 +1203,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, struct ttm_place _place; struct ttm_placement _placement; struct ttm_placement *placement; + bool is_stolen = intel_region_to_ttm_type(mem) == I915_PL_STOLEN; int ret; drm_gem_private_object_init(&i915->drm, &obj->base, size); @@ -1222,7 +1223,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, obj->base.vma_node.driver_private = i915_gem_to_ttm(obj); /* Forcing the page size is kernel internal only */ - GEM_BUG_ON(page_size && obj->mm.n_placements); + GEM_BUG_ON(page_size && obj->mm.n_placements && !is_stolen); /* * Keep an extra shrink pin to prevent the object from being made diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h index 73e371aa3850..81654a51df51 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h +++ 
b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h @@ -49,6 +49,13 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, resource_size_t size, resource_size_t page_size, unsigned int flags); +int i915_gem_ttm_object_init_in_place(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + resource_size_t page_size, + unsigned int flags, + u64 start, + u64 end); /* Internal I915 TTM declarations and definitions below. */ diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c index f8d0523f4c18..846eef898f7e 100644 --- a/drivers/gpu/drm/i915/gt/intel_rc6.c +++ b/drivers/gpu/drm/i915/gt/intel_rc6.c @@ -355,9 +355,9 @@ static int vlv_rc6_init(struct intel_rc6 *rc6) GEM_BUG_ON(range_overflows_end_t(u64, i915->dsm.start, - pctx->stolen->start, + i915_gem_object_stolen_offset(pctx), U32_MAX)); - pctx_paddr = i915->dsm.start + pctx->stolen->start; + pctx_paddr = i915->dsm.start + i915_gem_object_stolen_offset(pctx); intel_uncore_write(uncore, VLV_PCBR, pctx_paddr); out: diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c index 37c38bdd5f47..75bc7d90c9dc 100644 --- a/drivers/gpu/drm/i915/gt/selftest_reset.c +++ b/drivers/gpu/drm/i915/gt/selftest_reset.c @@ -6,6 +6,7 @@ #include #include "gem/i915_gem_stolen.h" +#include "intel_region_ttm.h" #include "i915_memcpy.h" #include "i915_selftest.h" @@ -83,6 +84,7 @@ __igt_reset_stolen(struct intel_gt *gt, dma_addr_t dma = (dma_addr_t)dsm->start + (page << PAGE_SHIFT); void __iomem *s; void *in; + bool busy; ggtt->vm.insert_page(&ggtt->vm, dma, ggtt->error_capture.start, @@ -93,9 +95,9 @@ __igt_reset_stolen(struct intel_gt *gt, ggtt->error_capture.start, PAGE_SIZE); - if (!__drm_mm_interval_first(&gt->i915->mm.stolen, - page << PAGE_SHIFT, - ((page + 1) << PAGE_SHIFT) - 1)) + busy = intel_region_ttm_range_busy(gt->i915->mm.stolen_region, + PFN_PHYS(page), PAGE_SIZE); + if (!busy) memset_io(s, STACK_MAGIC, PAGE_SIZE); in = (void __force *)s; @@ -124,6 +126,7 @@ __igt_reset_stolen(struct intel_gt *gt, void __iomem *s; void *in; u32 x; + bool busy; ggtt->vm.insert_page(&ggtt->vm, dma, ggtt->error_capture.start, @@ -139,10 +142,9 @@ __igt_reset_stolen(struct intel_gt *gt, in = tmp; x = crc32_le(0, in, PAGE_SIZE); - if (x != crc[page] && - !__drm_mm_interval_first(&gt->i915->mm.stolen, - page << PAGE_SHIFT, - ((page + 1) << PAGE_SHIFT) - 1)) { + busy = intel_region_ttm_range_busy(gt->i915->mm.stolen_region, + PFN_PHYS(page), PAGE_SIZE); + if (x != crc[page] && !busy) { pr_debug("unused stolen page %pa modified by GPU reset\n", &page); if (count++ == 0) diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 94e5c29d2ee3..e538b8f71dcc 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -32,6 +32,7 @@ #include +#include "gem/i915_gem_region.h" #include "gem/i915_gem_context.h" #include "gt/intel_gt.h" #include "gt/intel_gt_buffer_pool.h" @@ -157,6 +158,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) struct drm_i915_private *dev_priv = to_i915(obj->base.dev); struct i915_vma *vma; int pin_count = 0; + u64 offset; seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", &obj->base, @@ -241,8 +243,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) spin_unlock(&obj->vma.lock); seq_printf(m, " (pinned x %d)", pin_count); - if (i915_gem_object_is_stolen(obj)) - seq_printf(m, " (stolen: %08llx)", obj->stolen->start); + 
offset = i915_gem_object_stolen_offset(obj); + if (offset != I915_BO_INVALID_OFFSET) + seq_printf(m, " (stolen: %08llx)", offset); if (i915_gem_object_is_framebuffer(obj)) seq_printf(m, " (fb)"); } diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index eba94fa76b18..1ebb266e7b63 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -224,11 +224,6 @@ struct i915_gem_mm { * support stolen. */ struct intel_memory_region *stolen_region; - /** Memory allocator for GTT stolen memory */ - struct drm_mm stolen; - /** Protects the usage of the GTT stolen memory allocator. This is - * always the inner lock when overlapping with struct_mutex. */ - struct mutex stolen_lock; /* Protects bound_list/unbound_list and #drm_i915_gem_object.mm.link */ spinlock_t obj_lock; diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c index 694e9acb69e2..b3850188981b 100644 --- a/drivers/gpu/drm/i915/intel_region_ttm.c +++ b/drivers/gpu/drm/i915/intel_region_ttm.c @@ -194,11 +194,12 @@ intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem, } } -#ifdef CONFIG_DRM_I915_SELFTEST /** * intel_region_ttm_resource_alloc - Allocate memory resources from a region * @mem: The memory region, * @size: The requested size in bytes + * @start: start of allowed range + * @end: end of allowed range * @flags: Allocation flags * * This functionality is provided only for callers that need to allocate @@ -212,8 +213,9 @@ intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem, */ struct ttm_resource * intel_region_ttm_resource_alloc(struct intel_memory_region *mem, - resource_size_t offset, resource_size_t size, + u64 start, + u64 end, unsigned int flags) { struct ttm_resource_manager *man = mem->region_private; @@ -222,11 +224,14 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, struct ttm_resource *res; int ret; + if (!man) + return ERR_PTR(-ENODEV); + if (flags & I915_BO_ALLOC_CONTIGUOUS) place.flags |= TTM_PL_FLAG_CONTIGUOUS; - if (offset != I915_BO_INVALID_OFFSET) { - place.fpfn = offset >> PAGE_SHIFT; - place.lpfn = place.fpfn + (size >> PAGE_SHIFT); + if (start || end) { + place.fpfn = PFN_DOWN(start); + place.lpfn = PFN_UP(end); } else if (mem->io_size && mem->io_size < mem->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place.flags |= TTM_PL_FLAG_TOPDOWN; @@ -247,8 +252,6 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, return ret ? ERR_PTR(ret) : res; } -#endif - /** * intel_region_ttm_resource_free - Free a resource allocated from a resource manager * @mem: The region the resource was allocated from. @@ -260,9 +263,34 @@ void intel_region_ttm_resource_free(struct intel_memory_region *mem, struct ttm_resource_manager *man = mem->region_private; struct ttm_buffer_object mock_bo = {}; + if (!man) + return; + mock_bo.base.size = res->num_pages << PAGE_SHIFT; mock_bo.bdev = &mem->i915->bdev; res->bo = &mock_bo; man->func->free(man, res); } + +/** + * intel_region_ttm_range_busy - check whether range has any allocations + * @mem: The region to check + * @start: the start of the range to check + * @size: size of the range to check + * + * Return: true if something is allocated within the range, false otherwise.
+ */ +bool intel_region_ttm_range_busy(struct intel_memory_region *mem, + u64 start, u64 size) +{ + struct ttm_resource *dummy; + + dummy = intel_region_ttm_resource_alloc(mem, size, start, start + size, + I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(dummy)) + return true; + + intel_region_ttm_resource_free(mem, dummy); + return false; +} diff --git a/drivers/gpu/drm/i915/intel_region_ttm.h b/drivers/gpu/drm/i915/intel_region_ttm.h index cf9d86dcf409..1e88472fb2ea 100644 --- a/drivers/gpu/drm/i915/intel_region_ttm.h +++ b/drivers/gpu/drm/i915/intel_region_ttm.h @@ -29,15 +29,17 @@ intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem, void intel_region_ttm_resource_free(struct intel_memory_region *mem, struct ttm_resource *res); +bool intel_region_ttm_range_busy(struct intel_memory_region *mem, + u64 start, u64 size); + int intel_region_to_ttm_type(const struct intel_memory_region *mem); struct ttm_device_funcs *i915_ttm_driver(void); -#ifdef CONFIG_DRM_I915_SELFTEST struct ttm_resource * intel_region_ttm_resource_alloc(struct intel_memory_region *mem, - resource_size_t offset, resource_size_t size, + u64 start, + u64 end, unsigned int flags); -#endif #endif /* _INTEL_REGION_TTM_H_ */ diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c index 670557ce1024..a41d81dc345d 100644 --- a/drivers/gpu/drm/i915/selftests/mock_region.c +++ b/drivers/gpu/drm/i915/selftests/mock_region.c @@ -24,10 +24,20 @@ static int mock_region_get_pages(struct drm_i915_gem_object *obj) { struct sg_table *pages; int err; + u64 start, end; + + if (obj->bo_offset == I915_BO_INVALID_OFFSET) { + start = 0; + end = 0; + } else { + start = obj->bo_offset; + end = obj->bo_offset + obj->base.size; + } obj->mm.res = intel_region_ttm_resource_alloc(obj->mm.region, - obj->bo_offset, obj->base.size, + start, + end, obj->flags); if (IS_ERR(obj->mm.res)) return PTR_ERR(obj->mm.res);
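---

For reviewers, an illustrative sketch (not part of the patch) of how a caller is expected to drive the new stolen reservation helpers. example_reserve() is a hypothetical function and SZ_4M an arbitrary size; the helpers themselves are the ones introduced in i915_gem_stolen.c above.

static int example_reserve(struct drm_i915_private *i915)
{
	struct ttm_resource *res;

	/* Ask for 4 MiB anywhere in stolen (start == 0, end == U64_MAX). */
	res = i915_gem_stolen_reserve_range(i915, SZ_4M, 0, U64_MAX);
	if (IS_ERR(res))
		return PTR_ERR(res);

	drm_dbg(&i915->drm, "reserved %llu bytes at stolen offset %llu\n",
		i915_gem_stolen_reserve_size(res),
		i915_gem_stolen_reserve_offset(res));

	/* Hand the range back to the TTM range manager when done. */
	i915_gem_stolen_release_range(i915, res);
	return 0;
}

A reservation made this way keeps the stolen TTM range manager from handing the span out to objects until i915_gem_stolen_release_range() returns it; the intel_fbc compressed framebuffer handling above follows the same pattern.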