From: Robert Beckett
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
    David Airlie, Daniel Vetter
Cc: Thomas Hellström, kernel@collabora.com, Matthew Auld,
    linux-kernel@vger.kernel.org
Date: Wed, 13 Jul 2022 14:38:11 +0100
Message-Id: <20220713133818.3699604-2-bob.beckett@collabora.com>
In-Reply-To: <20220713133818.3699604-1-bob.beckett@collabora.com>
References: <20220713133818.3699604-1-bob.beckett@collabora.com>
Subject: [Intel-gfx] [PATCH v4 1/8] drm/i915/ttm: dont trample cache_level overrides during ttm move

Various places within the driver override the default chosen cache_level.
Before TTM, these overrides persisted until they were explicitly changed
again, or for the lifetime of the buffer. The TTM move code, however,
re-derives the cache_level on every move, which usually happens well after
object creation, and in doing so reverts any earlier override back to the
default.

Add a flag that records whether the caching mode has been set by anything
other than the move logic. If it has, assume that the code that overrode
the default knows best and keep its choice.
Signed-off-by: Robert Beckett
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c       | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c          | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c     | 9 ++++++---
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index ccec4055fde3..966ac2d778d5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -125,6 +125,7 @@ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj,
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
 	obj->cache_level = cache_level;
+	obj->ttm.cache_level_override = true;
 
 	if (cache_level != I915_CACHE_NONE)
 		obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ |
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 5cf36a130061..14937cf1daaa 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -623,6 +623,7 @@ struct drm_i915_gem_object {
 		struct i915_gem_object_page_iter get_io_page;
 		struct drm_i915_gem_object *backup;
 		bool created:1;
+		bool cache_level_override:1;
 	} ttm;
 
 	/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 053b0022ddd0..b6c3fc25d9d1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1253,6 +1253,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	i915_gem_object_init_memory_region(obj, mem);
 	i915_ttm_adjust_domains_after_move(obj);
 	i915_ttm_adjust_gem_after_move(obj);
+	obj->ttm.cache_level_override = false;
 	i915_gem_object_unlock(obj);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 9a7e50534b84..042c2237e287 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -129,9 +129,12 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
 	obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ?
 		I915_BO_FLAG_IOMEM : I915_BO_FLAG_STRUCT_PAGE;
 
-	cache_level = i915_ttm_cache_level(to_i915(bo->base.dev), bo->resource,
-					   bo->ttm);
-	i915_gem_object_set_cache_coherency(obj, cache_level);
+	if (!obj->ttm.cache_level_override) {
+		cache_level = i915_ttm_cache_level(to_i915(bo->base.dev),
+						   bo->resource, bo->ttm);
+		i915_gem_object_set_cache_coherency(obj, cache_level);
+		obj->ttm.cache_level_override = false;
+	}
 }
 
 /**
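
[editor's note] For readers less familiar with the i915/TTM interaction, the following is a
condensed, self-contained sketch of the override-tracking pattern the patch introduces. It is
not the driver code itself; the types and function names below are simplified stand-ins for
illustration only.

	#include <stdbool.h>

	/* Hypothetical, simplified stand-ins for the driver structures. */
	enum cache_level { CACHE_NONE, CACHE_LLC, CACHE_WT };

	struct gem_object {
		enum cache_level cache_level;
		bool cache_level_override;	/* true once a caller picks a level explicitly */
	};

	/* Any explicit caller-driven change records that an override happened. */
	static void set_cache_coherency(struct gem_object *obj, enum cache_level level)
	{
		obj->cache_level = level;
		obj->cache_level_override = true;
	}

	/* The move path re-derives the level only if nobody overrode it. */
	static void adjust_after_move(struct gem_object *obj, enum cache_level derived)
	{
		if (!obj->cache_level_override) {
			set_cache_coherency(obj, derived);
			/* The move path's own choice must not count as an override. */
			obj->cache_level_override = false;
		}
	}

The key design point, mirrored in the real patch, is the flag reset inside the move path:
because the move path reuses the same setter as explicit callers, it clears the flag again
afterwards so that only genuine caller-driven overrides survive subsequent moves.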